Wednesday, December 30, 2020

Active Directory and LDAP

Active Directory and LDAP

A directory service provides a hierarchical structure that allows objects to be stored for quick and easy access and retrieved later. Active Directory (AD) is Microsoft's proprietary directory service. It runs on Windows Server and allows administrators to manage permissions and access to network resources and other devices.

Active Directory stores data as objects. An object is a single element, such as a user, group, application or device. It categorizes directory objects by name and attributes. 

Active Directory provides its directory service using Kerberos authentication and Single Sign-On (SSO) technology. Kerberos is a protocol that provides a mechanism for authentication between a client and a server, or between one server and another.

AD provides a way to organize many users into logical groups and subgroups and helps in providing access control at each respective level.

The AD structure is made up of three tiers: domains, trees, and forests. Objects created in AD are grouped into domains. A tree is a collection of one or more domains, and a forest is a collection of trees that share a directory schema, logical structure, and directory configuration.

AD offers many services, such as Domain Services, Certificate Services, Lightweight Directory (LDAP) Services, Federation Services, and Rights Management.

AD Domain Services stores data centrally, manages communication between hosts and domains, and handles login authentication.

Similarly, the Rights Management Service (RMS) protects sensitive information through encryption and access authentication, limiting access to emails, Office documents, and web pages.

LDAP

Lightweight Directory Access Protocol

LDAP is a client-server protocol that runs over TCP/IP. It is used to access directory services, such as Microsoft's Active Directory.

As a directory database, an LDAP server centrally stores all the user credentials. Other applications and services connect to the LDAP server to validate users in the background.
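
To make this concrete, here is a minimal sketch (assuming the third-party ldap3 library is installed) of how an application might validate a user against an LDAP/Active Directory server in the background. The server address, domain suffix, and account names are illustrative assumptions, not values from this post.

    # Minimal sketch: validate a user's credentials with a simple LDAP bind.
    from ldap3 import Server, Connection, ALL

    server = Server("ldap://dc01.example.local", get_info=ALL)   # hypothetical domain controller

    def authenticate(username, password):
        """Return True if the bind succeeds, i.e. the credentials are valid."""
        user_dn = f"{username}@example.local"                    # AD accepts UPN-style logins
        conn = Connection(server, user=user_dn, password=password)
        ok = conn.bind()
        conn.unbind()
        return ok

    if authenticate("jdoe", "S3cretPassw0rd"):
        print("User validated against the directory")

A successful bind is all most applications need; the directory itself remains the single place where passwords are checked.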

Advantages of LDAP

  • It supports Kerberos authentication, Simple Authentication and Security Layer (SASL), and Secure Sockets Layer (SSL).
  • LDAP relies on the TCP/IP stack rather than the OSI stack.

-DR

Basics of SIEM

Basics of SIEM

SIEM, or Security Information and Event Management, is a system widely used in security incident response and management.

Generally, a SIEM integrates with multiple devices and collects events or logs from them for analysis, in order to detect threats.

Nowadays it is widely used at the Security Operations Centre (SOC) for security threat monitoring, as it enables a faster response to threats.

A SIEM device pulls logs from devices such as:

  • Routers
  • Switches
  • Servers
  • Web filters
  • Firewalls
  • Unified Threat Management (UTM)
  • End point Security 
  • Intrusion Prevention System (IPS)
  • Intrusion Detection System (IDS)

The capacity of a modern SIEM is defined by its EPS (events per second) analysis capacity; planning for a minimum of 10,000 EPS or more can be a sound implementation strategy for larger environments. Nowadays a SIEM has become a critical component for organizations ranging from large corporates to small and medium industries.

A SIEM looks at both event data and contextual data from device logs for analysis and reporting. In a systematic approach, it normalizes, aggregates, correlates, and analyses the logs received from each device. Based on the results, the SIEM analyst or security analyst can respond effectively to security incidents, and the tool helps in tracking and reporting security compliance efficiently. The primary feature of a SIEM is threat detection, investigation, and response.
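
To make the normalize-aggregate-correlate idea concrete, here is a minimal Python sketch assuming simple syslog-like text lines. The field names and the failed-login threshold are illustrative assumptions, not any vendor's implementation.

    import re
    from collections import Counter

    LOG_PATTERN = re.compile(r"(?P<host>\S+) (?P<event>FAILED_LOGIN|LOGIN_OK) user=(?P<user>\S+)")

    def normalize(raw_line):
        """Turn a raw log line into a common event dictionary (normalization)."""
        m = LOG_PATTERN.search(raw_line)
        return m.groupdict() if m else None

    def correlate(events, threshold=5):
        """Flag users with repeated failed logins across devices (a simple correlation rule)."""
        failures = Counter(e["user"] for e in events if e and e["event"] == "FAILED_LOGIN")
        return [user for user, count in failures.items() if count >= threshold]

    raw_logs = [
        "fw01 FAILED_LOGIN user=admin",
        "srv02 FAILED_LOGIN user=admin",
    ] * 3
    events = [normalize(line) for line in raw_logs]
    print("Possible brute-force accounts:", correlate(events))

A real SIEM applies thousands of such rules, enriches events with contextual data, and raises alerts for the analyst.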

Other features and functionality include:

  • Advanced analytics
  • Automation
  • Policy management
  • Threat management
  • Incident prioritization and management
  • Normalization
  • Basic security monitoring
  • Advanced threat detection
  • Forensics & incident response
  • Threat response workflow
  • Log collection
  • Notifications and alerts 
  • Security incident detection
  • Real time threat detection
  • Scalable and centralized solution

SIEM products in the market include Splunk, IBM QRadar, LogRhythm, HP ArcSight, etc.

-DR

Tuesday, December 29, 2020

Basics of Data Loss Prevention

Basics of Data Loss Prevention (DLP)

Data loss prevention (DLP) is a combination of tools and processes used to safeguard sensitive data against being lost, misused, or accessed by unauthorized users. A DLP tool categorizes confidential and business-critical data and identifies violations of policies pre-defined by the organization.

If violations are identified, DLP enforces remediation with alerts, encryption, and other protective actions to prevent end users from sharing data.

Data loss prevention solutions monitor and control endpoint events, filter data streams on the network, and monitor data to protect it at rest, in motion, and in use.

Some DLP solutions use strong encryption, access control and user behavior analysis.
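
As a rough illustration of the content-inspection step, here is a minimal sketch that scans text for patterns that look like sensitive data. The patterns and the "block" action are illustrative assumptions; commercial DLP products use far richer classifiers and data fingerprinting.

    import re

    POLICIES = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def scan(document_text):
        """Return a list of (policy, match) violations found in the text."""
        violations = []
        for policy, pattern in POLICIES.items():
            for match in pattern.findall(document_text):
                violations.append((policy, match))
        return violations

    sample = "Invoice for jane.doe@example.com, card 4111 1111 1111 1111"
    for policy, match in scan(sample):
        print(f"ALERT: {policy} detected -> blocking share of '{match}'")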

How to develop a DLP Policy

  • The key factor lies in the organization developing a strong DLP policy itself. Organizations should be ready to adopt and implement compliance requirements such as GDPR, HIPAA, PCI-DSS, etc.
  • An organization should know which data is critical and which is not.
  • Thus, it needs to identify and classify data as per its criticality and prevent its misuse.
  • In many cases it has been noticed that data loss was caused by insiders accidentally or unknowingly.
  • Define user access and privileged access, and allocate the roles and responsibilities of the users.
  • Spread awareness and educate users and other stakeholders on how to safeguard organizational data and stay compliant.
  • Never save personal data on business systems.
  • Involve management leaders or CISO (Chief Information Security Officer) in the DLP policy strategy.

Protecting data protects the organization's reputation, Personally Identifiable Information (PII), and Intellectual Property (IP).

Many OEMs presently offer DLP solutions, such as Symantec, Trend Micro, McAfee, Check Point, Forcepoint, Code42, SecureTrust, etc.

 -DR


Monday, December 28, 2020

What is spine and leaf architecture

Spine and leaf architecture

A spine-and-leaf architecture is built by combining leaf switches and spine switches. In this architecture, every leaf switch connects to every spine switch in the network fabric. A leaf-spine topology can be layer 2 or layer 3.

Basically, it is useful for data centres that require more east-west network traffic than north-south traffic. The leaf switches connect to servers and storage devices, and the spine switches connect to the leaf switches. Leaf switches mesh into the spine, forming the access layer that delivers network connectivity for servers. Because each leaf switch connects to every spine switch in the network fabric, latency in the network is minimized. In addition, by providing many paths between two network points, all of which can carry traffic, the design reduces the possibility of congestion in a large network.
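
To get a feel for the scale of such a fabric, here is a small back-of-the-envelope sketch in Python. The port counts and speeds are illustrative assumptions, not a recommendation for any particular product.

    def fabric_summary(leaves, spines, uplinks_per_leaf, server_ports_per_leaf,
                       uplink_gbps=100, server_gbps=25):
        fabric_links = leaves * spines          # full mesh: every leaf connects to every spine
        uplink_bw = uplinks_per_leaf * uplink_gbps
        downlink_bw = server_ports_per_leaf * server_gbps
        oversubscription = downlink_bw / uplink_bw
        return fabric_links, round(oversubscription, 2)

    links, ratio = fabric_summary(leaves=8, spines=4, uplinks_per_leaf=4, server_ports_per_leaf=48)
    print(f"Leaf-to-spine links: {links}, oversubscription ratio: {ratio}:1")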

The key advantages of using leaf-spine technology are as below:

  • Lower latency in the network
  • Reduced congestion
  • Improved scalability and flexibility

Spine Switch

A spine switch forms the backbone of the architecture and follows the Clos design, a multistage circuit-switching network. Nowadays, a data centre spine switch comes with virtualization capability, which enables multiple logical devices and overlay connectivity such as VXLAN. Spine switches support DCBX and PFC features on all of their ports.

In layer 2 switching, it supports LACP, STP, jumbo frames, IGMP, MLD, and LLDP. In layer 3 routing, it supports static routing, RIP, OSPF, BGP, IPv4, IPv6, policy-based routing, ECMP, and DHCP. A leaf switch supports a similar set of protocols in operation.

Spine switch architecture and features:
  • Shall have dedicated management module slots in addition to the interface modules.
  • Shall be designed as a fully distributed architecture with separation of data and control planes to deliver enhanced fault tolerance with zero service disruption during planned or unplanned events.
  • Shall have a routing/switching capacity of minimum 30 Tbps or more and wire-speed, non-blocking forwarding performance.
  • Shall have the capability to extend the control plane across multiple active switches, making it a virtual switching fabric or equivalent feature, enabling interconnected switches to aggregate their links.
  • Shall have Access Control Lists for both IPv4 and IPv6 for filtering traffic to prevent unauthorized users from accessing the network.
  • Shall support port-based rate limiting and access control list (ACL) based rate limiting.
  • Shall support Weighted Random Early Detection (WRED) / Random Early Detection (RED) for congestion avoidance.
  • Shall have ports to support both 40G/100G and 100G QSFP+ equally distributed across interface modules.
  • Shall support port mirroring to duplicate port traffic (ingress and egress) to a local or remote monitoring port, a management feature needed at spine switches.
  • Shall support ARP attack protection to protect against attacks that use a large number of ARP requests, as well as packet storm protection to protect against unknown broadcast, unknown multicast, or unicast storms with user-defined thresholds.

-DR

Sunday, December 27, 2020

D2D Data Protector basics

D2D Data Protector Solutions 

D2D stands for disk-to-disk storage, a backup method in which a hard disk is backed up to another hard disk rather than to a tape.

D2D devices are random-access storage; they are not limited to a sequential or linear storage model, and they can send and receive multiple concurrent data streams.

HP's StoreOnce products provide D2D solutions, offering disk-based data protection for data centres and remote offices. The performance of a D2D solution depends on the data set type, compression levels, number of data streams, number of devices connected, and number of concurrent tasks.

It comes with a data de-duplication feature. Data de-duplication is a method of reducing storage needs by minimizing redundant data, so that over time only one unique instance of the data is actually retained on disk. This helps in keeping data for a longer period of time. De-duplication works by examining the data stream when it reaches the storage appliance, checking for blocks of data that are identical, and eliminating redundant copies. If duplicate data is found, a pointer is established to the original set of data, as opposed to actually storing the duplicate blocks, thereby removing or "de-duplicating" the redundant blocks from the volume.
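
The idea is easy to see in a minimal sketch that assumes fixed-size blocks and a SHA-256 fingerprint per block; commercial appliances such as StoreOnce use variable-length chunking and far more sophisticated indexes.

    import hashlib

    BLOCK_SIZE = 4096
    store = {}          # fingerprint -> unique block data
    pointers = []       # the "backup" is just a list of fingerprints

    def ingest(stream: bytes):
        for offset in range(0, len(stream), BLOCK_SIZE):
            block = stream[offset:offset + BLOCK_SIZE]
            fingerprint = hashlib.sha256(block).hexdigest()
            if fingerprint not in store:      # store only the first copy of each block
                store[fingerprint] = block
            pointers.append(fingerprint)      # duplicates become pointers, not copies

    data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE   # three identical blocks + one unique block
    ingest(data)
    print(f"Blocks written: {len(pointers)}, unique blocks stored: {len(store)}")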

There are some further benefits, such as:

  • More efficient use of disk space, which reduces the cost per gigabyte of storage.
  • Fast and easy file restores from multiple available recovery points.
  • By extending data retention periods on disk, the backup data is accessible for longer periods of time.
  • Improved throughput, since multiple backups can run in parallel.
  • Data replication provides efficient disaster recovery.
  • D2D Backup Systems are capable of supporting both VTL and NAS targets for backup applications on a single platform.
  • Daily incremental backups may no longer need to be stored on tape.
  • The D2D Backup Systems work with your backup application to help automate and improve the backup process while reducing the time spent in managing the protection of data.
  • Provides effective capacity upgrade and scalability.
  • Multiple OS support and network compatibility, including 1Gb or 10Gb Ethernet connectivity.
  • Compatibility with SANs and Fibre Channel switches.
  • Systems offer hardware-based RAID 5 or RAID 6 to reduce the risk of data loss due to disk failure.

This is just basic knowledge about the solution. Please refer to the OEMs/manufacturers for more details.

NB: This blog post does not include any paid promotion.

-DR


Thursday, December 24, 2020

Basics of DCBX

What is DCBX

DCBX stands for Data Centre Bridging Capability Exchange. It is an Ethernet extension used to improve networking and management in data centres. It discovers peers and exchanges configuration information between DCB-compliant bridges.

DCBX works with LLDP to permit switches to exchange information about their data centre bridging (DCB) capabilities and configuration, and to automatically agree on common settings such as Priority-based Flow Control.

Data is exchanged in type-length-value (TLV) format, as in LLDP. For DCBX to function on an interface, LLDP must be enabled on that interface.

DCBX permits automatic exchange of Ethernet parameters between a switch and other endpoint devices.

Its prime feature is priority-based flow control (PFC), which provides the capability to manage a single traffic source (for example, bursty traffic) on a multiprotocol link.

It helps in identifying congestion in a network, logical link down conditions, mismatched configurations, etc.

-DR

What is LLDP

LLDP

Link Layer Discovery Protocol

The Link Layer Discovery Protocol (LLDP) is a vendor-neutral link layer protocol used by network devices to advertise their identity and capabilities on a LAN. It is based on IEEE 802 technology, principally wired Ethernet, and is similar to the Cisco Discovery Protocol (CDP).

Information collected with the LLDP protocol is stored in the device's management information base (MIB) and can be queried with the Simple Network Management Protocol (SNMP), which makes it useful for network management and monitoring applications.

LLDP information is sent in frames that contain an LLDP Data Unit (LLDPDU). Each LLDPDU is a sequence of type-length-value (TLV) structures, and each LLDP frame starts with the Chassis ID, Port ID, and TTL TLVs.
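
As an illustration, here is a minimal sketch of decoding those TLV headers. Only the 2-byte type/length header defined by IEEE 802.1AB is parsed, and the sample frame bytes are fabricated for the example.

    def parse_lldpdu(payload: bytes):
        tlvs, offset = [], 0
        while offset + 2 <= len(payload):
            header = int.from_bytes(payload[offset:offset + 2], "big")
            tlv_type = header >> 9           # upper 7 bits: TLV type
            tlv_length = header & 0x1FF      # lower 9 bits: value length
            value = payload[offset + 2:offset + 2 + tlv_length]
            if tlv_type == 0:                # End of LLDPDU TLV
                break
            tlvs.append((tlv_type, value))
            offset += 2 + tlv_length
        return tlvs

    # Chassis ID (type 1), Port ID (type 2), TTL (type 3), End of LLDPDU (type 0)
    sample = bytes([0x02, 0x07, 0x04, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF,
                    0x04, 0x03, 0x05, ord("e"), ord("1"),
                    0x06, 0x02, 0x00, 0x78,
                    0x00, 0x00])
    for tlv_type, value in parse_lldpdu(sample):
        print(tlv_type, value.hex())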

Benefits of LLDP

  • It simplifies the use of NMS tools in a multi-vendor environment.
  • Helps in accurate discovery of the physical network topology, which in turn simplifies troubleshooting of network issues.
  • Advertises device capabilities and supports the optional system name, system description, and management address TLVs.
  • Helps in discovering devices that are not yet configured. On many platforms LLDP is disabled by default and can be activated (for example, with the lldp run command on Cisco devices).

-DR

What is NVGRE

NVGRE

Network Virtualization using Generic Routing Encapsulation.

Network virtualization is a software-defined networking approach that consolidates hardware and software networks into a single virtual network.

NVGRE is an advanced network virtualization method that uses encapsulation and tunneling to provide subnets with large numbers of virtual LANs or VLANs. 

In NVGRE, the virtual machine's packet is encapsulated inside another packet. The header of this new, NVGRE-formatted packet carries the appropriate source and destination provider-address IP addresses, along with a 24-bit Virtual Subnet ID (VSID), which is stored in the GRE header of the new packet.
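
As an illustration, here is a minimal sketch of building just that GRE header as described in RFC 7637: the Key Present bit is set, the protocol type is Transparent Ethernet Bridging (0x6558), and the 32-bit key carries the 24-bit VSID plus an 8-bit FlowID. The outer Ethernet/IP headers and the encapsulated VM frame are omitted, and the VSID value is an arbitrary example.

    import struct

    def nvgre_gre_header(vsid: int, flow_id: int = 0) -> bytes:
        flags = 0x2000                      # Key Present bit set, version 0
        proto = 0x6558                      # Transparent Ethernet Bridging
        key = (vsid << 8) | flow_id         # VSID in the upper 24 bits, FlowID in the lower 8
        return struct.pack("!HHI", flags, proto, key)

    header = nvgre_gre_header(vsid=5001, flow_id=7)
    print(header.hex())                     # 20006558 followed by the 4-byte key field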

NVGRE can cover both layer 2 (data link layer) and layer 3 (network layer), so providing virtual subnets in place of VLANs can enable multi-tenant and load-balanced networks to be shared across on-premises and cloud environments.

It aims to solve the problems caused by a limited number of VLANs which fail to work in complex virtualizations. 

NVGRE is similar to VXLAN (discussed in an earlier post); however, NVGRE is primarily backed by Microsoft whereas VXLAN was introduced by Cisco. To distinguish them from traditional VLANs, which are limited to 4096 segments, both technologies can create up to 16 million virtual networks.

-DR

Wednesday, December 23, 2020

Basics of NAT

Basics of NAT or Network Address Translation

Network address translation is a technique for remapping one IP address space into another by modifying the network address information in the IP header of packets while they are in transit.

It permits private IP networks that use unregistered IP addresses to connect to the Internet.

It allows a single device, such as a router, to act as an agent between the Internet and a local or private network, which means that only a single unique IP address is required to represent an entire group of computers to anything outside their network.

The main purpose of NAT is to limit the number of public IP addresses an organization must use, for both economic and security reasons. NAT can also allow a host to connect to a TCP/IP network using a Token Ring adapter on the host machine.

NAT can be used to allow selective, limited access from outside the network. Computers requiring special access from outside the network can be assigned specific external IPs using NAT, allowing them to communicate with other computers and applications that require a specific public IP address.

Types of NAT

Static NAT: A local IP address is mapped to a public IP address and the mapping always remains the same; this is called static NAT.

Dynamic NAT: Instead of always using the same public IP address, the NAT device chooses from a pool of public IP addresses, so the device may get a different public IP address each time.

Further, NAT can be implemented in a network using the following types:

  • Full cone NAT
  • Restricted cone NAT
  • Port restricted cone NAT
  • Symmetric NAT

Organizations can also use a NAT gateway, a managed NAT service that provides better availability and higher bandwidth and requires less administrative effort.

Overloading - A special case of dynamic NAT that maps multiple unregistered IP addresses to a single registered (globally unique) IP address by using different port numbers. Dynamic NAT with overloading is also known as PAT (Port Address Translation).
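
The translation table a router keeps for PAT is easy to picture with a minimal sketch; the addresses and port numbers below are illustrative assumptions.

    import itertools

    PUBLIC_IP = "203.0.113.10"
    next_port = itertools.count(40000)          # pool of public source ports
    nat_table = {}                              # (private_ip, private_port) -> public_port

    def translate_outbound(private_ip, private_port):
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next(next_port)    # allocate a fresh public port for a new flow
        return PUBLIC_IP, nat_table[key]

    for host in ("192.168.1.10", "192.168.1.11"):
        print(host, "->", translate_outbound(host, 51000))

Both inside hosts appear to the outside world as 203.0.113.10, distinguished only by the translated source port.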

Overlapping - This occurs when your internal IP addresses belong to a global IP address range that belongs to another network. In such cases, the internal IP addresses need to be hidden from the outside network to prevent duplication. NAT overlapping allows the use of internal global addresses by mapping them to globally unique IP addresses using static or dynamic NAT.

Advantages

  • NAT allows several hosts to be connected to the Internet using fewer globally unique IP addresses. This in turn helps conserve the scarce public IP address space. The terms public/global are used in the sense that these IP addresses are globally unique and officially registered.
  • NAT supports load sharing across inside machines. The inside machines are accessed in a round-robin fashion, thus sharing the load.
  • NAT offers some degree of security since internal IP addresses are not easily traceable. This is because the actual host IP that accesses the Internet is translated into an outside IP address and vice versa. Thus, NAT offers some protection against outside attacks.

Disadvantages

One disadvantage of NAT is that it increases delay, which is expected since address translation is involved. Another disadvantage is that when an application embeds the actual host IP address in its payload, it may not function properly, because that address is changed by NAT.



-DR


Thursday, December 17, 2020

Basics of SDDC

Software Defined Data Centre (SDDC)

A software-defined data centre uses virtualization technologies to abstract hardware into virtual machines and other virtual resources. The goal is to centrally control all the components of a data centre.

By virtualizing a DC, all the resources of the system, such as compute, network, and storage, can be represented in software form. Automation and orchestration also come into play in SDDC technology.

Different software platforms, both open and proprietary, can be used to virtualize computing resources.

Components of SDDC

  • Software Defined Networking
  • Software Defined Storage
  • Virtual Machines

These resources can be offered as Infrastructure as a Service (IaaS).

Stakeholders and customers do not have to build the infrastructure themselves, which reduces cost, and centralized operations can serve many customers from a single hub. It is essentially a new enterprise-level computing model.


-DR

Green Data Centre

Green Data Centre

Just a conceptual overview

The key issue for a modern data centre (DC) is the amount of power drawn for its operation and to run its facilities. In recent years, the cost of power for the DC has exceeded the cost of the original capital investment.

For higher power-density facilities, electricity costs account for over 10% of the total cost of ownership (TCO) of a data centre. In addition, data centres use diesel generators for backup power and draw from energy sources that depend heavily on coal to produce electricity, contributing to the environmental impact of running large, power-hungry facilities. Information and communications technologies contribute 2% of global carbon emissions, with data centres accounting for 13.8% of that share.

At this rate, greenhouse gas (GHG) emissions from data centres are projected to grow to more than 1.5 times their current level by 2020.

The reasons behind building green data centres can be summarized as below:
  • Energy efficiency = lower energy costs
  • Greater return on investment = more investment capital
  • Less use of resources = lower environmental impact


Innovations

Optimized Airflow Assessment for Cabling:
Replace cabling systems with high-performance fiber transport systems.
Yields improved cooling and reduced energy usage across DC.

Scalable Modular Data Centre
Rapid deployment in 8-12 weeks
Provides ready racking, power, cooling, security and monitoring.
500 and 1,000 square foot DC options at 15% lower cost than a traditional DC.

Thermal Analysis for High Density Computing
Clients can identify and resolve existing and potential heat-related issues.
Helps avoid outages and provide options for power savings and expansion.

-DR




Wednesday, December 16, 2020

Basics of MACsec

Basics of MACsec

Media Access Control Security

Media Access Control Security (MACsec) offers point-to-point security on Ethernet links. It is defined by the IEEE 802.1AE standard and works as a layer-2 encryption technology. It can be used in combination with other security protocols, such as IP Security (IPsec) and Secure Sockets Layer (SSL), to provide end-to-end network security.

It is used in Ethernet networks on WAN routers, LAN switches, data centre routers and switches, and servers, covering router-to-switch, switch-to-switch, server-to-switch, and end-device connections.

For end-to-end security, data needs to be secured when at rest (stored in a device) and when in motion (communicated between connected devices). 

After MACsec is configured and enabled, security is added at the communication layer for data in motion or transit: a bidirectional secure communication link is established, combining data integrity checks with encryption.

MACsec can identify and prevent most link-layer security threats, such as denial of service (DoS), intrusion, and man-in-the-middle attacks.

MACsec have three security modes:
  • Static connectivity association key (S-CAK)
  • Static secure association key (S-SAK)
  • Dynamic secure association key (D-SAK)
Benefits of using MACsec:
  • Device to device security
  • Confidentiality
  • Data origin authenticity
  • Data integrity
  • Replay protection
  • Deployment flexibility
-DR

Tuesday, December 15, 2020

Basics of SAN, NAS and Storage

Basics of SAN (Storage Area Network)

A Storage Area Network (SAN) is a computer network that provides network access to storage. The storage is presented as block-level data storage. A SAN has a high-speed architecture and connects servers to LUNs (logical unit numbers). SANs are resilient and help remove single points of failure.

SANs are typically composed of hosts, switches, and storage devices connected to each other using a variety of technologies and protocols. SANs are commonly based on Fibre Channel (FC) technology that utilizes the Fibre Channel Protocol (FCP) for open systems. Several disks and tape libraries can be interconnected in a SAN.

There are several types of interfaces used for SAN:

  • FC (Fibre Channel) 
  • iSCSI (Internet Small Computer Systems Interface)
  • FCoE (Fibre channel over ethernet)
  • FC-NVMe (Non-Volatile Memory Express over Fibre Channel)

NAS

Similarly, NAS (Network Attached Storage) manages storage centrally and shares it with the attached servers, but NAS works over Ethernet only and stores data as a file system. NAS supports protocols such as NFS (Network File System) and Common Internet File System / Server Message Block (CIFS/SMB).

SAN and NAS both provide shared, centralized external storage which can be accessed by multiple devices over a network at the same time.

Storage system and controller

A storage system has a combination of motherboard, CPU, DRAM memory, network interface cards, etc. All these components sit in a single frame known as the controller, which you can think of as the brain of the system. The storage controller is also known as the head.

The storage system has a disk subsystem where the disks that store the data reside. The disk subsystem can be kept in the same chassis or outside it as external disk shelves.

Like any system or server, a storage system requires an operating system to run. That OS runs on the controller and manages communication out to clients over the network and down to the disk shelves. These operating systems are proprietary to the manufacturer.

Benefits of using SAN and NAS: 

  • Extra high performance, scalability, and availability; they increase storage utilization and improve data protection and security.
  • Compared to direct-attached storage, they offer improved disk utilization.
  • Devices and applications can be allocated storage as required, and the allocation can be altered later.
  • The external storage offers greater performance and capacity, as data can be striped across many disks in an enterprise-class storage system.
  • This technology offers resiliency, meaning there is no single point of failure for mission-critical data: if any single component fails, another component or backup takes its place immediately.
  • Centralized storage management
  • Storage tiering (storage tiering classifies data into levels of importance and assigns it to the appropriate storage tier for retention; the organization classifies its data accordingly, and older data can be moved to a lower tier or archived while newer data stays on faster tiers; a small illustrative sketch follows below).
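
Here is a minimal sketch of age-based tiering, where files untouched for a long time are flagged for a lower tier or archive. The thresholds and tier names are illustrative assumptions, not any vendor's policy engine.

    import os
    import time

    def suggest_tier(path, hot_days=30, warm_days=180):
        age_days = (time.time() - os.path.getatime(path)) / 86400
        if age_days <= hot_days:
            return "tier-1 (fast SSD)"
        if age_days <= warm_days:
            return "tier-2 (capacity HDD)"
        return "tier-3 (archive)"

    for name in os.listdir("."):                 # classify files in the current folder
        if os.path.isfile(name):
            print(name, "->", suggest_tier(name))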
Let us look at a general difference between SAN and NAS in brief:

  • SAN: block-level storage accessed over Fibre Channel, iSCSI, or FCoE; presents LUNs to servers.
  • NAS: file-level storage accessed over Ethernet; presents file shares using NFS or SMB/CIFS.

There are different types of storage media, such as hard disk drives (HDD), that are available both for direct-attached storage in your servers and for external SAN and NAS storage systems. Another type is the solid-state drive (SSD). SSDs are a newer, faster technology than HDDs because they have no platters or moving parts, which is why they perform so quickly. SSDs are also known as flash in enterprise storage environments, so if someone says they need flash storage, it is nothing but SSD. SSD drives provide the highest performance, and they are correspondingly more expensive relative to their capacity.

Storage Array:

A storage array is a data storage system for block-based, file-based, or object-based storage; it is also called a disk array. Instead of storing data on a single server, a storage array uses multiple drives, a collection of HDDs or SSDs capable of storing a large amount of data, managed by a central management system.

-DR



Basics of Virtualization

Basics of Virtualization

Virtualization allows one set of hardware to host multiple virtual machines. It is in use at most large corporations, and it is also becoming more common at many small organizations and businesses.

It means the creation of virtual resources (memory, storage, processors) rather than dedicating the actual resources like hardware, an OS, etc.

We can use virtualization on our home PC (personal computer). We can run one or more virtual machines on a single system, and virtual machines with different operating systems can run on the same physical machine or server.

In a bare-metal setup, the virtualization software sits directly on top of the hardware. It hosts guest operating systems such as Windows Server or Linux (Red Hat, Ubuntu, CentOS).

A common example of virtualization is partitioning a hard drive to create separate logical drives.

There are different types of virtualization, such as:

  • Desktop Virtualization
  • Application Virtualization
  • Server Virtualization
  • Storage Virtualization
  • Network Virtualization

Hypervisor: 

The application, software, or hardware through which virtualization is done, i.e. through which a virtual machine (VM) can be created, run, and managed, is called a hypervisor. Products such as VMware, Oracle VirtualBox, and Microsoft Hyper-V offer hypervisors.

Host: The machine or physical system that is running the VM is called the host.

Guest OS: Operating systems running in VMs on the host are called guests. Most hypervisors support both 32-bit and 64-bit guests.

There are two types of hypervisors: Type 1 and Type 2.

Type 1 hypervisors run directly on the hardware and are sometimes called bare-metal hypervisors. They do not need another OS to run within.

Type 2 hypervisors run as software within another OS.

Container-based virtualization: containers run services and applications in separate, isolated environments within an OS. Each container is independent of the others to avoid interference, and the machine's OS and kernel run the services or applications inside the containers.

The prime benefit of container-based virtualization is that it uses fewer resources and is more efficient and faster, but a container has to use the host system's OS.

Such platforms automate orchestration, monitoring and management of logs, load balancing, virtual network management, security patch management, and security tools and controls.

Advantages of Virtual Machines (VM)

  • Run operating systems where the physical hardware is unavailable.
  • Easier to create new machines, backup machines, etc.
  • Software testing using “clean” installs of operating systems and software.
  • Emulate more machines than are physically available.
  • Timeshare lightly loaded systems on one host.
  • Debug problems (suspend and resume the problem machine).
  • Easy migration of virtual machines (shutdown needed or not).
  • Run legacy systems.
  • Hardware-independence of operating system and applications.
  • Virtual machines can be provisioned to any system.

VirtualBox

VirtualBox is an Oracle application that is used to run an operating system inside another operating system. It is a powerful x86 and AMD64 / Intel64 virtualization product for enterprise as well as for home use.

This is a good example of virtualization technology, where the software acts as the hypervisor. The OS you run on top of the existing OS is known as the guest operating system, whereas the existing OS is known as the host operating system.

Oracle VirtualBox is free to use. You can download it from the virtualBox.org site; as per your requirement, you can download the build for your Windows, Linux, or Mac system.
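
As a small illustration, here is a minimal sketch, assuming VirtualBox is installed and the VBoxManage command-line tool is on the PATH, of listing registered VMs and starting one without a GUI from Python. The VM name "TestVM" is an illustrative assumption.

    import subprocess

    def list_vms():
        out = subprocess.run(["VBoxManage", "list", "vms"],
                             capture_output=True, text=True, check=True)
        return out.stdout

    def start_vm(name):
        subprocess.run(["VBoxManage", "startvm", name, "--type", "headless"], check=True)

    print(list_vms())
    # start_vm("TestVM")        # uncomment to boot the guest without opening a window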



-DR

Monday, December 14, 2020

Basics of Cloud Computing

Basics of Cloud Computing

Cloud computing is an Internet-based service that provides on-demand access to shared computing resources.

Cloud computing means hosting computing services and data storage in the Internet cloud instead of hosting them locally and managing them directly. It was derived largely from distributed computing. It can be further distinguished into the IaaS, PaaS, and SaaS service models, as per their implementation and roll-out worldwide.

Example of cloud services: 

  • Microsoft OneDrive, Google Drive, Dropbox: used to save images, videos, and documents in the cloud.
  • Office 365: working with Word, Excel, and PowerPoint online.

Many vendors provide cloud services nowadays, such as Amazon Web Services (AWS), Oracle Cloud, Google Cloud, Microsoft Azure, VMware Cloud, etc.

Service Models

IaaS (Infrastructure as a Service):

The Infrastructure as a Service (IaaS) model utilizes virtualization, or virtual infrastructure, and the customer pays the CSP (cloud service provider) for the resources used. Because of this, the IaaS model closely resembles the traditional utility model used by electricity, gas, and water providers.
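
As a hedged illustration of the pay-per-use idea, here is a minimal sketch, assuming the boto3 library is installed and AWS credentials are already configured, of requesting a small virtual machine from an IaaS provider. The AMI ID is a placeholder, and billing starts once the instance is running.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder image ID; replace with a real AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", response["Instances"][0]["InstanceId"])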

PaaS (Platform as a Service):

The Platform as a Service (PaaS) model is also known as cloud platform services. In this model, vendors allow multiple applications to be created and run on their infrastructure for different kinds of services. One example is Amazon Web Services (AWS).

SaaS (Software as a Service):

The Software as a Service (SaaS) model is what users generally think of as cloud computing. In this model, applications are run remotely over the web.

One of the advantages is that no local hardware is required and no software applications need to be installed on your own premises; the cost is calculated as per the subscription. We can say this helps to avoid the upfront cost and complexity of maintaining the infrastructure. Scaling is another benefit of cloud computing: you can use memory, storage, and processors as per your business requirements.

Summary of the benefits of using cloud computing: speed, productivity, performance, reliability, cost, and security.

There are three types of cloud in use: public, private, and hybrid.

A public cloud is operated by a third-party organization. A private cloud is operated by a single business and can be located in its own DC. A hybrid cloud combines features of both public and private clouds.

Similarly, it is divided into deployment models, such as:

  • Private Cloud
  • Public Cloud
  • Hybrid Cloud
Private Cloud model:
  • It has a single-tenant implementation.
  • Owned and operated by the IT organization.
  • The firm has its own data management policies in place.
  • Self-service and automation capability.
Public Cloud
  • It has a multi-tenant implementation.
  • A public cloud is typically owned and operated by a service provider.
Hybrid Cloud

A combination of a private cloud and one or more public clouds is known as a hybrid cloud.

Benefits:

Cloud computing enables companies and applications that depend on system infrastructure to operate without owning that infrastructure.

By using cloud infrastructure on a pay-as-you-use, on-demand basis, organizations can save on capital and operational investment.

Advantages of cloud Computing:

Lower Computer Costs
  • You do not need a high-powered and high-priced computer to run cloud computing's web-based applications. 
  • Since applications run in the cloud, not on the desktop PC, your desktop PC does not need the processing power or hard disk space demanded by traditional desktop software. 
  • When you are using web-based applications, your PC can be less expensive, with a smaller hard disk, less memory, more efficient processor.
  • In fact, your PC in this scenario does not even need a CD or DVD drive, as no software programs have to be loaded and no document files need to be saved.
Improved Performance
  • With fewer large programs hogging your computer's memory, you will see better performance from your PC.
  • Computers in a cloud computing system boot and run faster because they have fewer programs and processes loaded into memory.
Reduced software cost
  • Instead of purchasing expensive software applications, you can get most of what you need for free.
Reliability
  • Modern cloud computing offers minimal or zero downtime in continuous operation to meet the exact SLAs of the business.
Scalability
  • Applications in the cloud can be scaled either vertically or horizontally. In vertical scaling, computing capacity is increased by adding RAM and CPU to the virtual machine (VM); in horizontal scaling, computing capacity is increased by adding more instances, i.e. more VMs.
Disaster Recovery
  • Data hosted in the cloud can be replicated, and a backup can be kept in the cloud to safeguard your data so that it can be accessed in case of a disaster or system crash.
RaaS: Recovery as a Service

RaaS is a newer model, introduced with disaster recovery in mind. In this cloud model, an organization keeps its data and IT infrastructure backups with a third-party cloud computing provider. RaaS solutions allow companies to recover applications and data such as files and databases in case of disaster. These services provide disaster recovery, backup, and business continuity so that an organization can reduce downtime due to a major failure such as a hurricane, earthquake, fire, or flood. Some tech firms also call it Disaster Recovery as a Service (DRaaS). A DRaaS offering mirrors a complete infrastructure in fail-safe mode on virtual servers, including compute, storage, and networking functions. It comes in three models: Managed DRaaS, Assisted DRaaS, and Self-Service DRaaS.


-DR

Monday, December 7, 2020

Basics of TLS/SSL

Basics of TLS/SSL

Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are industry-standard protocols used to protect communication over the Internet. They establish authenticated and encrypted links between networked systems.

TLS and SSL both use public-key and symmetric cryptography to allow end users to be sure of a web server's identity and to keep all communication between the end user and the web server private. These protocols are most commonly used to provide privacy for sensitive information, like passwords or credit card numbers.

The initial communication between an end user and a web server is referred to as the "SSL handshake." In that handshake, the web server sends its certificate to the browser. The browser checks the validity of the certificate and the legitimacy of the web server. After this validation, a secure connection is established between the devices.

Many times we see the notation "https://" in a link or in a browser's address bar, as well as a padlock symbol, which denotes that TLS or SSL is being used. For example, if we have an HTTP website and we want to allow HTTPS secured access to it, we need a PKI certificate, and we can then configure SSL or TLS.

Both SSL and TLS use PKI certificates; with these certificates they provide encryption for data confidentiality and further allow for digital signatures and hashing.

How it works

TLS and SSL require the web server to have a digital certificate, which is generally obtained from a Certificate Authority (a trusted CA).

The web server sends its TLS or SSL certificate to the browser. The browser and web server exchange information cryptographically to prove that the web server is in fact the one named in the SSL certificate, and the browser verifies that the web server's certificate is signed, directly or indirectly, by a CA whose root certificate is trusted by the browser.
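
A minimal sketch using Python's standard ssl and socket modules shows the same flow in code: connect, negotiate TLS, let the library verify the server certificate against trusted CA roots, and then inspect the result. The host name is an arbitrary example.

    import socket
    import ssl

    hostname = "www.example.com"
    context = ssl.create_default_context()            # loads trusted CA roots, enables verification

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("Negotiated protocol:", tls.version())       # e.g. TLSv1.3
            cert = tls.getpeercert()
            print("Certificate subject:", cert.get("subject"))
            print("Issued by:", cert.get("issuer"))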

An SSL Certificate issued by a CA to an organization/firm and its domain/website confirms that a trusted third party has authenticated that organization’s identity.

It is important to use TLS/SSL on financial transaction sites and on sites used to exchange confidential and personal data.

Benefits of using SSL:

  • Encrypt sensitive data
  • Activate HTTPS and the padlock in the browser
  • Comply with PCI standards
  • Prove legitimacy
  • Strengthen brand identity
  • Increase SEO rank

-DR
