virtualization – Network Interview https://networkinterview.com Online Networking Interview Preparations Mon, 30 Jun 2025 13:04:36 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.1 https://networkinterview.com/wp-content/uploads/2019/03/cropped-Picture1-1-32x32.png virtualization – Network Interview https://networkinterview.com 32 32 162715532 What is a Virtual Firewall? 3 Virtual Firewall Use Cases https://networkinterview.com/what-is-a-virtual-firewall-3-use-cases/ https://networkinterview.com/what-is-a-virtual-firewall-3-use-cases/#respond Mon, 30 Jun 2025 07:32:20 +0000 https://networkinterview.com/?p=21145 Firewalls, the gatekeepers of perimeter security, have evolved a great deal since their inception. Early firewalls were simple packet filters that examined the packets passing through them and blocked those that did not meet predetermined criteria. Over time, as cyber attacks became more sophisticated, firewall technology also advanced, from stateful inspection firewalls to next-generation firewalls. 

In this article we will learn about virtual firewalls and look at three of their use cases in detail. 

About Virtual Firewall

A virtual firewall provides network security for virtualized environments such as the cloud. Virtualization allows the creation of multiple virtual instances of a physical device or server, enabling more efficient utilization of the underlying physical resources and more flexibility in network management. However, virtualization technologies also brought a new set of security risks, such as unauthorised access to virtual resources and an increased risk of data breaches.

Virtual firewalls take on the gatekeeper role of perimeter security, much like their physical counterparts. They operate at the virtualization layer and protect virtual machines (VMs) and other virtualized resources in cloud networks. Virtual firewalls also provide additional functions such as VPN connectivity, intrusion detection and prevention, and malware protection.  

 

Because virtual firewalls secure cloud deployments, they are also called cloud firewalls. They can scale with virtual environments, inspect north-south traffic, and enable fine-grained network segmentation within virtual networks. 

Benefits of using a Virtual / Cloud Firewall

  • Cloud-native virtual firewalls centralize security and apply policies consistently to all virtual machines and applications
  • Virtual firewall upgrades are easier than the management and upgrade of physical firewalls
  • Virtual firewalls offer a secure way to quickly roll out cloud applications 
  • More cost-effective than their physical counterparts
  • Provide cloud-native threat detection and prevention capabilities to secure data and applications.

Virtual Firewall Use Cases 

Use Case 1: Securing Public Clouds 

Public clouds such as Google Cloud Platform (GCP), Amazon Web Services (AWS) and Microsoft Azure host virtual machines to support different types of workloads; virtual firewalls secure these workloads. 

Virtual firewalls are deployed to implement advanced security capabilities such as threat detection, and to segment and isolate critical workloads in order to meet regulatory requirements such as GDPR, HIPAA and PCI-DSS.

To secure traffic moving laterally within cloud networks, virtual firewalls implement inline threat prevention mechanisms.
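As a rough sketch of the fine-grained segmentation such a firewall enforces, the snippet below models first-match rule evaluation with an implicit default deny. The rule format, names and subnets are our own illustration, not any vendor's API:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    name: str
    src: str      # source CIDR the rule matches on
    dst: str      # destination CIDR
    port: int
    action: str   # "allow" or "deny"

def evaluate(rules, src_ip, dst_ip, port):
    """Return the action of the first matching rule (first-match wins),
    falling back to an implicit deny, as most firewalls do."""
    for r in rules:
        if (ip_address(src_ip) in ip_network(r.src)
                and ip_address(dst_ip) in ip_network(r.dst)
                and port == r.port):
            return r.action
    return "deny"

# Segment a web tier from a database tier: only the web subnet may
# reach the DB subnet on port 5432; all other lateral traffic is denied.
rules = [Rule("web-to-db", "10.0.1.0/24", "10.0.2.0/24", 5432, "allow")]
```

With this policy, a host in 10.0.1.0/24 can reach the database tier on 5432, while a host in any other subnet is blocked by the implicit deny.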

Use Case 2: Security Extension to branches and SDNs

Virtual firewalls help secure systems at branch offices and in software-defined networks (SDNs). In SDN environments, data routing and networking are controlled through software virtualization. Deploying virtual firewalls in SDN environments allows organizations to secure their perimeter, segment the network and extend protection to remote branches.

Advanced firewalls in SDN environments provide consistent network security: they allow branch network security to be managed from a centralized console, segment networks to support isolation, secure live network flows and set the stage for secure migration of applications to the cloud. 

Use Case 3: Protection of Cloud Assets 

Virtual firewalls enhance the security of private cloud assets. They come with policy-based auto-provisioning of security capabilities, help secure private cloud assets quickly and support isolating workloads from one another. 

]]>
https://networkinterview.com/what-is-a-virtual-firewall-3-use-cases/feed/ 0 21145
Cisco ThousandEyes: A Comprehensive Platform Overview https://networkinterview.com/cisco-thousandeyes/ https://networkinterview.com/cisco-thousandeyes/#respond Thu, 19 Jun 2025 16:18:14 +0000 https://networkinterview.com/?p=22143 Cisco ThousandEyes is a comprehensive platform for measuring, monitoring, and troubleshooting network performance. It is a cloud-hosted platform that helps organizations ensure reliable and secure user experience and network performance across the globe. ThousandEyes provides insights into the performance and health of applications, websites, networks, and cloud services. It also offers visibility into the entire infrastructure, including the public internet and private networks. With ThousandEyes, organizations can visualize the performance of their networks and monitor the user experience and application performance.

What is Cisco ThousandEyes?

Cisco ThousandEyes is a cloud-based platform that allows businesses to measure, monitor, and troubleshoot network performance. It provides a comprehensive view of the entire infrastructure, including public and private networks. ThousandEyes offers visibility into the performance and health of applications, websites, networks, and cloud services. It helps organizations ensure reliable and secure user experience and network performance across the globe.

ThousandEyes provides insights into the performance of applications, websites, and networks, as well as the health of cloud services. It offers network intelligence, visibility, and analytics, allowing organizations to monitor the user experience and application performance. ThousandEyes also provides tools for troubleshooting and diagnosing network performance issues, allowing businesses to quickly identify and solve problems.

Benefits of Cisco ThousandEyes

  • Improved network performance: ThousandEyes provides insights into the performance of applications, websites, and networks, as well as the health of cloud services. This allows organizations to monitor the user experience and application performance, and ensure reliable and secure network performance.
  • Comprehensive visibility: ThousandEyes provides a comprehensive view of the entire infrastructure, including public and private networks. This allows businesses to visualize the performance of their networks and identify potential performance issues.
  • Real-time insights: ThousandEyes provides real-time insights into application and network performance. This allows businesses to quickly identify and troubleshoot performance issues.
  • Easy to use: ThousandEyes is easy to use and provides a user-friendly interface. This makes it easy for businesses to monitor and troubleshoot network performance.

Features

ThousandEyes provides a range of features to help businesses measure, monitor, and troubleshoot network performance. Some of the features include:

  • Network monitoring: It provides network monitoring capabilities, allowing businesses to visualize the performance of their networks and identify potential performance issues.
  • Application monitoring: It provides application monitoring capabilities, allowing businesses to monitor the performance of applications and websites.
  • Cloud monitoring: It provides cloud monitoring capabilities, allowing businesses to monitor the performance of cloud services.
  • Troubleshooting: It provides tools for troubleshooting and diagnosing network performance issues.
  • Analytics: It provides analytics capabilities, allowing businesses to track performance trends and identify potential issues.
  • Visualizations: It provides visualizations of performance data, allowing businesses to quickly identify and troubleshoot performance issues.

ThousandEyes Platform Architecture

ThousandEyes is built on a distributed architecture. It is designed to be highly available and scalable, allowing businesses to monitor and troubleshoot network performance in real time. The platform is composed of several components, including:

  • Agents: installed at customer sites, agents collect performance data.
  • Data collectors: responsible for gathering performance data from the agents and sending it to the ThousandEyes platform.
  • Platform: collects, stores, and analyzes the performance data.
  • Dashboards: provide visualizations of performance data, allowing businesses to quickly identify and troubleshoot performance issues.
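As a toy illustration of what a dashboard layer does with agent measurements, the sketch below classifies latency and loss readings from individual vantage points and rolls them up into a worst-case status. The function names and thresholds are our own assumptions, not ThousandEyes defaults or API calls:

```python
def classify(latency_ms, loss_pct, latency_warn=100, loss_alert=5.0):
    """Map one agent measurement to a simple status.
    Thresholds here are illustrative, not product defaults."""
    if loss_pct >= loss_alert:
        return "ALERT"
    if latency_ms >= latency_warn:
        return "WARN"
    return "OK"

def worst(statuses):
    """Aggregate statuses from several vantage points: worst wins,
    the way a health summary on a dashboard typically behaves."""
    order = {"OK": 0, "WARN": 1, "ALERT": 2}
    return max(statuses, key=order.__getitem__)
```

For example, agents reporting (30 ms, 0% loss), (150 ms, 0% loss) and (30 ms, 10% loss) would roll up to an overall "ALERT".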

ThousandEyes Platform Pricing

ThousandEyes offers a range of pricing plans, depending on the features and services needed. The pricing plans range from basic to enterprise, and the prices vary depending on the number of agents and data collectors needed.

It also offers a free trial for businesses to test the platform. The free trial allows businesses to use the platform for 30 days and access all of the features.

ThousandEyes Platform Integration

ThousandEyes integrates with a range of third-party applications and services, allowing businesses to monitor and troubleshoot network performance in real time. The platform integrates with popular services such as Amazon Web Services (AWS), Microsoft Azure, Rackspace, Google Cloud Platform (GCP), and more.

It also integrates with popular analytics and reporting tools such as Splunk, Grafana, and Kibana. This allows businesses to track performance trends and identify potential issues.

Use Cases

ThousandEyes can be used by businesses of all sizes to measure, monitor, and troubleshoot network performance. The platform can be used to monitor the performance of applications, websites, networks, and cloud services. It can also be used to troubleshoot performance issues and track performance trends.

ThousandEyes can be used by businesses in a variety of industries, including:

  • IT and telecom: It can be used by IT and telecom companies to monitor the performance of their networks and ensure reliable and secure user experience.
  • Retail: It can be used by retail companies to monitor the performance of their websites and applications, and identify potential performance issues.
  • Manufacturing: It can be used by manufacturing companies to monitor the performance of their networks and identify potential performance issues.
  • Healthcare: It can be used by healthcare companies to monitor the performance of their networks and ensure reliable and secure user experience.

Comparisons between ThousandEyes and other similar platforms

ThousandEyes is similar to other performance monitoring and troubleshooting platforms, such as AppDynamics, Dynatrace, and New Relic. However, there are some key differences.

  • AppDynamics focuses on application performance and provides a comprehensive view of application performance. ThousandEyes, on the other hand, provides a comprehensive view of the entire infrastructure, including public and private networks.
  • Dynatrace focuses on cloud performance and provides insights into the performance of cloud services. ThousandEyes, on the other hand, provides insights into the performance of applications, websites, networks, and cloud services.
  • New Relic focuses on application performance and provides analytics capabilities. ThousandEyes, on the other hand, provides analytics capabilities, as well as tools for troubleshooting and diagnosing network performance issues.

Services and Support

ThousandEyes provides a range of services and support to help businesses get the most out of the platform. The services and support include:

  • Professional services: It provides professional services to help businesses set up and configure the platform.
  • Training: It provides training to help businesses learn how to use the platform.
  • Support: It provides 24/7 support to help businesses troubleshoot and diagnose network performance issues.
  • Documentation: It provides comprehensive documentation and tutorials to help businesses get the most out of the platform.

Conclusion

Cisco ThousandEyes is a comprehensive platform for measuring, monitoring, and troubleshooting network performance. It provides a comprehensive view of the entire infrastructure, including public and private networks. ThousandEyes offers network intelligence, visibility, and analytics, allowing businesses to monitor the user experience and application performance.

It also provides tools for troubleshooting and diagnosing network performance issues, allowing businesses to quickly identify and solve problems. ThousandEyes is easy to use and integrates with a range of third-party applications and services, making it an ideal choice for businesses of all sizes.

]]>
https://networkinterview.com/cisco-thousandeyes/feed/ 0 22143
Devops vs Sysops: Understand the difference https://networkinterview.com/devops-vs-sysops-understand-the-difference/ https://networkinterview.com/devops-vs-sysops-understand-the-difference/#respond Tue, 26 Nov 2024 12:13:01 +0000 https://networkinterview.com/?p=16531 Introduction to DevOps & SysOps

Technology advancements drive today's dynamic IT landscape, and cloud computing in particular presents excellent opportunities for businesses going forward. SysOps and DevOps are commonly used terminologies in cloud computing.

In the past, organizations hired multiple personnel to perform different sets of activities. As cloud computing emerged, job roles became simpler: administrators gained the flexibility to support developers in building applications with fewer defects, defects which might otherwise have been missed or ignored because of their lower weight in terms of application functionality. Similarly, SysOps found recognition as a way for businesses to align with certain standards and frameworks.

In this article we look in depth at the DevOps and SysOps terminologies and examine how they can help businesses bring agility to delivery and time to market.

About DevOps

DevOps is a commonly used term in the cloud computing world. Its focus spans tasks such as development, testing, integration, and monitoring. DevOps uses open-source, cross-platform tools like Chef and Puppet to deliver system configuration and automation. In DevOps, administrators handle infrastructure-building tasks, while developers take on continuous-deployment concerns through automated build tools.

Features of DevOps

  • Reduction in implementation time of new services
  • Productivity increase for enterprise and IT teams
  • Saves costs on maintenance and upgrades
  • Standardization of strategies for easy replication and quick deliveries
  • Improves quality, reliability, and reusability of system components
  • Increases the success rate of digitization and transformation projects
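The Chef/Puppet style of configuration delivery mentioned above rests on idempotent resources: applying the same declaration twice changes nothing the second time. The Python sketch below shows that converge-only-if-different idea in miniature; `ensure_file` is our own illustrative name, not a Chef or Puppet API:

```python
def ensure_file(path, content):
    """Converge a file resource: rewrite it only when its current state
    differs from the declared content. Running it a second time is a
    no-op, which is the idempotency property Chef and Puppet
    resources share."""
    try:
        with open(path) as f:
            if f.read() == content:
                return "unchanged"     # state already matches: do nothing
    except FileNotFoundError:
        pass                           # file absent: fall through and create it
    with open(path, "w") as f:
        f.write(content)
    return "changed"
```

A configuration run that reports "unchanged" for every resource is the steady state such tools aim for; only drift from the declared state triggers a "changed" action.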

 

About SysOps

SysOps generally deals with monitoring and selecting the right cloud services according to best practices. It is a modern approach that supports the monitoring, management and operation of infrastructure systems, and it is useful for troubleshooting issues that emerge during operations. SysOps is based on the IT service management (ITIL) framework, which concentrates on aligning IT services with business goals. ITIL enables organizations to form a baseline from which they can design, execute and measure effectiveness, demonstrating compliance and estimated improvement.

Comparison Table: DevOps vs SysOps

The table below summarizes the differences between DevOps and SysOps:

| FUNCTION | DevOps | SysOps |
|---|---|---|
| Definition | DevOps is a collaboration between software development and IT teams | SysOps is the administration of cloud services, handling some or most of the tasks related to the software development environment |
| Approach | Adaptive approach, breaking down complex problems into small iterative steps | Consistent approach to identifying and implementing changes to systems |
| Aim | Accelerate the software development process by bringing development and IT teams together | Manage all key responsibilities of IT operations in a multi-user environment |
| Delivery methodology | Compliance with principles for seamless, stable collaboration and coordination between development and operations teams | Compliance with ITIL for service delivery, focusing on aligning business objectives with IT services |
| Code development approach | Unpredictable rate of change in code deployment | Predictable changes to code, deployed at specified intervals with the support of SysOps professionals |
| Responsiveness to change | Adaptive approach to code change | Consistent approach, with de-risking when new changes are introduced |
| Implementation of changes | Changes are applied to code | Changes are applied to servers |
| Value for business | Value improvement for customers, and hence improvement for the business | Smooth functioning of system processes ensures improved value for the organization |
| Infrastructure management approach | Depends on use of the best automation tools | Driven by focused attention on each server |

Download the comparison table: DevOps vs SysOps

Conclusion

Every organization faces a tough decision when choosing between DevOps and SysOps, so a clear understanding is required of the business's need for speed of execution, the significance of predictions, and an application's traffic patterns (highs and lows). Businesses also need to know how quickly they must scale in response to traffic changes and how frequently they release changes to their applications.

DevOps and SysOps are two major areas of cloud computing, and both are used to manage infrastructure. If a choice is to be made between the two, we need to look deeper into the requirements for the application to be built:

  • Load predictability estimation
  • Traffic trends (Highs and lows)
  • Clear idea of execution speed requirements
  • Rapid application change requirements
  • Rapid scaling requirements of applications
  • Business nature global or local
  • Frequency of application releases

Continue Reading:

DevOps vs NetOps

DevOps vs NetDevOps

]]>
https://networkinterview.com/devops-vs-sysops-understand-the-difference/feed/ 0 16531
Key Factors to Consider When Choosing Hybrid Cloud Providers https://networkinterview.com/choosing-hybrid-cloud-providers/ https://networkinterview.com/choosing-hybrid-cloud-providers/#respond Wed, 24 Jul 2024 18:14:49 +0000 https://networkinterview.com/?p=21189 Are you overwhelmed by the myriad of hybrid cloud computing providers available? Choosing the right one can significantly affect your business’s scalability and efficiency.

This blog will guide you through key factors to consider when selecting a hybrid cloud provider. We’ll cover aspects like reliability, security, and cost-efficiency. Understanding these elements can help you make an informed decision.

Stay tuned as we unravel what to look for in hybrid cloud providers. Get ready to boost your enterprise’s cloud strategy!

Factors to be Considered While Choosing a Hybrid Cloud Provider

Understanding Your Business Needs

It is very important to know exactly what your business needs before choosing a hybrid cloud provider. Looking at what your business needs now and in the future can help you figure out what features and services are necessary.

You can be sure that the provider you choose will help your business reach its goals and grow if you make these needs clear. For instance, think about whether your operations need high availability or if you need advanced data analytics tools.

Evaluating Security Measures

When picking a hybrid cloud provider, security should be the most important thing. Strong security measures keep private information safe and stop people from getting in without permission.

Check to see if the service provider offers things like encryption, managing your identity, and regular security updates. Also, check to see if they follow the rules and standards of your industry to make sure your data is handled correctly.

Integration Capabilities

Another important thing to think about is how to integrate with existing systems. A good hybrid cloud provider should be able to work with your current data, applications, and IT infrastructure without any problems.

This includes being able to work with different platforms and operating systems. Cloud solution integration tools can have a big effect on how quickly and easily you can switch to a hybrid cloud environment.

Related: Public vs Private vs Hybrid vs Community Clouds

Performance and Reliability

How well and how reliably a hybrid cloud provider works can have a direct effect on how your business runs. Check their service level agreements (SLAs) and uptime statistics to make sure they can meet your performance needs.

Reliable services lower the chance of downtime, which is important for keeping the business going and making sure customers are happy. Also, find out what their backup and disaster recovery options are.

Cost-Effectiveness

When choosing a hybrid cloud provider, cost is always an important thing to think about. It's important to look at the pricing models and know how costs are calculated.

Find a service provider that lets you choose your pricing plans and only charges you for the resources you use. Also, think about any extra costs that might come up, like support or data transfer fees.

Scalability Options

One great thing about a hybrid cloud environment is that it can be scaled up or down as needed. Make sure the provider you pick can change the amount of resources they offer based on your needs.

This skill is very important for adapting to changes in demand without affecting performance. Providers should offer automated scaling options to help manage resources in a way that saves time and money.
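The automated scaling described above is often target-tracking: pick a replica count that pushes average utilization back toward a target, clamped to a floor and ceiling. The sketch below captures that spirit in Python; the formula, parameter names and defaults are illustrative, not any provider's API:

```python
import math

def desired_replicas(current, cpu_pct, target_pct=60, min_r=2, max_r=20):
    """Target-tracking scaling rule: size the replica count so average
    CPU heads back toward target_pct. All names and defaults here are
    assumptions for illustration, not a real autoscaler's interface."""
    want = math.ceil(current * cpu_pct / target_pct)
    # Clamp to the configured window so a demand spike cannot scale
    # without bound and a lull cannot scale below the availability floor.
    return max(min_r, min(max_r, want))
```

Running hot (4 replicas at 90% CPU against a 60% target) grows the fleet to 6; running cold shrinks it, but never below the minimum.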

Support and Customer Service

It’s very important to know how much support and customer service a hybrid cloud provider offers. Having good customer service can help solve problems quickly and keep things running smoothly.

Check to see if the provider offers support 24 hours a day, seven days a week. Also, think about the different ways you can get help, like chat, email, or the phone. How quickly and well the support team can help is very important.

Compliance and Governance

Check to see if the hybrid cloud provider follows the governance and compliance standards that are important to your business. It is important to follow rules like GDPR and HIPAA for legal and business reasons.

The policies and procedures that providers offer should be clear and in line with your company’s compliance needs. Good governance makes sure that your data is managed and kept safe in a way that meets government requirements.

Data Management and Storage

A hybrid cloud solution needs to have good ways to manage and store data. Check the provider’s data storage options, such as their backup and redundancy plans.

Having good data management makes sure that your data is safe, easy to find, and handled quickly and correctly. You should also think about the storage options in terms of how scalable they are and how much they cost.

Network Infrastructure

The network infrastructure of a hybrid cloud provider is very important to how well and reliably services work. Look into the provider’s data centers, where they are located, and the ways you can connect to them.

Higher data transfer speeds and less latency can be achieved with a strong network infrastructure. In a distributed cloud environment, it is necessary to keep performance standards high.

Customization and Flexibility

With customization, you can make the hybrid cloud solution fit the needs of your business. A flexible provider should offer different configuration options so that the cloud environment can be changed to fit your needs.

This can include changing how resources are used, how security is set up, and how applications are deployed. Customization makes sure that the hybrid cloud solution works well with the goals of your business.

Innovation and Future-Proofing

Since technology changes quickly, it’s important to pick a hybrid cloud provider that is committed to new ideas. Find a service provider that is always putting money into new technologies and making things better.

This promise makes sure that your cloud resources are always up-to-date and ready to use. Focusing on innovation can also lead to new features that make your business run better and give it more chances to grow.

In selecting a hybrid cloud provider, it is imperative to consider these aspects carefully. Enterprise hybrid cloud solutions offer various advantages, but without properly assessing your hybrid computing options, you may face challenges down the road.

Maximizing Business Potential through Hybrid Cloud Computing Providers

Choosing the right hybrid cloud computing providers is important for the growth and efficiency of your business in the future. These companies offer solutions that can be scaled up or down to fit different needs.

They make sure that security and compliance are strong, which keeps your data safe. With support that you can count on, they improve operational performance.

Plans that save money are also valuable. To make the most of your business’s potential and success, you should carefully consider hybrid cloud computing providers.

Did you like this guide? Great! Please browse our website for more!

]]>
https://networkinterview.com/choosing-hybrid-cloud-providers/feed/ 0 21189
9 Benefits of Cloud Infrastructure Management Services https://networkinterview.com/9-benefits-of-cloud-infrastructure-management/ https://networkinterview.com/9-benefits-of-cloud-infrastructure-management/#respond Sun, 18 Feb 2024 06:26:17 +0000 https://networkinterview.com/?p=20602 Cloud infrastructure has become an omnipresent force, exerting a profound impact on individuals worldwide, both directly and indirectly. Directly, individuals interact with cloud services in their daily lives, from storing personal files on platforms like Google Drive to streaming entertainment on platforms like Netflix. The convenience of accessing data, applications, and services from any device with an internet connection is a testament to the pervasive influence of cloud technology.

Indirectly, the far-reaching implications of cloud infrastructure extend to various sectors, influencing everything from healthcare to education. For instance, telemedicine relies on secure and scalable cloud solutions to provide remote healthcare services. Similarly, educational institutions leverage cloud platforms for seamless online learning experiences.

Moreover, businesses of all sizes benefit from the scalability and cost-effectiveness of cloud computing, enabling innovation and growth. As society continues to embrace the digital era, the cloud’s expansive footprint underscores its pivotal role in shaping the way individuals live, work, and connect with the world.

Effectively managing cloud infrastructure is crucial for unlocking the full potential of cloud computing. When handled adeptly, the cloud provides businesses with unparalleled flexibility and scalability for their applications and infrastructure, all while maintaining cost efficiency. This is achieved by allowing organizations and users to access virtual resources on a pay-as-you-go basis, minimizing the need to invest in and maintain physical infrastructure.

However, the promise of cost savings can be undermined without proper visibility, monitoring, and governance. An illustrative scenario involves an engineer inadvertently leaving a cloud development environment running continuously, even when it’s only required for a few hours of work. In a pay-as-you-go model, such oversights can result in escalating cloud costs, turning into runaway bills.
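A simple governance check for exactly this scenario might scan the instance inventory for development-tagged machines that have been running past a policy window and flag them for shutdown. The sketch below is illustrative only: the `env` tag and the 8-hour window are assumptions, and a real version would pull its inventory from the cloud provider's API rather than an in-memory list:

```python
from datetime import datetime, timedelta

def flag_runaway(instances, now, max_dev_hours=8):
    """Return the ids of dev-tagged instances running longer than the
    policy window. Tag name and window are assumptions for the sketch;
    production code would read live inventory from the provider's API."""
    limit = timedelta(hours=max_dev_hours)
    return [i["id"] for i in instances
            if i.get("env") == "dev" and now - i["started"] > limit]

# Example inventory: one forgotten dev box, one fresh dev box, one
# long-lived production server that the policy deliberately ignores.
now = datetime(2024, 1, 2, 18, 0)
inventory = [
    {"id": "i-1", "env": "dev",  "started": now - timedelta(hours=30)},
    {"id": "i-2", "env": "dev",  "started": now - timedelta(hours=2)},
    {"id": "i-3", "env": "prod", "started": now - timedelta(hours=300)},
]
```

Feeding this check into a scheduled job that notifies owners (or stops the flagged instances outright) is one way to keep pay-as-you-go bills from running away.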

To avoid these pitfalls, businesses need to implement robust strategies for cloud infrastructure management. This entails establishing clear visibility into resource usage, implementing effective monitoring tools, and enforcing governance policies. By doing so, organizations can identify underutilized resources, optimize usage patterns, and prevent unnecessary expenses.

In essence, the key lies not only in leveraging the cloud’s inherent flexibility but also in maintaining vigilant oversight to ensure that cost-effective practices align with the dynamic nature of cloud computing. Through strategic management, businesses can harness the transformative potential of the cloud while keeping their financial commitments in check.

Explore The Benefits of Cloud Infrastructure

Cloud infrastructure management services offer a multitude of benefits that extend far beyond the primary advantages, unveiling unexplored and collateral benefits. These services, which involve overseeing the computing resources and services within a cloud environment, contribute to organizational efficiency, cost-effectiveness, and innovation.

  • Cost Optimization

The primary benefit of cloud infrastructure management services is cost optimization. By dynamically adjusting resources based on demand, organizations can avoid over-provisioning and only pay for the resources they consume. However, an often overlooked collateral benefit is the reduction in capital expenditure. With cloud services, businesses can shift from a capital-intensive model to an operational expenditure model, eliminating the need for substantial upfront investments in physical infrastructure.

  • Enhanced Security

Effective cloud management services bolster security by implementing robust access controls, encryption, and compliance measures. Beyond the obvious security boost, organizations can gain collateral benefits such as improved reputation and customer trust. Demonstrating a commitment to data security can enhance the brand image, attracting customers who prioritize the protection of their sensitive information.

  • Business Continuity and Disaster Recovery

Cloud infrastructure management facilitates efficient backup and disaster recovery solutions. Beyond the immediate benefits of minimizing downtime, the collateral advantage lies in increased resilience. Organizations gain the ability to adapt swiftly to unforeseen disruptions, ensuring continuous operations. This resilience can have a positive ripple effect on overall business stability and customer satisfaction.

  • Scalability and Flexibility

Cloud services allow businesses to scale up or down based on demand. The collateral benefit here is the ability to experiment and innovate without significant upfront investments. Start-ups and smaller enterprises, in particular, can leverage the scalability to test new ideas, fostering a culture of innovation that might not have been feasible with traditional infrastructure.

  • Improved Collaboration

Cloud infrastructure facilitates collaboration by providing a centralized platform for data storage and sharing. The unexplored benefit lies in the potential for enhanced employee productivity. Collaborative tools and real-time access to shared resources enable teams to work seamlessly, transcending geographical boundaries. This can lead to improved teamwork, creativity, and ultimately, business outcomes.

  • Agility and Time-to-Market

Efficient cloud management accelerates time-to-market for products and services. Collaterally, this agility enables organizations to respond swiftly to market changes and customer demands. The ability to deploy new features or applications rapidly can be a competitive advantage, allowing businesses to stay ahead in dynamic industries.

  • Environmental Sustainability

Cloud services contribute to environmental sustainability by optimizing resource utilization and energy efficiency. While the primary benefit is a reduced carbon footprint, the collateral advantage is positive publicity and stakeholder goodwill. Organizations demonstrating a commitment to eco-friendly practices can attract environmentally conscious customers and investors.

  • Global Reach

Cloud infrastructure allows organizations to expand their reach globally without the need for physical presence in every location. The collateral benefit is increased market access and the potential for international business growth. Companies can tap into new customer bases and diverse markets without the logistical challenges associated with traditional expansion.

  • Automation and Efficiency

Cloud management services often involve automation of routine tasks. The direct benefit is increased operational efficiency, but the collateral advantage is a boost in employee morale. Automation reduces mundane, repetitive tasks, allowing employees to focus on more strategic and fulfilling aspects of their roles, fostering job satisfaction and retention.

Conclusion

In a world where the cloud is the future, falling behind means risking relevance. Embracing cloud infrastructure management services isn’t just about staying current—it’s about seizing opportunities for growth, efficiency, and innovation. From cost savings to enhanced security and global reach, the benefits are too significant to ignore.

By harnessing the power of the cloud, businesses can position themselves at the forefront of digital transformation, ensuring they remain competitive and adaptable in an ever-changing landscape. So, let’s not settle for being left behind. Instead, let’s embrace the cloud and pave the way for a brighter, more connected future.

Continue Reading:

Career in Cyber Security or Cloud Computing: Which is better?

Top 10 Cloud Computing Trends

]]>
https://networkinterview.com/9-benefits-of-cloud-infrastructure-management/feed/ 0 20602
What Is Security Service Edge (SSE)? How is it different from SASE? https://networkinterview.com/security-service-edge/ https://networkinterview.com/security-service-edge/#respond Thu, 14 Dec 2023 07:50:15 +0000 https://networkinterview.com/?p=18676 Introduction to SSE & SASE

Security and network architecture have taken a front seat since cloud adoption is at an all-time high and constantly growing. The demand for a remote workforce is increasing; per Gartner research, demand for remote working is set to increase 30% by 2030. This trend gained further momentum with the coronavirus pandemic, which forced organizations worldwide to adopt a hybrid working model. 

The need for distributed working is, however, much older and did not just evolve during the pandemic's last 24 months. In the 1990s and 2000s there was a simple centralized architecture: data resided in data centers, connectivity extended to branch offices, and simple security measures were in place. The majority of staff worked from the office, so it was easy to provide secure access to resources and services. 

Today we look in more detail at the two most popular terminologies that emerged in the cloud era – SSE (Security Service Edge) and SASE (Secure Access Service Edge). Let's understand how they are interlinked yet different, their advantages and limitations, and of course the use cases. 

What is Security Service Edge or SSE?

The SSE term was introduced by Gartner in 2021. It emerged as a single-vendor, cloud-centric converged solution to accelerate digital transformation with enterprise-level security for access to the web, cloud services, software as a service, and private applications, with the capability to accommodate performance demands and growth. 

It may include a hybrid of on-premises and agent-based components, but primarily it is a cloud-based service. It offers capabilities such as access control, threat protection, data security, security monitoring, and acceptable-use controls, enforced via network-based and API-based integrations. 

SSE security services include Cloud Access Security Broker (CASB), Secure Web Gateway (SWG), Zero Trust Network Access (ZTNA), Data Loss Prevention (DLP), Remote Browser Isolation (RBI) and Firewall as a Service (FWaaS). 

What is SASE?

The term Secure Access Service Edge (SASE) was coined by Gartner in 2019 to describe a converged offering of security and networking products. It is a complex product with five elements, and the inclusion of SD-WAN means on-premises equipment, which makes setup more complicated; the pricing model also needs to cover the cost of hardware. 

Some of the major vendors in the SASE space are Cato Networks, Fortinet, Palo Alto Networks, Versa, VMware and others. SASE brought two previously separate vendor approaches together: a highly converged wide area network (WAN) and edge infrastructure platform, combined with a highly converged security platform – the Security Service Edge (SSE). 

SSE is the security component of SASE; it unifies all security services – including Secure Web Gateway (SWG), Cloud Access Security Broker (CASB) and Zero Trust Network Access (ZTNA) – to provide secure access to the web, cloud services, and applications. 

Comparison: SSE vs SASE

The key points of differences between the two are:

Term Coined

Gartner coined the SSE term in 2021 to define a more limited scope of network security convergence – including SWG, CASB, DLP, FWaaS and ZTNA – into a single cloud-native service. On the other hand, Gartner coined the SASE term in 2019 to define the convergence of both networking and security capabilities into a single cloud-native service.

Concept

SSE is a component of SASE (its security pillar). SASE has a broader scope and takes a holistic approach towards secure and optimized access. Its focus is on both user experience and security.

Requirements

SSE capabilities are delivered primarily from the cloud, so no dedicated physical hardware is needed at each location. SASE = Security Service Edge (SSE) + access: it is an architecture that organizations aspire to, delivering networking and security via the cloud directly to the end user instead of through a conventional physical data center.

Vendors

Some examples of important SSE vendors are Zscaler, Cisco, Palo Alto Networks, Netskope, and Cato Networks, whereas Zscaler, Palo Alto Networks, McAfee, Cisco, Nokia, Fortinet, Versa Networks and VMware are important vendors that provide SASE.

Below table summarizes the differential points between the two:

SSE VS SASE

Download the comparison table: SSE vs SASE

Continue Reading:

CASB vs SASE: Which One Is Better?

CSPM vs CASB: Detailed Comparison

]]>
https://networkinterview.com/security-service-edge/feed/ 0 18676
What is VPS (Virtual Private Server)? https://networkinterview.com/what-is-vps-virtual-private-server/ https://networkinterview.com/what-is-vps-virtual-private-server/#respond Fri, 13 Oct 2023 15:05:04 +0000 https://networkinterview.com/?p=15825 Introduction to VPS (Virtual Private Server)

In the IT community, the term VPS (Virtual Private Server) refers to any virtual machine sold as a service by an internet hosting provider. A virtual private server usually runs its own copy of an operating system (OS), and customers have full administrative access to that operating system instance, meaning they can install almost any kind of software.

For many purposes, a VPS is functionally equivalent to a dedicated physical server, and because it is software-defined, it can be easily created and configured. A virtual server also costs much less than an equivalent physical server. However, because virtual servers share the underlying physical hardware with other VPSs, performance is usually slower and depends on the workload of the other virtual machines running on the same host.

There are a number of VPS Hosting service providers available. For example: AccuWeb Hosting provides one of the best affordable VPS Hosting services.

Virtual Private Server Advantages & Features

Nowadays, many enterprises in the IT industry use Virtual Private Servers for a variety of reasons. The most notable features and advantages of VPS technology are addressed below:

Virtualization:

The core of the VPS offering is server virtualization. In most virtualization techniques resources are shared under a time-sharing model, yet virtualization provides a higher level of security, depending on the type of virtualization used. Individual virtual servers are mostly isolated from each other, may run their own operating systems, and can be independently rebooted as virtual instances.

The technique of partitioning a single server so that it appears as multiple servers has been common on microcomputers since the launch of VMware ESX Server in 2001. Typically, the physical server runs a hypervisor, which allocates and manages the resources of the "guest" operating systems, or virtual machines.

These "guest" operating systems are each allocated a share of the physical server's resources. As the VPS runs its own copy of its operating system, users have administrator-level access to that OS instance and can install any software that runs on the OS.

Motivation:

In addition, a VPS is used to decrease hardware costs by condensing a failover cluster onto a single machine, decreasing costs dramatically while providing the same services. As a general rule, common server roles and features are designed to operate in isolation in most system architectures; for example, Windows Server 2019 requires a certificate authority and a domain controller to exist on independent servers.

This is because additional roles and features increase the areas of potential failure as well as adding visible security risks. This directly motivates the need for virtual private servers to keep conflicting server roles and features on a single hosting machine. Also, the advent of encrypted virtual machine networks reduces many of the pass-through risks that might otherwise have discouraged the use of a VPS as a legitimate hosting server.

Hosting:

Finally, a VPS is used by many companies to provide virtual private server hosting or virtual dedicated server hosting as an advanced alternative to shared web hosting services. These services involve several challenges, such as licensing proprietary software in a multi-tenant virtual environment.

They are categorized as "Unmanaged" or "Self-Managed" hosting, where the user administers his own server instance, or "Unmetered" hosting, which is generally provided with no limit on the amount of data transferred over a fixed-bandwidth line.

In general, in a virtual private server, bandwidth will be shared and a fair usage policy should be involved.
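A fair-usage policy over a shared line, as described above, is often implemented with a per-tenant token-bucket rate limiter. The sketch below is a minimal, self-contained illustration; the class name, rates, and byte counts are all hypothetical and not tied to any particular hosting provider:

```python
class TokenBucket:
    """Simple token-bucket rate limiter: each tenant on the shared
    line gets `rate` bytes/sec with bursts up to `capacity` bytes."""

    def __init__(self, rate, capacity):
        self.rate = rate          # refill rate in bytes per second
        self.capacity = capacity  # maximum burst size in bytes
        self.tokens = capacity    # bucket starts full
        self.last = 0.0           # timestamp of the last refill

    def allow(self, nbytes, now):
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True   # transfer fits within the fair-usage budget
        return False      # over budget: traffic is queued or shaped


bucket = TokenBucket(rate=1_000_000, capacity=2_000_000)  # ~1 MB/s, 2 MB burst
print(bucket.allow(1_500_000, now=0.0))  # True: within the initial burst
print(bucket.allow(1_500_000, now=0.5))  # False: only ~1 MB of tokens left
```

In practice the host would keep one bucket per VPS, so a noisy neighbour exhausts only its own tokens rather than the whole shared line.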

Conclusion 

We explained in this article that a VPS is one of the most effective ways to maintain a website's security and integrity. It also provides scalability for enterprises and large organizations. With a VPS, the user not only enjoys a large amount of storage and bandwidth, but also gets a cost-effective solution to meet the demands of a busy site. Hopefully, new technologies will be invented in the future to manage hardware resources even more efficiently.

Continue Reading:

Public vs Private vs Hybrid vs Community Clouds

What is Multi Cloud Network Architecture?

]]>
https://networkinterview.com/what-is-vps-virtual-private-server/feed/ 0 15825
How to fix VMWare ESXi Virtual Machine ‘Invalid Status’ https://networkinterview.com/vmware-esxi-virtual-machine-invalid-status/ https://networkinterview.com/vmware-esxi-virtual-machine-invalid-status/#respond Sat, 25 Feb 2023 18:08:46 +0000 https://networkinterview.com/?p=19223 Troubleshooting VMWare ESXi Virtual Machine ‘Invalid Status’

Let’s troubleshoot VM Invalid status 

You can see multiple “invalid” VM machines in the image below. Here status is showing invalid.

The reason for the invalid VM status is usually that the underlying storage of the machine has been moved, changed, corrupted, or deleted, or has been migrated to another storage device. As a result, the VMware ESXi host no longer knows what the VM is and considers it invalid.

You need to delete the invalid VMs and re-add them manually if the machines still exist.

Please consider the points below before deleting any VMware machine:

  • Check the .vmx file for the host's configuration. It should be accessible so that the VM can be recreated after the invalid one is deleted.
  • Check that the .vmx file is in an unlocked state
  • Check that access tools such as SSH/PuTTY are available

Then go to Navigator -> Virtual Machines -> Select VM

Click on Actions -> right-click the Actions tab -> this presents several options for the VM, such as Delete and Unregister.

You can select Unregister to remove the VM from here. However, if the options are greyed out, you need to unregister the VM via SSH access.

First enable SSH on the VMware ESXi host and then connect to it using a PuTTY session.

Go to Manage -> Services -> TSM-SSH -> Action -> Select Start

The service should now show as Running, enabling SSH access to the host.

Login to Putty session from Windows Machine. 

Make sure you can login as a root user.

Once you log in to the PuTTY session, type the command below to get an overview of what is running on the ESXi host:

# vim-cmd /vmsvc/getallvms

The output of this command lists the VMs with their VM IDs. From this list, pick the VM IDs you want to remove from the host.

You can then identify the VM IDs with invalid status along with their ID numbers.

Case 1: Reload VM to recover from invalid state

First, we will try to recover the VM by reloading its configuration. Reloading may rectify the issue, but if it fails we have to unregister the VM (Case 2).

# vim-cmd vmsvc/reload <VM id>

 

Case 2: Unregister VM Host

Now we need to unregister the invalid VM IDs from the CLI by running the command below, followed by the VM ID:

# vim-cmd vmsvc/unregister <VM id>

Further, you can cross-verify the removal of the VM IDs from the host's web GUI as well.

You can reconfigure the VM hosts once removing the VM IDs.
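The two cases above can also be scripted. The sketch below is a hypothetical helper that parses `vim-cmd vmsvc/getallvms` output to collect the invalid VM IDs and print the recovery commands from the article; the sample output text is illustrative only, since the exact format varies between ESXi builds:

```python
import re

# Illustrative output of `vim-cmd vmsvc/getallvms`; invalid VMs are
# typically listed as "Unknown <id> (invalid VM)". Treat this sample
# format as an assumption, not a guarantee for every ESXi version.
sample = """\
Vmid   Name            File                          Guest OS      Version
1      web01           [datastore1] web01/web01.vmx  centos7_64    vmx-14
7      Unknown 7 (invalid VM)
12     Unknown 12 (invalid VM)
"""

def invalid_vm_ids(getallvms_output):
    """Return the VM IDs that the host reports as invalid."""
    return [m.group(1)
            for m in re.finditer(r"^(\d+)\s+Unknown.*invalid VM",
                                 getallvms_output, re.M)]

ids = invalid_vm_ids(sample)
print(ids)  # ['7', '12']

# Emit the recovery commands: try a reload first (Case 1),
# then unregister if the VM is still invalid (Case 2).
for vm_id in ids:
    print(f"vim-cmd vmsvc/reload {vm_id}")
    print(f"vim-cmd vmsvc/unregister {vm_id}")
```

On a real host you would run the printed commands (or call them via SSH) one at a time, re-checking the VM status after each reload before resorting to unregister.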

Thanks for reading!!!

Continue Reading:

Hyper V vs VMware : Detailed Comparison

What is VMware Horizon?

]]>
https://networkinterview.com/vmware-esxi-virtual-machine-invalid-status/feed/ 0 19223
Top 10 Software as a Service (SaaS) Companies https://networkinterview.com/software-as-a-service-saas-companies/ https://networkinterview.com/software-as-a-service-saas-companies/#respond Mon, 07 Nov 2022 15:53:37 +0000 https://networkinterview.com/?p=18641 Software as a Service (SaaS) is an umbrella term for cloud-based services delivered to customers over the internet. The software operates on a subscription model, helping businesses reduce capital expenditure and operational expenses by shifting from upfront payments for software licenses to regular subscriptions.

SaaS providers host their applications online so that customers can access them from anywhere at any time. These providers charge users regularly instead of charging for each software license upfront. We have compiled this list of the top SaaS companies to help you know them better and expand your knowledge about the industry.

Many people consider these companies leaders in this segment because of their large market share, high number of users, and many partnerships with other players in the field.

Before going to the list of SaaS providers, let's first enlist and understand the benefits of SaaS.

What are the benefits of SaaS?

SaaS applications are hosted remotely, which allows businesses to use a single application for multiple purposes.

This can reduce the number of applications you have to manage, and it can be easier to access compared to installing software on your servers. There are also several other benefits to choosing SaaS, including:

  • Business continuity – Businesses will never lose access to data if one location experiences a power outage or natural disaster.
  • Security – Hosted applications are better protected from security breaches because data is stored on remote servers.
  • Scalability – You can add more employees or off-hours workers as needed, and you can also remove access as needed.
  • Lower risk – SaaS providers typically offer service-level agreements in case something goes wrong.
  • Ease of transition – If you transition to a new employer or if your company acquires another company, you can continue to use the same applications.
  • Support – SaaS providers offer customer support over the phone, email, or online chat.
  • Subscription cost – SaaS providers typically charge a monthly fee rather than requiring a large upfront investment.

List of Top 10 Software as a Service (SaaS) Companies 

Salesforce

With revenue of $11 billion, Salesforce is the world’s largest SaaS company. It is an enterprise cloud computing company that sells a suite of software products with the most prominent being Sales Cloud, Service Cloud, Marketing Cloud, and SalesforceIQ. These services are aimed at improving the way businesses and organizations manage their customers, sales, and business processes.

Salesforce’s customer relationship management (CRM) and business process management (BPM) software provide organizational insight that helps companies increase productivity through streamlined tasks and real-time analytics. Salesforce was founded in 1999 by Marc Benioff, a former Oracle executive, and Parker Harris.

Microsoft

Microsoft is one of the best SaaS companies and is a pioneer in the application-as-a-service model. It is a multinational computer software corporation that is headquartered in Redmond, Washington. Microsoft’s fiscal year is from July 1 to June 30 and the company has a market cap of $820.8 billion and is run by Satya Nadella. Microsoft offers a wide range of products and services for individuals and businesses. It operates in three segments:

  • Productivity and Business Processes,
  • Intelligent Cloud, and
  • More Personal Computing.

The company’s portfolio of products and services includes Operating Systems, Security, Developer Tools, Business Process Tools, Office products, Gaming, Consumer and Office services, Productivity services, Microsoft Azure services, and Other products and services.

HUBSPOT

HubSpot is a marketing and sales software SaaS provider with a focus on inbound marketing. It offers a marketing automation tool, a CRM, an analytics dashboard, and a sales automation tool. The company also provides training programs and certifications to help organizations adopt its tools.

HubSpot was founded in 2006 by Brian Halligan and Dharmesh Shah. It is headquartered in Cambridge, Massachusetts. HubSpot has raised $664 million in funding and has more than 17,000 customers.

Adobe

Adobe is a multinational software company that provides services in the areas of digital marketing and creative software. Its services include web and mobile applications, advertising and marketing services, video and audio content, and software and security solutions. Whether you want to learn how to make a profile picture to build your brand, create stunning visual content, or manage digital marketing campaigns, Adobe provides a suite of tools and resources to support your creative and professional goals. The company was founded in December 1982 and is headquartered in San Jose, California, with facilities around the world.

Adobe has more than 50,000 customers in over 90 countries, including 85 of the Fortune 100 companies. Some of Adobe’s best-known products and services include Adobe Digital Marketing, Creative Cloud, Adobe Experience Manager, Adobe Analytics, Adobe Sign, and more.

Google

Google is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, search, cloud computing, artificial intelligence, and machine learning. Google was founded in 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University. Together, they own 16.3% of their shares and control 56.7% of the stockholder voting power through special voting shares.

Google is the world’s largest Internet corporation, as well as the world’s largest Internet search engine. Google is the parent company of several Internet-based services and products, including

  • the search engine,
  • the advertising service AdWords,
  • the cloud service Google Cloud,
  • the online knowledge market Google Search,
  • the online email service Gmail,
  • the online video sharing service YouTube,
  • the online translation service Google Translate,
  • the online map service Google Maps,
  • the online shopping service Google Shopping,
  • the online office suite Google Docs, and
  • the social network Google+

Slack

Slack is an online workspace where teams can communicate, collaborate, and complete their work. It offers messaging, file hosting, and video conferencing tools. Slack was founded in 2013 and has a current valuation of $15 billion. It is headquartered in San Francisco, California. Slack has over 10 million daily users and has raised over $800 million from investors, including Sequoia Capital, GGV Capital, Kleiner Perkins, and Thrive Capital.

Slack grew out of an internal communications tool built at Tiny Speck, the company Stewart Butterfield founded, during development of the online game Glitch. The company launched a preview release of Slack to the public in 2013 and went on to raise large venture rounds from investors including Accel and Andreessen Horowitz.

Freshworks

Freshworks is a cloud-based SaaS company that makes products for sales and service organizations. Freshworks offers a suite of products, including Freshdesk, Freshservice, Freshcaller, Freshchat, Freshteam, and Freshsales. Freshworks was founded in 2010 (originally as Freshdesk) by Girish Mathrubootham and Shan Krishnasamy.

The company has raised more than $205 million in funding from investors including Sequoia Capital, Accel Partners, and Sands Capital. Freshworks has over 8,000 customers in more than 50 countries, including Adidas, Tesla, and Adobe.

Freshworks was previously named Freshdesk, after its online help desk software. The company changed its name to Freshworks in 2017. Freshworks has more than 500 employees, with offices in San Francisco, Chennai, Sydney, and Hyderabad.

ServiceNow

ServiceNow is an enterprise SaaS company that provides cloud-based IT service management and IT operations management software. The company’s products include ServiceNow Service Automation, ServiceNow Config Automation, ServiceNow IT Operations, and ServiceNow IT Automation. ServiceNow was founded in 2004 by Fred Luddy.

The company has raised $1.37 billion from investors including Southeastern Asset Management, Blackrock, Capital Group Companies, and GIC. ServiceNow was the first cloud-based enterprise IT operations management software and IT service management platform that could be accessed from any device.

The company also offers a hybrid cloud hosting model and a multi-tenant model. ServiceNow has over 6,400 customers, including Avis, Hewlett Packard Enterprise, HP Inc., Hitachi, Intel, Qualcomm, and VMware.

Atlassian

Atlassian is an Australian software company that provides collaboration tools for software developers and project managers. It offers a suite of software products including Jira (project management), Confluence (knowledge management), BitBucket (source code management), HipChat (team communication), and Stride (workplace chat). The company’s products are used by software teams to track issues, assign work, and collaborate.

Atlassian was founded in 2002 by Mike Cannon-Brookes and Scott Farquhar. The company has raised $3.3 billion from investors including Accel Partners, BlackRock, Google Capital, Kleiner Perkins, and TPG. Atlassian’s products are used by more than 100,000 organizations, including 84% of the Fortune 100 and 75% of the Fortune 500, among them Adobe, Amazon, Cisco, eBay, NASA, Netflix, Spotify, Tesla, and Etsy.

Dropbox

Dropbox is a file hosting service that enables users to share and store files online. It is a cloud computing service that allows users to store files online and access them from a web browser or mobile device. Dropbox was founded in 2007 by Drew Houston and Arash Ferdowsi.

The company is valued at $10 billion. Dropbox has raised $439 million from investors including Sequoia Capital, Accel Partners, Technology Crossover Ventures, and Dragoneer Investment Group. Dropbox has over 500 million users and is headquartered in San Francisco, California.

Continue Reading:

How to make a career as a SaaS Developer?

5 Top Cloud Service Providers

]]>
https://networkinterview.com/software-as-a-service-saas-companies/feed/ 0 18641
CASB vs Proxy: Understand the difference https://networkinterview.com/casb-vs-proxy-understand-the-difference/ https://networkinterview.com/casb-vs-proxy-understand-the-difference/#respond Wed, 26 Oct 2022 11:05:30 +0000 https://networkinterview.com/?p=17453 A common question arises in the mind of IT focals related to Cloud access security broker (CASB) service products such as we already have a web proxy firewall then how is this different? Is CASB a replacement for web proxy/ firewall? As web proxies and firewalls have visibility into all traffic over the organization network which also includes traffic from cloud services. However, there is a significant difference between the two and CASB is not a replacement for web proxy or firewall. 

Today we look in more detail at the cloud access security broker (CASB) and the web proxy: the significant differences between the two products, their advantages, and more.

About CASB

A Cloud Access Security Broker (CASB) is a cloud-hosted solution placed between cloud service customers and cloud service providers to implement security, compliance and governance controls and security policies for cloud-based applications, helping extend an organization's security controls to the cloud. The four pillars of CASB are visibility, compliance, data security and threat protection. 

  • Visibility – into user activity across cloud applications: who uses which cloud application, from which departments, locations and devices
  • Compliance – identify sensitive data in the cloud and enforce DLP policies to ensure data compliance objectives are met
  • Data security – implements data security such as encryption, tokenization and access control, inclusive of information rights management
  • Threat protection – detect and respond to malicious threats, privileged user threats and account compromise 

Features of CASB

  • Identification of malware attacks and prevention of malware entering the organization's network
  • User authentication – checks credentials and ensures access is granted only to appropriate resources
  • Web application firewalls (WAF) designed to prevent breaches at the application level instead of the network level 
  • Data loss prevention to ensure users cannot transmit the organization's confidential information or intellectual property outside its boundaries
  • Provides detailed, independent risk assessments for each cloud service 
  • Enforces risk-based policies 
  • Controls user access based on context 
  • Applies machine learning to detect threats 

Use cases for CASB

  • Discovery of cloud application and risk rating assignment
  • Adaptive access control
  • Data loss prevention
  • Behaviour analytics for users and entities
  • Threat protection
  • Client facing encryption
  • Pre-cloud encryption and tokenization
  • Bring your own key management
  • Monitoring and log management
  • Cloud security posture management 

How does CASB work?

CASB works by securing data flows to and from cloud environments: it implements the organization's security policies, protects against cyber attacks, prevents malware, and provides data security with encryption, making data streams non-interceptable by hackers. It uses auto-discovery to locate the cloud applications in use and to identify high-risk applications, users and key risk factors. 

CASB can be deployed in forward or reverse proxy mode to enforce inline controls; however, the similarities with a web proxy stop there. CASB is focused on deep visibility and granular controls over cloud usage. It can also be deployed in API mode to scan data at rest in cloud services and enforce policies across cloud application data. 
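As a rough illustration of the auto-discovery step, the sketch below matches domains from proxy-log lines against a small, entirely hypothetical cloud-service registry and flags services above a risk threshold. The registry entries, log format, and risk scores are all invented for the example; real CASB registries cover tens of thousands of services with far richer attributes:

```python
# Hypothetical cloud-service registry: domain -> (service name, risk score 1-10).
REGISTRY = {
    "sharefast.example": ("ShareFast file sharing", 9),
    "crm.example":       ("ExampleCRM", 3),
    "social.example":    ("ExampleSocial", 6),
}

def discover(log_lines, risk_threshold=7):
    """Auto-discovery sketch: match proxy-log domains against the
    registry and flag services at or above the risk threshold."""
    findings = {}
    for line in log_lines:
        user, domain = line.split()          # assumed log format: "<user> <domain>"
        name, risk = REGISTRY.get(domain, (domain, None))
        if risk is not None:                 # ignore domains not in the registry
            findings.setdefault(name, {"risk": risk, "users": set()})
            findings[name]["users"].add(user)
    return {name: f for name, f in findings.items() if f["risk"] >= risk_threshold}

logs = ["alice sharefast.example", "bob crm.example", "carol sharefast.example"]
high_risk = discover(logs)
print(sorted(high_risk))                                 # ['ShareFast file sharing']
print(sorted(high_risk["ShareFast file sharing"]["users"]))  # ['alice', 'carol']
```

This is also where CASB differs from a plain proxy: instead of a binary allow/block per URL category, the registry attaches a service identity and risk rating that policies can act on per user.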

About Proxy 

Web proxies offer broad protection against network threats but only limited visibility into cloud usage; without integration with a CASB, they track cloud access over corporate networks only. Some customers use network security solutions to terminate SSL and inspect content for malware. Proxies and firewalls bucket cloud services into high-level categories which usually do not reflect the underlying function of the service, such as CRM, file sharing or social media. Usually, web proxies redirect blocked URL requests to an alternate web page hosting a notification that the URL is blocked (covering millions of illicit sites containing pornography, drugs, gambling, etc.).

Firewalls can be configured to block traffic from a specific IP address in the same manner, but they lack detailed, up-to-date cloud registries of cloud service URLs and IP addresses to extend this access-control functionality to cloud services. Since cloud service providers routinely introduce new URLs and IPs that are not blocked, this leads to "proxy leakage", in which employees may access websites that IT does not want them to visit. CASB works as a complementary technology to web proxies and can leverage existing network infrastructure to gain visibility into cloud usage. 

Web proxies capture data about cloud usage occurring over the network but cannot differentiate between cloud usage and internet usage. 

Comparison Table: CASB vs Proxy

Below table summarizes the difference between the two:

Function: Log Collection

  • CASB – detects which users are using which cloud services by ingesting log files
  • Proxy – captures cloud usage over the network but cannot differentiate between internet usage and cloud usage

Function: Packet Capture

  • CASB – ingests part of the traffic from the existing network solution and gains visibility into the data in the packet capture
  • Proxy – usually inspects web traffic, blocks URLs based on policy, and redirects the user to a web page indicating the URL is blocked for reasons such as gambling, illegitimate content, etc.

Function: Access

  • CASB – access via browser, mobile applications, desktop apps and sync clients
  • Proxy – browser only

Function: Use Cases

  • CASB – quarantine sensitive data and malware; encrypt sensitive data at rest and in real time; remove public shares of sensitive data; govern managed devices on or off the network
  • Proxy – stop malware

Function: Products

  • CASB – McAfee MVISION, Microsoft Defender, Symantec CloudSOC, etc.
  • Proxy – Smartproxy, Bright Data, Oxylabs, etc.

Download the comparison Table: CASB vs Proxy

Continue Reading:

CSPM vs CASB: Detailed Comparison

Top 13 CASB Solutions

]]>
https://networkinterview.com/casb-vs-proxy-understand-the-difference/feed/ 0 17453
Top 5 Type 1 Hypervisors in Market https://networkinterview.com/top-5-type-1-hypervisors-in-market/ https://networkinterview.com/top-5-type-1-hypervisors-in-market/#respond Fri, 19 Aug 2022 06:40:39 +0000 https://networkinterview.com/?p=12846 Types of Hypervisors

A Virtual Machine Monitor (VMM), also called a hypervisor, is a technology that separates software (the computer operating system) from hardware. With a hypervisor, a host computer can support and accommodate many virtual machines by sharing its processing power and memory. Hypervisors can also be created for mobile devices. The basic idea is to maximize the use of computer resources like CPU cycles, memory, and network bandwidth.

Hypervisors allow every guest (virtual machine) to access the host computer's CPU and memory; they also limit the portion of resources each VM can use, so that the other VMs can run on the same system easily.

Talking about the types, hypervisors are distinguished into two:

Type 1 Hypervisors: Native

Also known as "bare-metal" hypervisors, these sit between the guest operating systems and the hardware, running directly on the hardware of the host computer and managing the guest virtual machines. A Type 1 hypervisor interacts directly with the host's memory and CPU. This direct access makes it an efficient choice and also increases security, as there is nothing between the hypervisor and the CPU that could be compromised. However, it does need a separate machine to manage the host hardware and the different VMs.

Type 2 Hypervisors: Hosted

This kind of hypervisor runs as just another piece of software on the computer; it must be installed on top of an existing operating system. It differs from native hypervisors in performance. A Type 2 hypervisor provides a better connection between the host operating system and the guest virtual machine, allowing users to open and exit the VM as required and to access host OS folders and files from within the VM. There is a potential security risk, because the guest OS can be manipulated if the host OS gets compromised.

 

List of Top 5 Type 1 Hypervisors in Market

VMware vSphere/ ESXi:

Donning the leader's hat is VMware, whose product, vSphere/ESXi, is available in five commercial versions and one free version. Earlier, the free version was called "Free ESXi", which is loaded directly on the server. It provides features like svMotion, vMotion and centralized management. The free version supports up to 32 GB RAM per server. There are also some low-cost offerings, making it affordable for small-scale infrastructures.

Microsoft Hyper-V:

Released by Microsoft, Hyper-V is available both commercially and free of cost. There are four commercial editions:

  • Foundation
  • Essentials
  • Standard
  • Datacenter Hyper-V

It provides features like storage migration, VM replica, dynamic memory and many more. Along with vSphere and XenServer, Hyper-V falls in the top 3 range of type 1 hypervisors.

Related – Hyper-V vs VMware

Citrix XenServer:

XenServer is a commercial solution provided by Citrix, available in four editions. XenSource, Inc., the creator of Xen, was purchased by Citrix in 2007, and the open source Xen projects are now available at Xen.org. Features offered include power management, memory optimization, monitoring and alerting, conversion tools, live storage migration etc. XenServer started as an open source project, and Citrix has since branded its proprietary solutions, namely XenDesktop and XenApp, with the Xen name.

Red Hat Enterprise Virtualization (RHEV):

With features like live migration, image management, templating, power saving and cluster maintenance, RHEV is a commercial Type 1 hypervisor. Built on the Kernel-based Virtual Machine (KVM), it benefits users as an easy to set up, use and manage alternative. An open source hypervisor, Red Hat Enterprise Virtualization is designed to work with almost anything and is also tested on many hardware platforms and servers. RHEV is an affordable solution, as the total cost of owning it is low while performance is outstanding.

KVM (Kernel-based Virtual Machine):

A Linux-based, open source Type 1 hypervisor. KVM can run on Linux operating systems like SUSE, Ubuntu and Red Hat Enterprise Linux; it also supports non-Linux guests such as Windows and Solaris. With KVM, Linux turns into a hypervisor that enables the host computer to run and support several virtual machines or guests. Every guest machine runs as a regular Linux process, with virtual hardware such as a graphics adapter, memory, network card, CPUs and disks.

This is how virtual servers, groups and users can be managed and monitored from a unified dashboard.

Also refer: Type-1 vs Type-2 Hypervisors

Hypervisor in Cloud Computing
https://networkinterview.com/hypervisor-in-cloud-computing/ (Thu, 18 Aug 2022)

For a long time, applications were assigned dedicated physical servers and resources like CPU, memory and storage. Growth and demand in the ever-changing IT environment called for a cost-effective and energy-saving solution. Virtualization technology gained acceptance in the IT world, and hypervisors henceforth gained global preference. Hypervisor technology is not that young; in fact, it was introduced by IBM in the 1960s for its mainframe computers. Since then, hypervisor technology has developed considerably, such that a single mainframe can handle hundreds or even thousands of VMs.

The hypervisor is the key ingredient of virtualization: it is responsible for sharing physical hardware resources among different applications and minimizing application dependencies on the physical machine. In its simplest form, the hypervisor is specialized firmware or software, or both, installed on a single piece of hardware that allows you to host several virtual machines; it permits the physical hardware to be shared across several VMs. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each VM is known as a guest machine. The hypervisor allows the physical host machine to run various guest machines. It allocates resources such as memory, storage, network bandwidth, and CPU cycles. In fact, the hypervisor treats these resources as a pool which can be reallocated between existing guests or to new VMs.
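The pooling idea can be sketched in a few lines of Python. This is an illustrative model only, not a real hypervisor API; `ResourcePool` and its methods are invented names.

```python
# Toy model of a hypervisor's resource pool: capacity freed by one guest
# can be reallocated to existing guests or handed to a new VM.
class ResourcePool:
    def __init__(self, total_mem_gb):
        self.total = total_mem_gb
        self.allocations = {}  # vm_name -> GB currently assigned

    @property
    def free(self):
        return self.total - sum(self.allocations.values())

    def allocate(self, vm, mem_gb):
        if mem_gb > self.free:
            raise MemoryError(f"only {self.free} GB free")
        self.allocations[vm] = mem_gb

    def release(self, vm):
        self.allocations.pop(vm, None)

host = ResourcePool(total_mem_gb=64)
host.allocate("vm1", 32)
host.allocate("vm2", 24)
host.release("vm1")        # capacity returns to the pool...
host.allocate("vm3", 40)   # ...and can be handed to a new guest
print(host.free)           # 0
```

The `MemoryError` branch mirrors the fact that a hypervisor cannot hand out more physical capacity than the pool holds (ignoring overcommit techniques for simplicity).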

Types of Hypervisor in Cloud Computing

  • Type I Hypervisor
  • Type II Hypervisor

Related – Type-1 vs Type-2 Hypervisors

Type I Hypervisor in Cloud Computing

Type I is the bare-metal hypervisor that is deployed directly over the host system's hardware without any underlying OS or software; it usually doesn't require the installation of software ahead of time and can be installed straight onto the hardware. This type of hypervisor tends to be powerful and requires a great deal of expertise to operate well. Type I hypervisors are more complex and have certain hardware requirements to run adequately. Because of this, they are mostly chosen for IT operations and data center computing. Type 1 hypervisor vendors are:

  • Microsoft Hyper-V hypervisor
  • Oracle VM
  • VMware ESXi
  • Citrix XenServer.

Type II Hypervisor in Cloud Computing

Type II is a hosted hypervisor that runs as a software layer within a physical operating system. The hypervisor runs as a second layer over the hardware, while the guest OS runs as a third layer. Type II hypervisors are not as efficient at handling complex virtual tasks; they are suited to basic development, testing and emulation purposes. If a security flaw is found in the host OS, it can potentially compromise all of the virtual machines running on it. That is why Type II hypervisors are generally not used for data center computing; they are designed for end-user systems where security is less of a concern. For instance, developers could use a Type II hypervisor to launch virtual machines in order to test a software product before its release. Type II hypervisor vendors are:

  • Parallels Desktop
  • Windows Virtual PC
  • VMware Workstation Pro/VMware Fusion
  • Oracle VM
  • Virtual Box
  • VMware Player.

Advantages of Hypervisor in Cloud Computing

  • Though virtual machines operate on the same physical hardware, they are separated from each other. This means that if one virtual machine undergoes a crash, error, or malware attack, it doesn't affect the other virtual machines.
  • Another benefit is that virtual machines are mobile and portable, as they don't depend on the underlying hardware. Since they are not linked to physical hardware, switching between local or remote virtualized servers is a lot easier than with traditional applications.
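The portability point can be illustrated with a small sketch: because a VM is just a definition plus state rather than a fixed piece of hardware, "moving" it reduces to picking another host with enough free capacity. The host names, capacity figures and `pick_target` helper below are all invented for illustration.

```python
# Hypothetical inventory of virtualized hosts and their free memory.
hosts = {
    "host-a": {"free_mem_gb": 8},
    "host-b": {"free_mem_gb": 48},
}

def pick_target(vm_mem_gb, hosts):
    """Pick the host with the least free memory that can still fit the VM
    (a simple best-fit placement); return None if nothing fits."""
    candidates = [h for h, c in hosts.items() if c["free_mem_gb"] >= vm_mem_gb]
    return min(candidates, key=lambda h: hosts[h]["free_mem_gb"]) if candidates else None

print(pick_target(16, hosts))  # host-b
```

A VM tied to physical hardware would have no such option; decoupling from the hardware is what makes this placement decision possible at all.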

 

Summary

Virtualization brings a merger of multiple resources, which tends to reduce costs and improve manageability. In addition, a hypervisor can manage increased workloads: when a specific hardware node gets overloaded, you can easily move its virtual machines onto other physical nodes. Virtualization also delivers benefits for security, debugging and support. Note, however, that a hypervisor is a natural target for hackers, because by design it controls all the hardware resources while managing all the virtual machines residing on it.

Related – Top 5 Type-1 Hypervisors in Market


Hyper V vs VMware: Detailed Comparison
https://networkinterview.com/hyper-v-vs-vmware-detailed-comparison/ (Wed, 17 Aug 2022)

In computing, virtualization refers to the act of creating a virtual version of something, such as virtual hardware platforms, storage devices, and network resources. At a very broad level, there are three styles of server virtualization:

  • Full virtualization
  • Para-virtualization
  • OS-level virtualization.

A hypervisor, also referred to as a virtual machine monitor, is software that creates and runs virtual machines (VMs). A hypervisor permits one host computer to support multiple guest VMs by virtually sharing its resources, such as memory and processing power.

Generally, there are two kinds of hypervisors. Type-1 hypervisors, also known as "bare metal", run directly on the host's hardware. Type-2 hypervisors, also known as "hosted", run as a software layer on top of an operating system, like other computer programs.

 What is Hyper V?

Hyper-V functions by running each VM in its own isolated space while leveraging the same hardware. Each virtual machine in this scenario may have its own operating system, independent of the other VMs. In fact, such logical partitioning through virtualization helps keep issues like crashes local to one VM, while the other workloads continue to run independently.

Related – Hypervisor in cloud computing

Editions of Hyper V are as below:

  • Windows Server Data center
  • Windows Server Standards
  • Windows Server Essentials

Supported OS by Hyper-V:

  • CentOS
  • Red Hat Enterprise Linux
  • Debian
  • Oracle Linux
  • SUSE
  • Ubuntu
  • FreeBSD

What Is VMware?

VMware is a company that provides virtualization platforms. VMware was founded in 1998 in Palo Alto, California. Its first virtualization product was VMware Workstation; in 2001, VMware GSX Server and VMware ESX Server were introduced to the market. Notably, the majority of VMware's virtualization software programs are for business use.

VMware vSphere is a server virtualization platform created by VMware. Basically, vSphere encompasses a collection of virtualization products that include the ESXi hypervisor, the vSphere Client, VMware Workstation, vCenter, and others.

Editions of VMware are:

  • VMware vSphere Standard
  • VMware vSphere Enterprise Plus
  • VMware vSphere Operations Management Enterprise Plus
  • VMware vSphere Platinum

Below are the Supported OS by VMware:

  • Oracle Unbreakable Enterprise Kernel Release 3 Quarterly Update 3
  • Asianux 4 SP4
  • Solaris 11.2
  • Ubuntu 12.04.5
  • Ubuntu 14.04.1
  • Oracle Linux 7
  • FreeBSD 9.3
  • OS X 10.10.

Hyper V vs VMware:

Licensing support of Hyper V and VMware:

  • Physical CPU support available in Hyper V but limited in VMware.
  • OSE license support is available in both.
  • Windows server VM license support is available per host.
  • Antivirus and malware protection is supported by both.
  • A web-based management console support is available in both.

Storage Capabilities of Hyper V and VMware:

  • ISCSI/FC support available in both.
  • Network file system support available in both.
  • Virtual fiber channel support available in both.
  • 3rd Party multipathing is available in both.
  • Storage tiering and virtualization are available in both.

Network Capabilities of Hyper V and VMware:

  • IPsec task offload is available in Hyper V but not in VMware.
  • Virtual receive side scaling is available in both.
  • SR-IOV with live migration is supported in Hyper-V but not in VMware.
  • Dynamic Virtual Machine queue is available in both.

Technical feature comparison (Hyper-V on Windows Server 2016 vs Vmware vSphere 6.7)

  • System Logical CPU
    • Hyper-V on Win Server 2016 supports 512
    • Vmware vSphere 6.7 supports 768.
  • System Physical RAM
    • 24 TB for Hyper-V
    • 16 TB for Vmware vSphere.
  • Virtual CPUs and VM per host
    • 2048 and 1024 respectively for Hyper-V
    • 4096 and 1024 respectively for Vmware vSphere
  • Virtual CPUs per VM
    • In case of Hyper-V (Win server 2016)
      • 240 for Gen2 VMs
      • 64 for Gen1 VMs
      • 320 for host OS
    • In case of Vmware vSphere 6.7
      • 128
  • Memory per VM
    • For Hyper-V, its 12 TB for Gen2 VM and 1 TB for Gen1 VM
    • In case of vSphere, it is 6128 GB
  • Maximum Virtual Disk Size
    • For Hyper-V, its 64 TB (VHDX) and 2040 GB (VHD)
    • In case of vSphere, it is 62 TB
  • Number of Virtual SCSI disks
    • Both support 256 SCSI disks
  • Maximum number of VMs per cluster
    • Both support 8000 VMs per cluster
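The limits listed above can also be encoded as data and compared programmatically. The figures below are simply the article's own numbers; the dictionary layout and `winner` helper are an illustrative sketch, not an official capability matrix.

```python
# Per-platform limits from the comparison above
# (Hyper-V on Windows Server 2016 vs VMware vSphere 6.7).
limits = {
    "hyper-v-2016": {"logical_cpus": 512, "ram_tb": 24, "vcpus_per_host": 2048,
                     "vms_per_host": 1024, "vms_per_cluster": 8000},
    "vsphere-6.7":  {"logical_cpus": 768, "ram_tb": 16, "vcpus_per_host": 4096,
                     "vms_per_host": 1024, "vms_per_cluster": 8000},
}

def winner(metric):
    """Return the platform with the higher value for a metric, or 'tie'."""
    a = limits["hyper-v-2016"][metric]
    b = limits["vsphere-6.7"][metric]
    if a == b:
        return "tie"
    return "hyper-v-2016" if a > b else "vsphere-6.7"

print(winner("ram_tb"))           # hyper-v-2016
print(winner("vcpus_per_host"))   # vsphere-6.7
print(winner("vms_per_cluster"))  # tie
```

As the queries show, neither platform dominates on every axis, which is why the choice usually comes down to workload profile rather than raw maximums.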

Some PROS and CONS of Hyper V:

PROS OF HYPER V

  • Minimal device driver management.
  • A wide range of compatible devices.
  • New server roles are easy to install.
  • High resilience to corrupt external code.
  • Shorter initialization time.
  • Zero downtime to perform maintenance or apply security updates.
  • Readily scalable services.

CONS OF HYPER V

  • A crash of the primary OS will crash all VMs.
  • An OS must be installed in order for the hypervisor layer to operate.
  • Frequent OS and security updates translate into more overhead.
  • Lack of support for service templates.

Some PROS and CONS of VMware:

PROS OF VMware

  • No OS is required for controlling the management components.
  • No additional patches required for Controlling Layer components.
  • Vendor support is good.
  • Out-of-the-box governance feature set.
  • Available AWS applications.

CONS OF VMware

  • Vendor support is unavailable for issues with incompatible hardware.
  • Trial software is missing some functionality.
  • Steep learning curve.
  • Complex device drivers will slow the initialization time.
  • Corrupt external code may slow initialization or hang a server.

Summary

Hyper-V and VMware are both extremely powerful hypervisors on which you can run your enterprise data center production workloads. Each has various characteristics that make it unique, and these characteristics serve as the basis on which many decide to go with one hypervisor or the other for running their enterprise data centers.

Type-1 vs Type-2 Hypervisors
https://networkinterview.com/type-1-vs-type-2-hypervisors/ (Sun, 14 Aug 2022)

Server virtualization is perhaps the hottest topic in the IT world today. It has been around for a long time, and its popularity continues to grow, particularly in enterprise environments.

What makes virtualization possible with hypervisors?

Server virtualization permits different operating systems to run separate applications on one server while still using the same physical resources. These virtual machines make it possible for system and network administrators to have a dedicated machine for each service they need to run.

Not only does this reduce the number of physical servers required, it also saves time when trying to pinpoint issues.

What are Hypervisors?

Hypervisors are a pivotal piece of software that makes virtualization possible. Fundamentally, hypervisors create a virtualization layer that separates the CPU/processors, RAM and other physical resources from the virtual machines you create.

The machine we install a hypervisor on is known as the host machine, as opposed to the guest virtual machines that run on top of it.

Hypervisors emulate the available resources so that guest machines can use them. No matter what operating system you boot up in a virtual machine, it will believe that real physical hardware is at its disposal.

From a VM's viewpoint, there is no difference between the physical and the virtualized environment. Guest machines do not know that the hypervisor created them in a virtual domain or that they share the available processing power. Since virtual machines run simultaneously with the software that powers them, they are completely dependent on its stable operation. There can be two types of hypervisor:

  • Type 1 Hypervisor.
  • Type 2 Hypervisor.

Type-1 vs Type-2 Hypervisors: Difference Table

  • Terminology: Type 1 runs directly on system hardware; Type 2 runs on a host operating system.
  • Booting: Type 1 boots before the operating system; Type 2 cannot boot until the operating system is up and running.
  • Other names: Type 1 is called a native, bare-metal or embedded hypervisor; Type 2 is called a host OS hypervisor.
  • Efficiency: Type 1 is comparatively better; Type 2 is inferior.
  • Support: Type 1 supports hardware virtualization; Type 2 supports operating system virtualization.
  • Availability: Type 1 is comparatively better; Type 2 is inferior.
  • Performance: Type 1 is high; Type 2 is low.
  • Security: Type 1 is comparatively better; Type 2 is inferior.
  • Usage: Type 1 is used in data centers; Type 2 is used by lab and IT professionals.
  • Examples: Type 1 includes VMware ESXi and Citrix XenServer; Type 2 includes KVM, VirtualBox, VMware Server and Microsoft Virtual PC.


Type 1 Hypervisor

It runs directly on the host's hardware to manage the guest operating systems. It has direct access to the hardware and does not require any base server operating system. It offers better performance, scalability and stability, but is supported by limited hardware. A Type 1 hypervisor is also known by other names, i.e. bare-metal hypervisor or native hypervisor. Based on these features, Type 1 hypervisors are suitable for use in data center environments.

Type 2 Hypervisor

This type of hypervisor is hosted on a main operating system: basically, it is software installed on an OS, and the hypervisor asks the OS to make hardware calls. It has better compatibility with hardware, but its increased overhead affects performance.

Type-1 vs Type-2 Hypervisors: Which one to pick?

Picking the correct kind of hypervisor depends entirely on your individual needs.

On a macro level, 2 key considerations need to be taken into account while selecting the Hypervisor to be used.

  • The first is the "SIZE" of the virtual environment where the hypervisor needs to run. For individual use and small organizations, you can go for one of the Type 2 hypervisors if budget is not an issue. Things get complicated in large business environments, where you need to be more prudent and take a judicious call.
  • The second consideration is "COST". Even though Type 1 hypervisors are the best approach, cost may play a big role in hypervisor selection. This is where you have to pay additional attention, since cost might be per server, per CPU or even per core.

Many vendors offer different products and layers of licenses to suit any organization. You may want to make a list of requirements, for example how many VMs you need, the maximum permitted resources per VM, and specific functionalities, and then check which of these products fits best. Note that a trial period can be helpful when deciding which hypervisor to pick.
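The two considerations above can be reduced to a toy decision helper. The thresholds and return labels here are illustrative assumptions, not vendor guidance.

```python
# Toy decision helper reflecting the "size" and "cost" considerations above.
def suggest_hypervisor(num_vms, budget_constrained):
    """Small environment with budget to spare -> Type 2 is fine;
    anything larger or cost-sensitive -> Type 1 (illustrative rule only)."""
    if num_vms <= 5 and not budget_constrained:
        return "type-2"
    return "type-1"

print(suggest_hypervisor(3, budget_constrained=False))   # type-2
print(suggest_hypervisor(200, budget_constrained=True))  # type-1
```

A real evaluation would also weigh the per-VM resource caps, required features and licensing model listed above; this sketch only captures the coarse first cut.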

Also refer: Top 5 Type-1 Hypervisors in Market


XEN vs KVM: Type 1 Hypervisors
https://networkinterview.com/xen-vs-kvm/ (Fri, 12 Aug 2022)

When talking about virtualization, hypervisor technology is a well-known concept. A hypervisor is used by anyone who wishes to consolidate server space or run a number of independent machines on a host server. Hypervisors add a layer of virtualization in which data is controlled and managed centrally.

With the role of hypervisors expanding, storage hypervisors are now being used to form centralized storage pools. Along with storage, networks can be created, managed, manipulated or destroyed without physically touching the network devices; this is how network virtualization is broadening.

Drilling deeper into the types, XenServer and KVM (Kernel-based Virtual Machine) are two Type 1 hypervisors in the market. Since both are Type 1 hypervisors, the question is: which one is better? Let's dive into the comparison:

XENSERVER:

An open source hypervisor recognized for its almost-native performance, the Xen hypervisor runs directly on the host's hardware. Xen allows the creation, implementation and management of a number of virtual machines on a single host computer. Xen was created by XenSource, which was bought by Citrix Systems in 2007. Commercial versions of Xen also exist.

Being a Type 1 hypervisor, Xen can be installed directly on the computer's hardware without any requirement for a host operating system. Xen supports Windows and Linux operating systems, and it can be used on IA-32, ARM and x86 processors. Xen software is customizable because its unique structure brings virtualization everywhere.

XenServer is the first choice for the industry's hyperscale clouds, like Alibaba, Amazon, Oracle Cloud and IBM SoftLayer, as it is easy to use with a flexible structure. An approach of detection and multilayered protection makes Xen a secure option, and Xen's architecture has advanced security features, making it a leading choice in security-sensitive environments.

This hypervisor partitions the memory and provides controlled execution for each virtual machine, since the processing environment is commonly shared. The solution is available as a 64-bit hypervisor platform. Each virtual machine runs its own guest operating system and applications, thereby splitting the resources.

KVM (Kernel-based Virtual Machine):

KVM is a Linux-native technology that converts Linux itself into a hypervisor, enabling the host computer to operate a number of independent virtual systems, also known as virtual machines or guests. Initially announced in 2006, it was merged into the mainline Linux kernel the following year. This open source virtualization solution benefits from up-to-date Linux features without the need for any additional specialized tooling.

Whatever its kind, a hypervisor requires operating-system-level components to operate virtual machines, such as an input/output (I/O) stack, memory manager, process scheduler, security manager, network stack and device drivers. KVM contains all of these components because it is part of the Linux kernel. Linux is converted into a native hypervisor through KVM, and every virtual machine is executed as a regular Linux process, organized by the Linux scheduler, with dedicated virtual hardware such as memory, disks, CPUs, a network card and a graphics adapter.

To cut the explanation of its working short: you just need to install a version of Linux released after 2007 on x86 hardware capable of supporting virtualization. Then two modules, a host kernel module and a processor-specific module, need to be loaded, along with emulators and helpful drivers for running other systems.

Putting KVM into action on Linux-based technologies like Red Hat Enterprise Linux extends KVM's capabilities, such as swapping resources, splitting shared libraries and more.

KVM is embedded in Linux, so whatever Linux contains, KVM has too. KVM is preferred for features like hardware support, security, storage, memory management, performance and scalability, live migration, scheduling and resource control, higher prioritization and lower latency.

To answer the question raised above: Xen is better than KVM in terms of virtual storage support, high availability, enhanced security, virtual network support, power management, fault tolerance, real-time support, and virtual CPU scalability. KVM is technologically stellar and has some high-quality uses, but it is still considered inferior to Xen in these respects.


Continue Reading:

Hypervisor in Cloud Computing

Type-1 vs Type-2 Hypervisors

Top 5 Type-1 Hypervisors

Xen vs ESXi: Type 1 Hypervisors
https://networkinterview.com/xen-vs-esxi/ (Thu, 11 Aug 2022)

Xen and ESXi are both Type 1 hypervisors, purpose-built for deploying and serving virtual machines. The two share many similarities, but our aim here is to contrast them on the features where they differ.

XenServer

XenServer is a platform utilized by virtualization administrators for hosting, organizing and handling VMs. It is also used to share hardware resources such as storage, CPU, networking and memory with the VMs. The key objective of XenServer is to facilitate supervision of the virtualization architecture, and VM templates are an important feature of this.

VMware ESXi

VMware ESXi Server is computer virtualization software from VMware, Inc. ESXi is a smaller-footprint edition that retains the advanced qualities of the VMware ESX Server. ESXi is applied within the VMware architecture and is utilized to organize central supervision for business desktops and data center implementations.

Difference between Xen and VMware ESXi

Pricing:

Comparing the two with regard to cost, it turns out that the two servers follow distinct commercial models. XenServer is open source and entirely free of charge, and licensing is also offered per server.

ESXi, on the other hand, requires a proprietary license and is licensed per processor. Both products nevertheless have a considerable client base dotted all over the world, regardless of their pricing structure.

Host Server Limits

ESXi supports around 120 virtual machines on a single host, with up to 2048 GB of RAM per host and a total of 2048 virtual disks per host.

XenServer supports a total of 75 virtual machines per host, with 1024 GB of RAM per host and 512 virtual disks per host.

Both systems support 160 logical CPUs per host, with the ability to have up to 2048 virtual CPUs per host; XenServer, however, does not state a virtual CPU limit at the host level.
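These per-host limits can be turned into a simple capacity check. The numbers below are the article's own figures, and the `LIMITS` table and `fits` helper are an illustrative sketch, not official sizing guidance.

```python
# Per-host limits quoted in the comparison above (illustrative only).
LIMITS = {
    "esxi":      {"vms": 120, "ram_gb": 2048, "vdisks": 2048},
    "xenserver": {"vms": 75,  "ram_gb": 1024, "vdisks": 512},
}

def fits(platform, vms, ram_gb, vdisks):
    """Check whether a planned host stays within the platform's quoted limits."""
    lim = LIMITS[platform]
    return vms <= lim["vms"] and ram_gb <= lim["ram_gb"] and vdisks <= lim["vdisks"]

print(fits("xenserver", vms=80, ram_gb=512, vdisks=100))  # False: exceeds 75 VMs
print(fits("esxi",      vms=80, ram_gb=512, vdisks=100))  # True
```

A plan of 80 VMs on one host fits within the ESXi figures but exceeds the XenServer VM ceiling, which is exactly the kind of difference these limits surface.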

Supported Guest Operating Systems

A further aspect differentiating the two is the supported guest operating systems. Without question, the Achilles heel of VMware ESXi is the limited range of guest operating systems the program supports.

By contrast, XenServer supports a long list of guest operating systems, including Novell Linux Desktop, Red Hat Linux, and Red Hat Enterprise Linux WS, ES and AS. Other supported operating systems include Windows 95 and 98, Windows NT Workstation, Windows 2000 Professional and Server (Web and Standard editions), Windows Me, Windows NT Terminal Server, Windows Server 2003 Enterprise and Professional editions, and Windows XP Home.

Technical Help

Whether it is XenServer or ESXi, both support a spectrum of technical support media such as white papers, instructional videos, telephone, forums, a knowledge base, system upgrades, online self-service etc.

They also differ in this area: VMware does not offer technical support through email, brochures, blogs or a user manual; however, it does have a well-staffed support desk and provides a remote training option.

Citrix XenServer, on the contrary, offers technical help via user manuals, email, blogs and brochures, but does not offer support via a help desk or remote training.

Technical Specifications

The bare-metal (Type 1) hypervisor model is implemented by both the Xen and ESXi software, and both support x64 and x86 architectures. Although both support several sorts of virtualization, such as paravirtualization and hardware-assisted virtualization, only VMware ESXi delivers full virtualization.

Both the Xen and ESXi software systems support many storage options. In terms of storage, the main difference between Xen and ESXi is that VMware only supports SSD and FCoE for swap and does not support iSCSI, SATA, SAS, USB or NFS, all of which are supported by Citrix XenServer. Both support DAS, NAS and FC storage, whereas neither supports eSATA or RDM. Both systems have also gained plenty of users in the healthcare sector, financial services, government and education.

Comparison Table: Xen vs VMware


Conclusion

While comparing Xen and ESXi, we have drawn the line of difference in terms of sharing capabilities, pricing, host server limits, technical help and specifications, as well as supported storage options. When it comes to recognition and market presence, VMware vSphere/ESXi appears to be victorious over its counterpart. This information about both products can help you easily find out which of the two fits best within your professional route.

Continue Reading:

XEN vs KVM : Type 1 Hypervisors

Type-1 vs Type-2 Hypervisors

Palo Alto Prisma SD WAN: CloudGenix SD WAN
https://networkinterview.com/palo-alto-prisma-sd-wan-cloudgenix/ (Sun, 03 Jul 2022)

Introduction to Palo Alto Prisma SD WAN

More and more organizations are moving towards hosting and running business applications in public clouds such as Microsoft Azure, Amazon AWS, Google Cloud etc. Hosting and running applications in the public cloud has its own networking implications for remote and branch offices.

Organizations are looking for a complete solution to build hybrid networks consisting of MPLS private WANs and commodity internet connections, supporting cloud application adoption, remote office high availability, application performance, and end-to-end visibility. SD-WAN solutions help achieve a robust network with visibility into the performance and availability of networks and applications.

Today we look in more detail at Palo Alto Prisma SD-WAN (CloudGenix): its architecture, features and advantages, along with some quick facts.

 

About Palo Alto Prisma SDWAN (CloudGenix) 

CloudGenix was acquired by Palo Alto in the year 2020. CloudGenix SD-WAN is delivered by CloudGenix Instant-On Network (ION) devices, which allow policies to be enforced based on business intent, enable dynamic path selection, and provide visibility into the performance of applications and networks.

A secure application fabric, AppFabric, is established among all ION devices, creating a virtual private network (VPN) over every WAN link. Policies aligned to business requirements are defined, specifying compliance, performance and security rules for applications and sites. ION devices automatically choose the best WAN path for an application based on business policy and real-time analysis of application performance metrics and WAN links.

 

Prisma SD WAN Architecture

Once CloudGenix is deployed at a site, the ION devices automatically establish a VPN to the data centers over every internet circuit. The ION devices also establish VPNs over private WAN circuits which share a common service provider. We can define application policies for performance, security and compliance aligned to organization objectives. All aspects of configuration, management and monitoring of CloudGenix ION hardware and software devices, across multiple tenants, are managed via a single interface: the CloudGenix management portal. (Refer to the diagram above.)

 

Deployment Operating Modes

CloudGenix SD WAN can be deployed in one of the two operating modes – analytics mode and control mode.

  • In analytics mode, an ION device is installed at a new or existing branch site, placed between the WAN edge router and the LAN switch. The ION device monitors traffic and collects analytics, which are reported to the CloudGenix portal. While sites are in analytics mode, the ION devices do not apply policies or make path-selection decisions for applications.
  • In control mode, an ION device is installed at a new or existing branch site. You can either replace the WAN edge router with an ION device or place the ION device between the WAN edge router and the LAN switch. Branch ION devices dynamically build secure fabric VPN connections to all data center sites across all WAN paths. Sites in control mode select the best path from the available physical and secure fabric links based on the applied network policies and enforce security policy for applications.

 

CloudGenix SD-WAN supports 32 public and 32 private circuit categories, which can be customized to match organization’s requirements.

Features of Palo Alto Prisma 

  • Centralized control – the CloudGenix central controller software runs in the cloud, as a virtual machine in the local network, or on a CloudGenix x86 box in the data center. It is the central point for all control, management, policy configuration, analytics and reporting for the SD-WAN fabric.
  • Traffic forwarding – the ION elements of CloudGenix are flow forwarders, analogous to WAN routers, which handle traffic forwarding at multi-gigabit rates.
  • Secure application fabric – the ION fabric is an overlay mesh of ION elements. The ION fabric contains one or more virtual networks, and all traffic flowing through the fabric is encrypted with AES-256 IPsec to secure the SD-WAN.
  • Application fingerprinting – CloudGenix uses the sessions flowing between endpoints to identify applications rather than signatures or deep packet inspection, techniques which have become less reliable due to the increasing number of encrypted application payloads.
  • Sophisticated path selection – there are no routing protocols. A complex decision-making process takes into consideration real-world throughput, link capacity and the performance needs of the application.
  • CloudGenix policy manager – simple in design, it expresses complex business goals in a simplified way.
  • Traffic analytics – shows specific application flow information and offers performance and compliance reports.
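The policy-driven path selection described above can be illustrated with a simplified sketch. This is a toy model with hypothetical metric names and thresholds, not CloudGenix's actual algorithm:

```python
# Illustrative sketch of policy-based WAN path selection (hypothetical
# metrics and thresholds; not CloudGenix's actual algorithm).

def select_path(links, policy):
    """Pick the best WAN link that satisfies the application policy."""
    # Keep only links that meet the policy's loss/latency requirements.
    candidates = [
        l for l in links
        if l["loss_pct"] <= policy["max_loss_pct"]
        and l["latency_ms"] <= policy["max_latency_ms"]
    ]
    if not candidates:
        return None  # no compliant path; a real system would fall back
    # Among compliant links, prefer the one with the most spare capacity.
    return max(candidates, key=lambda l: l["capacity_mbps"] - l["used_mbps"])

links = [
    {"name": "mpls",  "loss_pct": 0.1, "latency_ms": 20, "capacity_mbps": 100, "used_mbps": 80},
    {"name": "inet1", "loss_pct": 0.5, "latency_ms": 35, "capacity_mbps": 500, "used_mbps": 100},
    {"name": "inet2", "loss_pct": 3.0, "latency_ms": 90, "capacity_mbps": 500, "used_mbps": 50},
]
voip_policy = {"max_loss_pct": 1.0, "max_latency_ms": 50}
best = select_path(links, voip_policy)
print(best["name"])  # inet1: compliant with the policy and most spare capacity
```

A real SD-WAN fabric would feed this decision with continuous real-time measurements per link rather than static numbers.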

Quick Facts!

As per a forecast by the research firm MarketsandMarkets, the SD-WAN market is expected to grow from $1.8 bn in 2020 to $8.4 bn by 2025.

With CloudGenix, Palo Alto acquired a player holding a 5% market share in 2020.

Continue Reading:

What is Multi Tenancy? Multi Tenancy Architecture

FortiGate SD-WAN Fundamentals

]]>
https://networkinterview.com/palo-alto-prisma-sd-wan-cloudgenix/feed/ 0 17915
What Is The Difference Between Cloud and VPS Hosting? https://networkinterview.com/difference-between-cloud-and-vps-hosting/ https://networkinterview.com/difference-between-cloud-and-vps-hosting/#respond Wed, 22 Jun 2022 09:07:38 +0000 https://networkinterview.com/?p=17831 In the hosting web sector, there is a lot of confusion regarding the technological differences between cloud hosting and VPS hosting. Most people misinterpret the fundamental distinction between the two due to the lack of understanding of the key distinctions between these web hosting systems.

With the introduction of virtual and remote operations, more established enterprises and even startups are choosing comparable options to host their sites. And since many reputable web hosting providers offer both VPS and cloud hosting, your decision might quickly become even more complex. 

Now, let’s dive into more details!

What Is Cloud Hosting?

Cloud hosting is by far the most advanced website (or app) hosting solution currently available. In a short amount of time, the technology has reached a phenomenal level of acceptance.

Cloud-hosted websites are available at all times and from any location. This is because each website's hosting resources are duplicated across all cloud servers in the cluster. For instance, if a cloud server is already at capacity, a request for the given site is immediately sent to an idle cloud server in the cluster.

In other words, the cloud operates web hosting services such as data storage, SSH, FTP, SFTP, and email services on several servers simultaneously.

Pros:

  • Scalability – If you quickly want extra resources or access to greater bandwidth, you may obtain it automatically.
  • Pricing Flexibility – Using cloud hosting, you only pay for what you use. This differs from VPS hosting, where you pay for specified server space even if you don’t utilize it.
  • Redundancy and Quick Deployment – You may duplicate your site in various environments to decrease downtime even further.
  • Reliability – When one of the physical servers in the group dies, your site will not go down since the other servers will take over to display it.

Cons:

  • Security Is Not Assured – Because you are still sharing resources, what occurs to other sites utilizing the same hosting may harm your site. Because your website is also on the internet, it is still exposed to hackers – and cloud hosting doesn’t really change that. As a result, securing your site and hosting remains crucial.
  • The Learning Curve – Cloud hosting is not a simple alternative to implement, and it can be tricky even for technically savvy engineers. It is not impossible, but it is also not suitable for novices. However, there are fully managed hosting services that allow easy setup and maintenance of WordPress sites.

What Is VPS Hosting?

VPS hosting, commonly known as “Private Cloud”, is built on servers created using virtualization technology.

The design employs numerous individually dedicated slots on the same physical server. Each slot can be assigned to a particular resource. Nonetheless, the system operates on a time-shared or resource-shared basis.

One of the major drawbacks preventing the VPS hosting industry from progressing is its vulnerability: in any crash scenario a specific slot or resource can go down, rendering the app or website in that slot inaccessible, with no redundant online availability unless and until the problem is resolved.

However, VPS hosting has certain advantages in that it closes the gap between dedicated and shared hosting options.

Pros:

  • You Have More Assigned Resources – Because you are renting a greater percentage of the server, you have access to a much larger portion of the server’s assets than it is with shared hosting, which is yet another form of web hosting.
  • Complete Control Over All of The Configurations – In most circumstances, a VPS will provide you with far more control. Root access is usually present, as is the ability to examine all backup data and access all settings. If you don’t have access to anything, your hosting company is more likely to make a modification for you.
  • It Is Relatively Scalable – You can usually increase your package if you find you require additional resources, which you can do without having to relocate your site to a completely new server. However, because a VPS has limited resources, there will come a moment when there will be no more place on the server. You will be compelled to move if this occurs.

Cons:

  • Security Is Not Strictly Guaranteed – Because you’re sharing a server, anything other individuals on the server do may damage your site, especially if they are hacked. 
  • You Are Still Using The Same Server – Speaking of sharing servers, even if you have a VPS, you are still sharing a physical server with other users. As a result, you may not have access to all of the tools you require.
  • It Can Become Technical – Whether you pick a managed or unmanaged VPS, they may necessitate certain technical expertise and abilities. However, some providers are user-friendly, so take note of this in your hunt for hosting if this is essential to you.
  • Reliability – When deciding between a VPS and a cloud host, a VPS is less dependable than cloud hosting since if the physical server fails, every VPS on that server will fail.
  • Scalability – It’s also worth noting that a VPS does not grow as well as cloud hosting. This is because, as previously said, VPSs have limited resources, which implies traffic surges might be a problem.

Final Thought: What’s Best For Your Website – Cloud or VPS?

The main distinction between the two server environments is size. If you want to get started quickly and don’t care about scalability, a VPS server might be a wonderful place to start. However, if you want a flexible hosting solution as well as a high degree of site speed and storage, a cloud hosting setup is worth investigating.

Cloud hosting gives you access to an almost limitless amount of server resources. Cloud hosting might be the ideal choice for businesses with fluctuating traffic levels or websites that are rapidly expanding. Cloud hosting provides excellent server power and complete flexibility in terms of resource utilization and price.

Ultimately, VPS hosting is an excellent choice for folks who wish to establish a website but have outgrown the limitations of their shared server environment. A VPS is strong and does enhance speed, making it an excellent alternative for any organization that requires (and values) the stability of a dependable server. 

Continue Reading:

What is VPS (Virtual Private Server)?

Public vs Private vs Hybrid vs Community Clouds

]]>
https://networkinterview.com/difference-between-cloud-and-vps-hosting/feed/ 0 17831
Top 10 White Box Networking Vendors https://networkinterview.com/top-10-white-box-networking-vendors/ https://networkinterview.com/top-10-white-box-networking-vendors/#respond Thu, 09 Jun 2022 17:07:18 +0000 https://networkinterview.com/?p=17789 SDN technology detaches dependency on binding both hardware and software. Starting with the SDN network 10 years back, a number of start-ups started to develop open networking systems and white box switches for data centres.

In this article we will look at some white box networking vendors which made their place in the top 10 and changed the networking landscape drastically. Understand their strengths and features.

List of  Top 10 White Box Networking Vendors

Big Switch Networks

Big Switch Networks was founded in 2010. It was known for its Floodlight SDN controller, which was open sourced in 2012. Big Cloud Fabric is a major product from Big Switch. It offers virtual private cloud (VPC) based logical networking, delivering network automation and visibility for both on-premises and multi-cloud workloads. It provides consistent network management and capabilities for operations management. Some of the key customers of Big Switch Networks are Verizon, VMware, Visa and T-Mobile.

Cumulus

Cumulus was founded in 2010. Cumulus is a pioneer in open network operating systems for white box switches. It allows the automation, customization and scaling of data center networks. It supports 100+ hardware platforms, including the industry-standard Open Compute Platform (OCP) and ONIE (Open Network Install Environment). Cumulus was the first white box software provider to add support for Minipack, Facebook's latest OCP-compliant reference design.

Pica8 –

Pica8 was founded in 2009. The key product offering from Pica8 is PICOS, an open networking software. PICOS offers tightly integrated/coupled control planes, giving network operators non-disruptive control of their enterprise networks, with deep and dynamic traffic monitoring and attack mitigation in real time. PICOS provides functionality called CrossFlow, which tightly couples the L2/L3 control plane and the classic 'SDN' control plane for real-time network operations. Some of its key customers are General Electric, StratEdge and Edgecore.

Plexxi –

Plexxi was founded in 2010. Its key product is the Plexxi switch, which enables customers to build public and private clouds. Post-acquisition, this product is being integrated with the HPE Composable Fabric ecosystem. Plexxi supports composable infrastructure, which can be composed or recomposed as needed based on network loads. Some of its customers are Arrow Electronics, Safeguard Scientifics and Jefferies Financial Group.

Pluribus –

Pluribus was founded in 2010. Its key product offering is the Netvisor ONE operating system, a Layer 2/3 switching OS optimized to meet the requirements of distributed enterprise and service provider networks. It supports a peer-to-peer architecture, eliminating the need for an SDN controller and simplifying the overall network architecture. It supports the Open Compute Platform (OCP) and Open Network Install Environment (ONIE) standards. Some of its key customers are Cloudflare, Steelcase and Tibco.

NoviFlow

It was founded in 2012. Its key product is the NoviWare network operating system software, which can be deployed on network switches, WAN IP/MPLS routers, network appliances, and high-bandwidth forwarding planes. It is compliant with OpenFlow protocol versions 1.3, 1.4 and 1.5. Its key customers are Fortinet and Lumina Networks.

Arrcus –

Arrcus manufactures white box switches running ArcOS on latest-generation switches that leverage the Jericho2 chipset. It supports multi-tenancy at scale and open integration across multiple ODM vendors. It offers a hardware-agnostic platform which is largely deployed in data centre fabrics, large-scale peering/edge deployments, and the cloud.

Kaloom –

Kaloom offers a software solution for white boxes aimed at hyperscale and distributed data centres. It supports integrated routing and switching and enables developers to write new code to add new features and services. It has a low-latency multi-datacenter fabric, self-forming and self-discovery capabilities, zero-touch provisioning of virtual networks, and automated software upgrades.

Snaproute –

Snaproute was founded in 2015. It contributed the open-source software FlexSwitch to the Open Compute Project (OCP). FlexSwitch runs on industry-standard white box switches, provides all management and networking functionality to simplify the networking stack, and automates network provisioning.

iPinfusion –

IP Infusion offers OcNOS, the industry's first full-featured network OS for white boxes. It supports advanced capabilities such as extensive switching and routing protocol support, MPLS and SDN. It offers hybrid, centralized or distributed network support; scalable, modular, high-performance networking; and a robust data plane.

Continue Reading:

What is White box Switching?

Basics of SDN and Open Flow Network Architecture

]]>
https://networkinterview.com/top-10-white-box-networking-vendors/feed/ 0 17789
What is White box Switching? https://networkinterview.com/what-is-white-box-switching/ https://networkinterview.com/what-is-white-box-switching/#respond Wed, 08 Jun 2022 17:49:21 +0000 https://networkinterview.com/?p=17782 Network switch is the main component of telecommunication networks especially in the case of fiber optics networks. In the traditional switch choice of a specific vendor switch you are bound by software provided by the vendor. The switch market is monopolized for several years with vendors such as Cisco, HP, Juniper etc. This trend was recently broken by a new type of switch called white box switch. 

Today we look in more detail at white box switching (or white box networking), its features, advantages, use cases etc.

 

White Box Switching 

Due to the complexity of traditional networks, white box switching (or white box networking) has evolved as an ideal solution for hyper-scalable data centres. Some of the limitations of traditional networks are:

  • Complicated protocols are needed to grow networks
  • Manual configuration has limitations
  • VLANs do not scale and cannot be reused across data centres
  • STP is unstable, comes with proprietary extensions which make it incompatible, and blocks a lot of ports
  • Troubleshooting issues is a pain, as skilled personnel are required to assist
  • The network becomes the blocker for the business most of the time

 

White box switching is a new networking model in which hyper-scale data centres adopt commoditized networking by using white box switches, gaining investment protection through the avoidance of vendor lock-in.

  • A white box switch is independent of the hardware and can use software from any provider. This allows organizations to build and set up flexible network designs for their networks and switches.
  • White box switches are most popular in SDN (Software Defined Networking). They can be programmed to create routing tables and route connections using the OpenFlow protocol or another southbound API in SDN environments.
  • White box switches are low in cost compared to traditional switches and popular for both large data centres and small networks.
  • White box switches have high port density.
  • A white box switch comes preloaded with minimal software or may be sold as a bare metal device. These switches can be customized to meet organization-specific business and networking requirements.
  • White box switches support a wide range of open-source management tools such as OpenStack, Puppet and Chef.
  • Most white box switches adopt an 'open' Linux-based NOS (network operating system) which is designed to be separated or segregated from the underlying hardware, letting the user change the hardware box or the NOS at will. They rely on an operating system, which might come preinstalled, to integrate Layer 2 / Layer 3 topology and support basic networking features.
  • These are commodity/cheaper switch boxes built on merchant silicon by Taiwanese manufacturers known as original design manufacturers (ODMs), such as Accton, Quanta and Alpha. Small start-up companies like Cumulus Networks, Big Switch Networks and Pica8 buy bare metal switches from Taiwanese ODMs, load their operating systems and sell these switches as white boxes. For example, Cumulus uses a Linux operating system, Pica8 uses PicOS and Big Switch uses Switch Light OS.
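The match/action programming model that OpenFlow-style southbound APIs expose can be made concrete with a minimal sketch. This is a toy abstraction in Python, not a real OpenFlow implementation:

```python
# Toy match/action flow table, illustrating the kind of forwarding state an
# SDN controller programs into a white box switch (not real OpenFlow).

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (priority, match_dict, action)

    def add_rule(self, priority, match, action):
        self.rules.append((priority, match, action))
        # Higher-priority rules are checked first, as in OpenFlow tables.
        self.rules.sort(key=lambda r: r[0], reverse=True)

    def forward(self, packet):
        for _prio, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # table-miss behaviour

table = FlowTable()
table.add_rule(10, {"dst_ip": "10.0.0.5"}, "output:port2")
table.add_rule(100, {"dst_ip": "10.0.0.5", "tcp_port": 22}, "send-to-controller")

print(table.forward({"dst_ip": "10.0.0.5", "tcp_port": 80}))  # output:port2
print(table.forward({"dst_ip": "10.0.0.5", "tcp_port": 22}))  # send-to-controller
print(table.forward({"dst_ip": "10.0.0.9"}))                  # drop
```

In a real deployment, the controller would install such rules into the switch ASIC's tables over the southbound protocol instead of evaluating them in software.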

 

Advantages of White Box Switch 

  • They are simple to operate
  • They are flexible and independent of the underlying hardware
  • A limited feature set, but high performance is guaranteed
  • Fabric architecture enables multiple switches to act as a single unit
  • Segmentation and security through network virtualization
  • API-driven network automation

 

Use cases of White Box Switching

  • Big companies like Facebook, Amazon and Google need massive deployments of switches in their large data centres. Their port count requirements are quite high, and white box switches fit perfectly due to their high port density
  • Web-scale companies looking for flexibility and openness in their switch platform, where white box switches fit swiftly

Continue Reading:

OpenFlow vs Netconf: Which is the Best Protocol to Program?

Basics of SDN and Open Flow Network Architecture

]]>
https://networkinterview.com/what-is-white-box-switching/feed/ 0 17782
What is Multi Tenancy? Multi Tenancy Architecture https://networkinterview.com/what-is-multi-tenancy/ https://networkinterview.com/what-is-multi-tenancy/#respond Fri, 03 Jun 2022 11:08:42 +0000 https://networkinterview.com/?p=15487 Introduction to Multi Tenancy

The advent of cloud computing has brought many new models of delivery and services for infrastructure, applications, networks and so on. Cloud computing provides a cost-effective way to use computing resources, share applications, share databases, share network resources etc.

In the software industry, customers can have diverse requirements, and if software products were implemented and delivered separately according to every customer's needs, implementation timelines would be longer and maintenance would become cumbersome. Multi tenancy comes as a solution to the problems faced by software providers in satisfying diverse customer needs.

One such term which is very popular in cloud computing environment is ‘Multi Tenancy’.

What is Multi Tenancy?

Multi tenancy is an architecture which supports a single instance of a software application being used by multiple users, called 'tenants'. Cloud computing has broadened multi-tenancy architecture through new service models using technologies like virtualization and remote access services. The Software as a Service (SaaS) model runs one single instance of a software application on a single instance of a database and provides access via the web to multiple users. Each tenant's data is separated and remains invisible to other users (tenants).

The users or tenants have some flexibility to modify the look and feel of the application's presentation interface, but they cannot customize the source code of the application. The tenants operate in a shared environment; they are physically integrated, but a logical separation ensures each tenant's data is stored separately on common storage in the cloud.

Multi tenancy is supported on both public and private cloud architectures. Multi-tenancy architecture gives a better return on investment (ROI), as it brings down costs and aids quicker maintenance and updates.

Multi Tenancy Architecture

Multi-tenancy architecture can be divided into three categories based on its complexity and costs. We will learn about them in more detail in the following sections.

A single, shared database schema – this is the simplest form of multi-tenancy model and has relatively low costs due to the use of shared resources. This form uses a single application and a single database instance to host tenants and store data. Scaling is easier, but operational costs are high. This is also known as the 'shared everything' model: all resources are shared equally between all tenants. There are some inherent risks in this model: implementing it can be challenging for all resources at all levels, business risks can arise from data sharing between users, backup and restore functionality has limited scope, and load balancing and distribution are complex.

A single database, multiple schemas – this model uses a single database with multiple schemas. It hosts multiple tenants, with an individual schema for each tenant, identified by a unique ID assigned to each customer. It involves higher costs due to the additional overhead of managing the individual schemas. This kind of architecture is useful when tenants are spread geographically and data from different tenants must be treated differently as per geographic regulations.

Multi-tenant architecture with multiple databases – this is a complex model, as it involves high management and maintenance costs. It is highly suited to multi-tenant SaaS where multiple schemas and restrictions are implemented at the database level to provide more closed interactions.
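The three models above differ mainly in where a tenant's data lives. A small, hypothetical resolver (all database and schema names here are illustrative, not from any real product) makes the contrast explicit:

```python
# Illustrative tenant-to-storage resolver contrasting the three multi-tenancy
# models. All names (databases, schemas) are hypothetical.

def storage_location(model, tenant_id):
    if model == "shared-schema":
        # One database, one schema; rows are tagged with the tenant ID.
        return ("app_db", "public", f"rows WHERE tenant_id = {tenant_id!r}")
    if model == "shared-db-separate-schema":
        # One database; each tenant gets its own schema.
        return ("app_db", f"tenant_{tenant_id}", "all rows in schema")
    if model == "separate-databases":
        # Each tenant gets its own database instance.
        return (f"db_tenant_{tenant_id}", "public", "all rows in database")
    raise ValueError(f"unknown model: {model}")

print(storage_location("shared-schema", "acme"))
# ('app_db', 'public', "rows WHERE tenant_id = 'acme'")
```

The further down the list you go, the stronger the isolation and the higher the per-tenant management cost, which mirrors the trade-offs described above.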

Multi tenancy features in many cloud computing models including SaaS, PaaS, containers, serverless computing and public cloud computing.

Some of the popular examples of multi-tenancy applications are Gmail, Google Drive, Yahoo etc.

Multi Tenancy Pros and Cons

Pros:

  • Less expensive
  • Pricing model based on pay-for-what-you-need offerings
  • Updates are pushed through host providers
  • Hardware is managed by the provider
  • Single-system maintenance and administration
  • Easily scalable architecture
  • Variety of licensing models and membership options
  • Personalized and customized reporting
  • Custom end-user settings and preferences
  • Same software version across all customers
  • The application can be tailored to specific business needs without expensive, time-consuming and risky custom development

Cons:

  • Limited interoperability between providers 
  • More complex architecture
  • Response time can be slow due to noisy neighbours, as other tenants sharing the same CPU may consume a lot of cycles
  • Downtime issues depending on provider
  • Data security concerns

 

Quick Facts!

The global market for multi-tenant LMS (SaaS) is expected to grow to $30 billion by 2026.

Continue Reading:

Telco Cloud Architecture

Public vs Private vs Hybrid vs Community Clouds

]]>
https://networkinterview.com/what-is-multi-tenancy/feed/ 0 15487
Approaches of Multi Tenancy in Cloud https://networkinterview.com/approaches-of-multi-tenancy-in-cloud/ https://networkinterview.com/approaches-of-multi-tenancy-in-cloud/#respond Mon, 30 May 2022 11:16:14 +0000 https://networkinterview.com/?p=15593 Introduction to Multi Tenancy in Cloud

Multi tenancy offers customers, organizations and consumers who share infrastructure and databases advantages on the price and performance front. Service offerings available through the cloud involve customers (or 'tenants') getting a piece of the cloud which contains the resources required to run their businesses.

Tenants may share hardware on which their virtual machines or servers run, or they may share database tables where data is stored for multiple tenants. However, security measures are mandatory to ensure tenants don’t pose risk to each other in terms of data loss, misuse, or privacy violations. 

Cloud providers offer services such as SaaS, IaaS and PaaS. A tenant, literally, is a person who occupies land or property in exchange for rent. In a multi-tenancy architecture, there are multiple occupants (tenants) who share software, but each consumer is unaware of the other tenants. It allows one instance of an application to serve multiple customers by resource sharing, while each customer is serviced independently. The security of data is of prime concern, as data is the key factor between customer and provider. The data architecture in a multi-tenancy model should be robust, secure, efficient and cost-effective.

Impact of Multi tenancy 

Master data support – master data is shared instead of being replicated for every tenant, for cost reduction. Data is shared across organizations and may be modified by tenants as per their requirements, and the DBMS should support those changes privately for each tenant.

Application modifications and extensions – these apply to the database schema and its master data, and are required to tailor the application to tenant-specific requirements: a limited form of customization, the ability to modify the database schema of the application, or building an extension to be sold as an add-on to the base application as per tenant business needs.

Schema evolution and master data – application upgrades should be offered as self-service to contain operational costs, and should require a minimal amount of interaction between the service provider and tenants.

Approaches of Multi Tenancy in Cloud

There are three approaches to managing multi-tenant data:

  • Separate database process, shared system
  • Shared database process, separate tables
  • Shared tables 

1. Separate database process, shared system 

The system is shared, and each tenant gets its own database process. Computing resources and application code are shared among all tenants on a server, while each tenant has its own set of data which remains logically isolated from the other tenants' data.

Separate database process, shared system: Pros and Cons

PROS:

  • Easy to implement 
  • Can be customized for each customer and vendor can apply new releases or customization as per customer requirement

CONS:

  • Heavy on resources 
  • Every customer has unique configuration and vendor need to manage them all 
  • Expensive in terms of management, integration, updates, customer support etc 

2. Shared database process, separate tables 

Every tenant has its own tables, and multiple tenants share the same database process. Multiple tenants are housed in the same database, each tenant having its own tables, which are grouped into a schema specific to that tenant.

Shared database process, separate tables: Pros and Cons 

PROS:

  • Low licensing costs 
  • Low cost of maintenance 
  • Less memory consumption 
  • Smaller administration team 

CONS:

  • Complex management 
  • Collaboration and analytics are problematic 

3. Shared tables  

In this model the same database and the same tables are used to host multiple tenants' data. A table includes records from multiple tenants, stored in arbitrary order; each record is identified by a tenant ID column, linking it to its appropriate tenant.
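A minimal sketch of the shared-table model using SQLite (an illustrative schema: one table holds every tenant's rows, and every query must filter on the tenant ID column):

```python
import sqlite3

# Shared-table multi-tenancy sketch: one table holds all tenants' rows,
# and every query filters on tenant_id (illustrative schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT, qty INTEGER)")
db.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("acme", "widget", 5), ("acme", "gadget", 2), ("globex", "widget", 9)],
)

def orders_for(tenant_id):
    # Forgetting this WHERE clause is exactly the isolation risk of the
    # shared-table model, so the filter belongs in one shared helper.
    # The parameterized query also guards against SQL injection.
    rows = db.execute(
        "SELECT item, qty FROM orders WHERE tenant_id = ?", (tenant_id,)
    )
    return rows.fetchall()

print(orders_for("acme"))    # [('widget', 5), ('gadget', 2)]
print(orders_for("globex"))  # [('widget', 9)]
```

Centralizing the tenant filter in one place is a common mitigation for the isolation and injection concerns listed in the cons below.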

Shared table: Pros and Cons 

PROS:

  • Updates apply automatically to every tenant at the same time 
  • Application vendor can easily support any software capabilities which require a common query to be triggered across multiple tenants 
  • It is simple to manage as there is only one version of production at any point of time
  • Easier integration with back end infrastructure  

CONS:

  • Isolation issues
  • Security concerns and vulnerable to certain attacks like SQL injection etc.
  • Slowness issue 

There are economic considerations when choosing an approach: a shared approach with an optimized application requires a larger development effort than an application designed using an isolated approach (due to the complexity of the shared architecture), which results in higher initial costs; however, ongoing operational costs tend to be lower.

Continue Reading:

What is Multi Tenancy Architecture?

Public vs Private vs Hybrid vs Community Clouds

]]>
https://networkinterview.com/approaches-of-multi-tenancy-in-cloud/feed/ 0 15593
DevOps vs DevSecOps: Understand the difference https://networkinterview.com/devops-vs-devsecops/ https://networkinterview.com/devops-vs-devsecops/#respond Sun, 01 May 2022 14:35:35 +0000 https://networkinterview.com/?p=17570 Software development and operations teams are always striving to establish a consistent environment for development globally. The products are brought from hands of developers to customers however the existence of silos between development, QA and operations teams always create conflicting interests and it is doubled when requirements of security are also required to become an integral part of software development. 

Today we look in more detail at DevOps and DevSecOps strategy and principles, how they work, what their advantages are, etc.

 

About DevOps 

'DevOps' is a combination of two words, 'development' and 'operations'. It represents a set of principles, ideas, practices and tools which helps an organization increase its ability to deliver applications and services with improved efficiency and at a much faster pace than traditional methods of development. It is a software development strategy which aims to bridge the gap between development teams and IT operations teams. The IT operations team and the development team work in collaboration during the entire software development lifecycle to produce better, more reliable applications and products.

Advantages of DevOps 

  • Improved customer satisfaction and retention
  • Business efficiency improvement
  • Improved response times
  • Reduction in costs over time
  • Improved business agility 

About DevSecOps 

DevOps offers speed and quality in development and deployment, but it does not cater to the needs of security. As the focus on security has increased tremendously, DevSecOps has come into the picture. DevSecOps extends the DevOps strategy with security automation and implementation. It breaks the silos between development teams and security teams and makes development teams not only responsible for the performance of applications in production but also accountable for product security in production.

The goal is to focus on security requirements right from the beginning of the software development life cycle and provide built-in security practices throughout the integration pipeline. 

Developer responsibilities in DevSecOps 

  • Composition analysis in conjunction with security to choose safe third party and open-source tools
  • Static and dynamic analysis of code along with automated vulnerability scans and penetration tests
  • Automated tests alongside functional tests and check and verify against improper security configurations
  • Threat modelling adoption to understand how attackers think and operate 
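These practices often take shape in the CI pipeline configuration itself. The following is a hedged sketch only: the stage layout is illustrative, and the scanner commands (`sca-tool`, `sast-tool`) are hypothetical placeholders standing in for whatever composition-analysis and static-analysis tools a team actually adopts.

```yaml
# Illustrative CI pipeline with security gates between build and deploy
stages:
  - build
  - security
  - deploy

dependency_scan:                       # software composition analysis (SCA)
  stage: security
  script:
    - sca-tool scan --fail-on high     # hypothetical scanner CLI

static_analysis:                       # SAST on the application code
  stage: security
  script:
    - sast-tool analyze src/           # hypothetical scanner CLI

deploy:
  stage: deploy
  script:
    - ./deploy.sh
  when: on_success                     # deploy only if security gates pass
```

The key idea is that a failed security stage blocks the deploy stage, which is how "shift left" becomes enforceable rather than advisory.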

Comparison: DevOps vs DevSecOps

Below table summarizes the differences between the two:

Function

DevOps

DevSecOps

Definition DevOps is a combination or unification of ‘Development’ and ‘Operations’. It is a set of principles and strategies to increase an organization’s ability to deliver applications and services with improved efficiency and at a faster pace DevSecOps is a combination or unification of ‘Development’, ‘Security’ and ‘Operations’. It focuses on incorporating security practices in development
Methodology It is a software development methodology which aims to bridge the gap between the development and IT operations teams through their collaboration during the entire software development lifecycle It is a methodology integrated into the DevOps process to incorporate security at every stage of the software development process
Goal Break down organizational silos and adopt a culture where people work together by developing and automating a continuous delivery pipeline Move security activities throughout the software development lifecycle and provide built-in security practices throughout the continuous integration pipeline
Approach Based on a cultural philosophy which supports the agile movement in the context of a system-oriented approach It is about validating all building blocks and embedding security in the architecture design
Elements CI/CD are critical elements of DevOps; automation of the code development process lets teams change code more frequently and reliably Shift-left is the most critical element in DevSecOps, making security a core responsibility for everyone involved in development and identifying issues early and fixing them quickly
Tools Ansible, Docker, Puppet, Jenkins, Chef, Nagios, Kubernetes etc. Aqua Security, Codacy, Checkmarx, Prisma cloud, Threatmodeler etc.

Download the comparison table: DevOps vs DevSecOps

 

Continue Reading:

SecOps vs DevOps: Understand the difference

Cloud Engineer vs DevOps Engineer : Future of 21st Century

Top 10 DevOps Tools

]]>
https://networkinterview.com/devops-vs-devsecops/feed/ 0 17570
Serverless vs Terraform https://networkinterview.com/serverless-vs-terraform/ https://networkinterview.com/serverless-vs-terraform/#respond Fri, 29 Apr 2022 11:03:05 +0000 https://networkinterview.com/?p=17561 Infrastructure as Code is one of the greatest changes brought by DevOps in recent times. The complicated provisioning, deployment and release processes, in which cloud resources were created manually through complicated steps, are a thing of the past. Infrastructure as Code (IaC) allows you to automate provisioning of cloud resources in a consistent and standardized manner. IaC lets you declare your infrastructure using code or configuration files with instructions to create and provision all resources on cloud providers like AWS. There is a wide range of tools to deploy IaC. 

Today we look in more detail at two important terminologies, serverless computing and Terraform, to understand their key differences, benefits and the purposes they are adopted for.  

About Serverless Computing

The serverless model helps to provision and deploy serverless functions across different cloud providers. Serverless computing provides backend services on an as-used basis, and users can write and deploy code without worrying about the infrastructure underneath. Backend services are charged on the basis of actual computation usage. There is no reservation of any sort in terms of bandwidth or number of servers, as the service operates in auto-scale mode.

Though it is called serverless, physical servers are still there in the underlying infrastructure; developers are simply not exposed to them. Backend services consist of a server where application files reside and a database where user and business data are stored.

Serverless computing offers Function as a Service (FaaS), like Cloudflare Workers, which allows developers to execute small pieces of code on the network edge. With FaaS, developers can build a modular architecture and a scalable codebase without spending resources to maintain the underlying backend infrastructure. 
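A FaaS function is typically just a small stateless handler invoked once per event. The sketch below uses the AWS Lambda-style `handler(event, context)` signature; the event fields shown are hypothetical, and in production the platform, not the developer, invokes the handler.

```python
import json

def handler(event, context):
    """Stateless FaaS-style handler: receives an event, returns a response.

    The platform provisions, scales, and bills per invocation; the code
    keeps no server state between calls.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for testing; in production the platform calls handler()
if __name__ == "__main__":
    print(handler({"name": "FaaS"}, None))
```

Because the handler is a plain function, it can be unit-tested locally even though the production runtime is managed entirely by the provider.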

Benefits of Serverless Computing 

 

  • Very cost effective compared to traditional cloud providers of backend services, as users do not pay for unused or idle CPU time
  • Developers need not worry about scaling up their code, as the platform handles all application scaling demands
  • Simple functions can be created by developers in FaaS to perform a single independent function, such as a call to an API
  • Can significantly cut time to market, as code can be added and modified on a piecemeal basis

 

About Terraform 

Terraform is one of the most popular ‘Infrastructure as Code’ (IaC) tools, with a huge ecosystem of providers. It allows you to configure and provision infrastructure. Terraform is open-source infrastructure as code software which provides a consistent CLI workflow to manage a multitude of cloud services. It codifies cloud APIs into declarative configuration files and can be used with any cloud provider.
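As a minimal illustration of such a declarative configuration file, the sketch below declares a single virtual machine; the AMI ID, tag values and resource name are hypothetical, while the `provider`/`resource` block structure is standard HCL.

```hcl
# Pin the provider and region (values are illustrative)
provider "aws" {
  region = "us-east-1"
}

# Declare the desired state: one virtual machine instance
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # hypothetical image ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-web-server"
  }
}
```

Running `terraform init`, `terraform plan` and `terraform apply` would then reconcile the real infrastructure with this declared state.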

Benefits of Terraform

 

  • It supports orchestration, not just configuration management
  • Supports multiple providers such as AWS, Azure, GCP, DigitalOcean etc.
  • Provides immutable infrastructure to support smooth infrastructure changes
  • Easy-to-understand language, HCL (HashiCorp Configuration Language)
  • Supports portability to any other provider
  • Supports a client-only architecture, with no need for additional configuration management on a server

 

Comparison Table: Serverless vs Terraform

Below table summarizes the differences between the two:

Function

Serverless computing

Terraform

Scope Cross-platform support via service providers Covers most AWS resources and offers fast support for new AWS features
License Open source and enterprise It is an open-source project
Support Based on the subscription model chosen HashiCorp offers 24*7 support
Type It is a framework and stateless It is an orchestration tool and a stateful application
Language Java, Go, PowerShell, Node.js (JavaScript), C#, Python and Ruby It uses a declarative language, HCL
VM provisioning, network and storage management Offloads all backend management responsibility and operations tasks such as provisioning, scheduling, scaling etc. It offers comprehensive VM provisioning, network and storage management
Services AWS Lambda, Microsoft Azure Functions, Google Cloud Functions and IBM OpenWhisk Terraform is an open-source infrastructure as code software tool

Download the comparison table: Serverless vs Terraform

Continue Reading:

Serverless Computing vs Containers

Serverless Architecture vs Traditional Architecture

]]>
https://networkinterview.com/serverless-vs-terraform/feed/ 0 17561
Serverless Computing vs Containers https://networkinterview.com/serverless-computing-vs-containers/ https://networkinterview.com/serverless-computing-vs-containers/#respond Tue, 12 Apr 2022 15:03:32 +0000 https://networkinterview.com/?p=17491 Introduction

In the era of enterprise computing, organizations are seeking ways to develop, deploy and manage applications in a more efficient, fast and scalable manner. Traditionally, applications were tightly linked to the servers and operating systems which executed their code. Organizations spent a lot of time, effort and money allocating and managing these systems, and they want to minimize the resources spent maintaining computer infrastructure.

Virtual machines have dramatically increased server efficiency. Instead of deploying one server for each application, multiple applications can run on a single computer. Containers are the next level of server virtualization: they help address the limitations that come with virtual machines, such as reduced portability and inadequate resources. 

Today we look in more detail at two important terminologies, serverless computing and containers, to understand their key differences, benefits and the purposes they are adopted for.  

 

About Serverless Computing

Serverless computing is a method of providing backend services on a pay-per-use basis. Serverless providers allow users to write and deploy code without any hassle or worry about the underlying infrastructure.

Companies taking backend services from a provider are charged on the basis of their computation needs and do not have to reserve and pay for a fixed amount of bandwidth or number of servers; the service is auto-scaling in nature. 

 

About Containers

Containers hold both the application and all the elements the application requires to run properly, including system libraries, system settings and any other dependencies. A container needs only a host to run on in order to perform its function. 

Any kind of application can run in a container. A containerized application runs the same way as in the traditional server model irrespective of where it is hosted, and can be easily moved and deployed wherever needed, much like physical shipping containers, which are a standard size and can be shipped anywhere via a variety of means of transport regardless of what they contain.

Containers are a way to partition a machine or server into separate user-space environments, so that each environment runs only one application and does not interact with any other partitioned section of the system. Each container shares the machine kernel with other containers but runs as if it were alone on the system.
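The bundling described above is usually expressed as an image definition. For illustration, a minimal, hypothetical Dockerfile sketch for packaging a Python application with its dependencies (the file names `requirements.txt` and `app.py` are assumptions):

```dockerfile
# Base layer: OS userspace plus language runtime
FROM python:3.11-slim

WORKDIR /app

# Bake dependencies into the image so the container runs the same anywhere
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application itself
COPY app.py .

# One application per container
CMD ["python", "app.py"]
```

Because everything the application needs is captured in the image, the resulting container behaves the same on a laptop, a data-center server or a cloud host.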

Comparison Table: Serverless Computing vs Containers

Below table summarizes the difference table the two terminologies:

PARAMETER

SERVERLESS COMPUTING

CONTAINERS

Operations Serverless computing runs on servers, but it is the serverless vendor that provisions server space as required by the application Containers live on one machine at a time and use the operating system of that machine
Scalability Serverless architecture is based on a pay-as-you-use model, so you can acquire and release resources as per your requirements In a container-based architecture, the number of containers deployed is pre-determined by the developer
Costs There are no ongoing expenses in serverless computing because application code does not run unless explicitly called; developers are charged for server capacity based on actual usage Containers are constantly running, therefore cloud providers charge for server space even if the application is not being used
Maintenance A serverless architecture has no backend to manage; the vendor takes care of all management and software updates for the servers which run the code Containers are hosted on the cloud, but cloud providers do not update or maintain them; developers have to manage and update each container they deploy
Time of deployment Serverless applications are live as soon as the code is uploaded Containers take longer to set up initially, as it is necessary to configure system settings, libraries etc.; once configured, containers take a few seconds to deploy
Testing It is tough to test serverless applications, as the backend environment is hard to replicate locally Containers run the same no matter where they are deployed, so it is easy to test a container-based application before deploying it to the production environment

Download the comparison table: Serverless Computing vs Containers

Conclusion

Developers can choose between serverless architecture and containers depending on their requirements. Choosing serverless architecture lets developers release and iterate new applications faster without worrying about scalability needs. A container-based architecture gives developers more control over the environment the applications are hosted and run on, and is ideal for legacy applications; one can also opt for a hybrid architecture as per the need. 

Continue Reading:

Serverless Architecture vs Traditional Architecture

Top 10 Serverless Compute Providers

]]>
https://networkinterview.com/serverless-computing-vs-containers/feed/ 0 17491
Introduction to NSX Intelligence https://networkinterview.com/introduction-to-nsx-intelligence/ https://networkinterview.com/introduction-to-nsx-intelligence/#respond Mon, 21 Mar 2022 10:11:41 +0000 https://networkinterview.com/?p=17421 Data centre operations, whether in the cloud or on an on-premises network, are the backbone of the infrastructure that supports businesses, from application delivery to security. As workloads change, migrate and evolve, networking technologies need to match the pace. Ransomware, data loss and data theft are on the rise, and the traditional approach of perimeter security using firewalls at the edge has shown some serious vulnerabilities once breached.

In this article we will learn more about NSX Intelligence, VMware's zero-trust security offering, its features, use cases and so on.

About NSX Intelligence

Many organizations use a combination of security systems, such as intrusion detection or prevention systems or other traffic analysis engines, to monitor data transfer at key points. As network speeds rise, so does the cost of intrusion detection and prevention systems, and the resources required to monitor the traffic have also grown exponentially.

NSX offers an intelligent, analytical security policy engine which applies policies across all hosts in a distributed manner rather than taking a monolithic approach to intrusion detection. This eliminates choke points and enables all hosts to share the analytical load, ensuring all traffic is monitored without overloading any specific host or workload.

Microsegmentation in NSX

NSX Intelligence takes microsegmentation to the next level. It observes which VMs actually talk to each other, rather than just dictating rules about what they may communicate. NSX Intelligence scales horizontally rather than vertically, so it can scale without limits and without restricting production traffic.

Application delivery and Load balancing in NSX

The NSX Advanced Load Balancer comes from the acquisition of Avi Networks in 2019. It is a software-defined platform which provides multi-cloud load balancing, application acceleration and caching across bare-metal servers, virtual machines and containers.

Features of NSX Intelligence

  • Provides a user interface via a single management plane with NSX manager
  • Close to real time flow information for workloads
  • Correlates live or historic flows, user configurations and workload inventory
  • Ability to view past information on flows, user configurations and workload inventory
  • Automated Micro segmentation planning by recommendation on firewall rules, groups and services
  • Helps to visualize and gain insight into every flow across data center with stateful layer 7 inspection
  • Reduction in tool sprawl and improved collaboration between infrastructure and security teams
  • Single click deployment for virtual appliance
  • Elimination of overhead of duplication of packets
  • Inventories of all endpoints and traffic flows and consolidation of metadata and configuration data from NSX
  • Auto generation of rules to micro segment applications
  • Packet processing and workload analysis is distributed to each hypervisor

 

How NSX works?

The NSX Intelligence data platform receives streams from the NSX-T Manager and from the ESXi hosts which are prepared as NSX-T transport nodes. Flows are sent at 5-minute intervals, which means flow and guest information is distributed and optimized directly at the source, and an agent is optional. The NSX Intelligence appliance is deployed from the NSX-T Manager GUI and is managed and monitored from there.

 

Installation of NSX

NSX Intelligence requires NSX-T version 2.5. NSX Intelligence comes as a .tar file whose contents must be extracted and placed on a web server that the NSX Manager cluster can access.

The NSX Intelligence appliance must be deployed on an ESXi host managed by vCenter.

In NSX Manager, navigate to Plan & troubleshoot > Discover & take Action:

  1. Click Go To System, scroll down on the Appliances page, and click Add NSX Intelligence Appliance to start the appliance deployment wizard.
  2. Enter the URL for the OVF file and the appliance network configuration.
  3. Configure vSphere details for the virtual appliance.
  4. Configure appliance credentials at the final step.
  5. Click Install Appliance; deployment will take approximately 5-10 minutes.

Continue Reading:

What is VMware Horizon?

Hyper V vs VMware : Detailed Comparison

]]>
https://networkinterview.com/introduction-to-nsx-intelligence/feed/ 0 17421
Qualys Scanner: Vulnerability Management https://networkinterview.com/qualys-scanner-vulnerability-management/ https://networkinterview.com/qualys-scanner-vulnerability-management/#respond Mon, 21 Feb 2022 11:12:04 +0000 https://networkinterview.com/?p=17275 Vulnerability management is a critical component of risk management, as attackers are always looking for new vulnerabilities to exploit which may have gone unnoticed. Several vulnerability management tools are available in the market; continuous monitoring and identifying vulnerabilities well in advance help secure organizations in a proactive manner. 

Today we will learn about the vulnerability scanner ‘Qualys’, quite a popular and preferred choice among organizations for vulnerability management, its features, use cases etc.

About Qualys Scanner

Qualys offers vulnerability management as a Software as a Service product. Its vulnerability management and scanning products are deployed as software as a service (SaaS) or as preconfigured private cloud appliances by service providers. It scans the network perimeter, virtual machines and cloud services based on preconfigured or customized policies to identify vulnerabilities and prioritize fixing them. Virtual appliances are available for VMware, Hyper-V and Amazon Elastic Compute Cloud. 

Usability

  • The Qualys scanner solution lets you discover, assess and patch critical vulnerabilities in real time across hybrid, global IT landscapes. 
  • It helps to identify and categorize all known and unknown assets, detect and analyse vulnerabilities and misconfigurations, and prioritize automated remediation and patch deployment.
  • It enables auto-discovery, categorizes known and unknown assets, and creates automated workflows to manage them. Users can query assets and their attributes to gain more visibility into hardware, system configuration, applications, services, network information and much more. 
  • It automatically detects vulnerabilities and critical misconfigurations across the widest range of devices, operating systems and applications.
  • It provides real-time threat intelligence and a machine learning model to automatically prioritize vulnerabilities using indicators such as exploitable, actively attacked and high lateral movement.
  • It assigns business impact to each asset, flagging devices which run mission-critical applications, are public facing, are accessible over the internet, etc.
  • Policy-based automated patching helps to deploy superseding patches. 

Features of ‘Qualys Scanner’

  • Asset categorization and normalization
  • Creation of interactive network map to show perimeter and internal devices
  • Zero-day threat analysis and alerts
  • Remediation prioritization by assigning business impact to each asset
  • Continuous monitoring of the perimeter for unexpected changes 
  • Dynamic tagging of assets to automatically categorize hosts by attributes like network address, open ports, OS, software installed and vulnerabilities detected 
  • Tracking of vulnerabilities over time
  • Monitoring of certificates deployed throughout the network 
  • Automatic generation of remediation tickets whenever vulnerabilities are detected
  • Consolidated reports of hosts requiring patching
  • Integration with 3rd-party ticket systems
  • Scalable and extensible 

Continue Reading:

Nessus: Network Vulnerability Scanner

Observium – Network Management & Monitoring

]]>
https://networkinterview.com/qualys-scanner-vulnerability-management/feed/ 0 17275
A quick look into the Evolution of Cloud Computing https://networkinterview.com/evolution-of-cloud-computing/ https://networkinterview.com/evolution-of-cloud-computing/#respond Wed, 05 May 2021 15:08:26 +0000 https://networkinterview.com/?p=15480 Introduction to Evolution of Cloud Computing

Cloud computing plays an important role in everyone’s life. We cannot imagine a world without cloud computing techniques to handle big data. But we did not reach this level in just a few years. 

The evolution of cloud computing can be traced back to the 1950s. Are you surprised? Today in this article, you will witness the development of cloud computing in three distinct phases. 

Okay without further ado, let’s start the article. 

The 3 Phases: Evolution of Cloud Computing

Phase 1: Idea Phase

The idea Phase can consist of the technical development before the pre-internet era. Let’s see how it all started.

Distributed Computing:

Distributed computing is where multiple independent systems are connected so that they appear to be a single entity to users. It has the basic qualities of cloud computing, like scalability, concurrency, continuous availability, etc. However, the main limitation was the need for all the computers to be in the same geographical location. 

Mainframe Computing:

Mainframes are still in use today. A mainframe is a large computer with high processing power and storage capacity. Here, instead of multiple systems, one single supercomputer is presented as multiple systems at the user end. Though it offered greater coverage and computing power, we were still hindered by geographical location. 

This led to the next marvelous invention, cluster computing. However, even today mainframes help us with various online transactions and research data processing. 

 

Phase 2 – Pre-Cloud Phase

This phase can also be called the Internet Phase. In the 1960s, ARPANET created the beginnings of the internet by connecting four systems in different geographical locations in America. Thus the internet era started, and with it the future of cloud technologies. 

Cluster Computing

Here each computer is connected through a high-bandwidth network. The cost involved in the mainframe was thus removed, and clusters became the best alternative to it. Though the cost problem was solved, geographical restrictions still prevailed, as the internet was in its early stage. 

Grid Computing

In the 1990s, grid computing technologies came into action. Here, the different systems are located in different parts of the world and connected through the internet, with each system owned by a different organization. It solved the main problem of distance, but new problems emerged. 

For example, as the distance between nodes increased, high bandwidth was needed, but that was not possible in all cases. However, grid computing laid a strong foundation for today’s cloud computing, and many people call cloud computing the successor of grid computing. 

Virtualization

Virtualization refers to the technique of partitioning hardware into different virtual layers. This allows users to run multiple instances simultaneously on the same hardware. It was introduced 40 years ago and is now the core idea used by major cloud computing providers like Amazon, Google, etc. 

Phase 3 – Cloud Phase

After the integration of grid computing with virtualization techniques, we needed only better hardware resources and the internet. These became available in the meantime, and cloud development began. 

 

Web 2.0

Web 2.0 plays an important role in cloud computing. Popular Web 2.0 innovations like Google Maps, Twitter, Facebook and other social media needed high storage, and this led to the development of cloud storage techniques. It also created a new field of study, client-server management. 

Service Orientation

The next step was the introduction of SaaS. As many people started using personal computers and smartphones, the scale of the technology and its user base rapidly increased in the 2000s, and this created a new business opportunity called Software as a Service, or SaaS for short. 

Cloud Computing

Then many SaaS companies started to provide cloud storage, infrastructure and management services, an offering termed utility computing. From this, today’s cloud computing was brought into action. There are many types of cloud, like hybrid and simple models, etc. 

 

Conclusion

No matter what, we have just started. There is a big future for cloud computing with the introduction of machine learning and AI technologies. It is very hard to predict how cloud computing is going to change the future. 

The introduction of 5G and the Internet of Things (IoT) can take the evolution of cloud computing to the next level, and we can hope to reap good benefits in the future. 

Continue Reading:

Mainframes vs Cloud Computing

Grid Computing vs Cloud Computing

 

]]>
https://networkinterview.com/evolution-of-cloud-computing/feed/ 0 15480
DevOps vs SRE (Site Reliability Engineer) https://networkinterview.com/devops-vs-sre-site-reliability-engineer/ https://networkinterview.com/devops-vs-sre-site-reliability-engineer/#respond Thu, 11 Mar 2021 14:07:46 +0000 https://networkinterview.com/?p=15192 Introduction: DevOps vs SRE

After Google introduced the Site Reliability Engineer role into the software development process, the whole IT industry was confused and asked: “What is the difference between DevOps and SRE?”

In short, there is not much difference between DevOps and SRE. Today in this article you will get to know how they are both similar to and different from each other. Let’s start with a small introduction to DevOps and SRE. 

What is DevOps?

The word DevOps is expanded as Development Operations. It refers to the framework followed by IT companies to produce software or applications as per customer requirements. It involves agile practices and automated infrastructure to ensure fast delivery on demand. 

When a customer makes a request, the DevOps team starts working on it, aiming for fast and quick delivery. It covers the whole Software Development Life Cycle (SDLC), from planning to final testing. 

They practice many automation techniques, like machine learning and artificial intelligence, to create continuous, high-quality delivery. It is the direct successor of the agile software development process. 

In short, DevOps, or Development Operations, refers to a framework adopted to reduce the barriers between traditional development and operations teams.  

What is SRE?

SRE refers to Site Reliability Engineering (or Engineer). It is a discipline created by fusing software engineering with the administration, infrastructure and operations problems of the organization. 

An SRE is none other than an administrative expert with programming and software knowledge who creates solutions for operational issues and the development process. He helps in achieving scalable and highly reliable software systems. 

It is not common for every organization to have a Site Reliability Engineer; usually only big organizations, or sites that host massive servers processing large amounts of data, employ one. An SRE shares the major principles of DevOps. 

A Site Reliability Engineer spends nearly 50% of his time on “operations”, shortly called “Ops” in the IT field: on-call work, monitoring, issues, manual supervision and intervention. He spends the other 50% on the development process.

Difference between DevOps and SRE:

It is hard to differentiate them, as they are used side by side in many combinations. SRE and DevOps are similar in adopting these five major principles: 

  • Reducing organizational issues
  • Measuring everything
  • Implementation of gradual change and agile development
  • Accepting failure
  • Automation and innovation. 

However, there are some differences between them. 

Comparison Table: DevOps vs SRE

PARAMETER

DevOps

SRE

Abbreviation for Development Operations Site Reliability Engineer/Engineering
Definition It is the framework adopted in an organization to rapidly deliver software or applications as per customer needs. It combines software engineering concepts with the operations of the organization and acts as a bridge between the development and operations teams.
Scope of work DevOps includes development, remodeling and fast delivery of applications SRE involves 50% operations and automation work and 50% development work.
Goal Continuous and fast app development To ensure the scalability, performance and reliability of the software.
Focus Focus on implementing new automation tools and meeting the final customer requirement. SRE focuses on inducing new methods and automation in DevOps functions.
Stage DevOps is the first stage of the production process. It is the part of DevOps that focuses more on the automation and performance of the software.
Types of Approaches Simple DevOps and DevSecOps (integration with security operations) SRE can be combined with many DevOps roles like release engineer, production engineer, etc.
Dependency DevOps is not dependent on SRE; however, SRE helps to improve DevOps performance. SRE is dependent on DevOps; the operation of SRE varies based on the existing DevOps operation.
Way of processing Comparatively less automated, with manual intervention. Automation monitored by an administrative expert.
Knowledge Requirements Wide knowledge of different scripting languages and specialization in any one (preferably Python) A Site Reliability Engineer should excel in both administration and software engineering.
Salary estimate The annual salary of a DevOps Engineer varies from $91,666 to $155,000 based on experience. As a Site Reliability Engineer, you can expect an annual salary of $78,000 – $90,000.

Download the comparison table here.

In the end, it is all about data and software development. If you have any further questions or ideas please share them in the comment section below.

Continue Reading:

DevOps vs NetOps

Cloud Engineer vs DevOps Engineer

]]>
https://networkinterview.com/devops-vs-sre-site-reliability-engineer/feed/ 0 15192
Basics of SDN and Open Flow Network Architecture https://networkinterview.com/basics-of-sdn-and-open-flow-network-architecture/ https://networkinterview.com/basics-of-sdn-and-open-flow-network-architecture/#respond Tue, 14 Jul 2020 10:47:34 +0000 https://networkinterview.com/?p=14224 SDN and Open Flow Network Architecture

With the colossal growth in cloud computing and intent-based networking, SDN technology has clearly led the race. In fact, SDN has also catered to the growing demands on IT for faster response to requests combined with more control over the network. The first SDN communication protocol was Open Flow, which was responsible for direct interaction between the SDN controller and the forwarding plane of network devices like switches and routers. In this article, we will understand the concepts of SDN and the Open Flow architecture.

Table of Contents:

  1. Introduction to SDN
  2. Open Flow Architecture
  3. Key Terms of SDN
  4. Benefits of SDN
  5. Conclusion

Introduction to SDN

Software Defined Networking (SDN) separates the network control plane from the forwarding plane and provides a centralized view for more efficient implementation and operation of network services. Because the control functions are implemented in software, the network can be programmed centrally rather than configured device by device.

Open Flow Architecture

Open Flow is a southbound interface protocol that lets the controller manipulate forwarding-plane devices. It separates the control and data planes, centralizes control, and offers a flow-based control mechanism. Open Flow supports three message types:

  • Controller-to-Switch: Sent by the controller to manage or inspect the switch.
  • Asynchronous: Sent by the switch to inform the controller of network events such as packet arrivals or port status changes.
  • Symmetric: Sent by either the controller or the switch without solicitation, for example to diagnose a problem.
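All three message categories share the same fixed 8-byte Open Flow header. As a rough illustration, here is a self-contained Python sketch that parses that header and classifies the message; it uses only a representative subset of Open Flow 1.0 type codes (values and the category taxonomy follow the OF 1.0 specification, where Error messages are grouped with the asynchronous ones), and is not tied to any particular controller:

```python
import struct

# Representative Open Flow 1.0 message type codes (subset, per the spec)
OFPT_HELLO, OFPT_ERROR, OFPT_ECHO_REQUEST, OFPT_ECHO_REPLY = 0, 1, 2, 3
OFPT_FEATURES_REQUEST, OFPT_FEATURES_REPLY = 5, 6
OFPT_PACKET_IN, OFPT_FLOW_REMOVED, OFPT_PORT_STATUS = 10, 11, 12
OFPT_PACKET_OUT, OFPT_FLOW_MOD = 13, 14

CONTROLLER_TO_SWITCH = {OFPT_FEATURES_REQUEST, OFPT_FEATURES_REPLY,
                        OFPT_PACKET_OUT, OFPT_FLOW_MOD}
ASYNCHRONOUS = {OFPT_PACKET_IN, OFPT_FLOW_REMOVED, OFPT_PORT_STATUS, OFPT_ERROR}
SYMMETRIC = {OFPT_HELLO, OFPT_ECHO_REQUEST, OFPT_ECHO_REPLY}

def parse_header(data):
    """Parse the fixed 8-byte Open Flow header: version, type, length, xid."""
    version, msg_type, length, xid = struct.unpack("!BBHI", data[:8])
    if msg_type in CONTROLLER_TO_SWITCH:
        category = "controller-to-switch"
    elif msg_type in ASYNCHRONOUS:
        category = "asynchronous"
    elif msg_type in SYMMETRIC:
        category = "symmetric"
    else:
        category = "other"
    return {"version": version, "type": msg_type,
            "length": length, "xid": xid, "category": category}

# A HELLO message: version 0x01, type 0, length 8, transaction id 42
hello = struct.pack("!BBHI", 0x01, OFPT_HELLO, 8, 42)
print(parse_header(hello)["category"])   # prints symmetric
```

In a real deployment this parsing is done by the controller platform itself; the sketch only shows how the three categories map onto concrete message types.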

The Open Flow Network Architecture consists of three layers:

  • Application layer: Applications running on physical or virtual hosts.
  • Control layer: The SDN controller (also called the network controller).
  • Forwarding layer: Physical and virtual network devices.

Related – Top 5 SDN Vendors

 

Key Terms of SDN Technology

SDN Controller: The SDN controller is the centre of the SDN architecture and its most important component. It is the brain of the system: the central control mechanism of the SDN network that governs the forwarding plane. All data-plane devices are controlled through the SDN controller, which also manages the applications at the application layer. It communicates with and controls the layers above and below it through APIs (Application Programmable Interfaces). The controller gives SDN users centralized control over the entire network, and network administrators use it to instruct switches and routers on how the forwarding plane should direct traffic.

Forwarding Plane: The layer of SDN populated by physical or virtual forwarding devices.

Control Plane: This layer of SDN includes the Network Operating System and the SDN controller. The control plane sets up the packet-processing rules and establishes the switching policy for the whole network.

Data Plane: The data plane forwards packets through the network devices. Forwarding decisions are taken by the controller instead of on each individual router.

Application Plane: The top layer in the SDN architecture, where SDN applications run.

Southbound Interface: It lies between the controller and the data-plane forwarding devices. Southbound APIs push information down to the network devices. Open Flow was the first southbound API and remains a widely adopted protocol.

Northbound Interface: It lies between the controller and the application plane. Northbound APIs push information up to the applications and business logic, giving network administrators the ability to programmatically shape traffic and launch services.
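To make the northbound idea concrete, here is a minimal, purely illustrative Python sketch (the class and method names are invented for this example and do not correspond to any real controller's API): an application expresses an intent, such as bandwidth "on demand" between two hosts, through a northbound call, and the controller translates it into a flow rule that would then be pushed southbound:

```python
class SdnController:
    """Illustrative controller exposing a tiny intent-level northbound API."""

    def __init__(self):
        self.flow_rules = []   # rules that would be pushed southbound to switches

    # Northbound interface: coarse, intent-level call used by applications,
    # e.g. requesting a path with guaranteed bandwidth between two hosts.
    def request_path(self, src_ip, dst_ip, min_bandwidth_mbps):
        rule = {
            "match": {"src": src_ip, "dst": dst_ip},
            "action": "forward",
            "qos_mbps": min_bandwidth_mbps,
        }
        # A real controller would push this rule southbound (e.g. via Open Flow)
        self.flow_rules.append(rule)
        return rule

ctrl = SdnController()
rule = ctrl.request_path("10.0.0.1", "10.0.0.2", 100)
print(rule["qos_mbps"])   # prints 100
```

The point of the sketch is the division of labour: the application states *what* it wants through the northbound API, while the controller works out *how* to realize it on the forwarding devices.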

API: Application Programmable Interface; provides interaction between systems and software.

Open Flow: The southbound interface API that provides communication between the SDN controller and the SDN data-plane devices.

Network Devices: The data plane consists of various network devices, both physical and virtual, and is responsible for forwarding traffic. In traditional networks, the control and data planes resided in the same device; in SDN, the network device holds only the data plane. This separation is what makes SDN's forwarding mechanism so efficient.
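The control/data-plane split can be sketched as reactive forwarding in a few lines of Python. Everything here is a deliberate simplification for illustration (the switch matches on destination MAC only, and the class names are invented): a table miss at the switch triggers a packet-in to the controller, which installs a flow entry so that later packets stay on the data-plane fast path:

```python
class SimpleSwitch:
    """Data plane: forwards using its flow table; punts misses to the controller."""

    def __init__(self, controller):
        self.flow_table = {}          # dst MAC -> output port
        self.controller = controller

    def receive(self, packet, in_port):
        dst = packet["dst"]
        if dst in self.flow_table:                    # data-plane fast path
            return ("forward", self.flow_table[dst])
        # Table miss: hand the packet to the controller (cf. Open Flow packet-in)
        return self.controller.packet_in(self, packet, in_port)

class SimpleController:
    """Control plane: holds the network-wide view and installs flow entries."""

    def __init__(self, topology):
        self.topology = topology      # dst MAC -> port, known only to the controller

    def packet_in(self, switch, packet, in_port):
        out_port = self.topology.get(packet["dst"])
        if out_port is None:
            return ("flood", None)    # unknown destination: flood
        # Install a flow entry on the switch (cf. Open Flow flow-mod)
        switch.flow_table[packet["dst"]] = out_port
        return ("forward", out_port)

ctrl = SimpleController({"aa:bb": 2})
sw = SimpleSwitch(ctrl)
print(sw.receive({"dst": "aa:bb"}, in_port=1))   # miss -> controller installs rule
print(sw.receive({"dst": "aa:bb"}, in_port=1))   # hit in the local flow table
```

Both calls return ('forward', 2), but only the first one involves the controller; that is exactly the efficiency gain the paragraph above describes.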

Related – OpenFlow vs Netconf

Benefits of SDN

  • SDN provides a way to configure switches programmatically at runtime and to manage network resources more efficiently.
  • It enables the network administrator to provide bandwidth “on demand”.
  • SDN is a vendor-neutral technology: it is less hardware-dependent, and the software and protocols in use are open source.
  • SDN is becoming an important concept for deploying cloud services and for incorporating new paradigms such as BYOD (Bring Your Own Device) and IoT into industrial networks.

Conclusion

Software defined does not simply mean replacing dedicated hardware switches with programmable virtual switches. The key feature of SDN is that the control plane is decoupled from the data plane.

]]>
https://networkinterview.com/basics-of-sdn-and-open-flow-network-architecture/feed/ 0 14224