This article presents recent statistics on enterprises' use of cloud computing services in the European Union (EU). 19 % of EU enterprises used cloud computing in 2014, mostly for hosting their e-mail systems and storing files in electronic form.
46 % of those firms used advanced cloud services relating to financial and accounting software applications, customer relationship management or to the use of computing power to run business applications.
Four out of ten enterprises (39 %) using the cloud reported the risk of a security breach as the main limiting factor in the use of cloud computing services.
A similar proportion (42 %) of those not using the cloud reported insufficient knowledge of cloud computing as the main factor that prevented them from using it. Essentially, instead of building their own IT infrastructure (which would include hardware and involve developing and maintaining software applications and databases), enterprises can access computing resources hosted by third parties on the internet (the ‘cloud’).
In technological terms, cloud computing is a model for providing enterprises with ubiquitous, flexible, on demand access over the internet to a shared pool of configurable computing resources, including servers, databases, software applications, storage capacity and computing power. In principle, the service providers may deliver ICT-related services from shared servers (public cloud) or from a cloud infrastructure provided for the exclusive use of a particular enterprise (private cloud). As cloud computing services can be delivered only via the internet, enterprises must have internet access to be able to use them.
Of the enterprises that reported using cloud computing, some 66 % relied on a cloud solution for their e-mail (see Table 1).
Most importantly, via the cloud, enterprises access relatively more advanced end customer software applications, e.g.
For this classification, all possible individual responses (in bold) are necessary conditions.
The latter, by definition, involve a single-tenant environment where the hardware, storage and network are set aside for a single enterprise. Enterprises using cloud computing services reported several factors limiting their usage (see Figure 6). The data in this article are based on the results of the 2014 survey on ICT usage and e-commerce in enterprises. The economic activities referred to are defined in the EU’s NACE classification, Revision 2. The data extracted for this article may differ from those in the Eurostat database where the latter have since been updated. The Digital Agenda for Europe (DAE) aims to reboot Europe’s economy and help its citizens and businesses get the most out of digital technologies. The wider EU policy interest is in enabling and facilitating the faster adoption of cloud computing across all sectors of the economy; this can cut ICT costs and, when combined with new digital business practices, boost productivity, growth and jobs. Cloud computing is one of the strategic digital technologies considered important enablers for productivity and better services.
With the dramatic increase in the number of mobile devices connecting to the campus network, the task of securing, monitoring and managing access to the campus network becomes challenging. This guide shows how to design a BYOD solution with Brocade campus network products and Brocade partner Bradford's Network Sentry™ appliance. The Network Sentry NAC appliance provides automatic and efficient device on-boarding and monitoring of client traffic, including mobile devices. Combined with Bradford's Network Sentry network access control (NAC) appliance, Brocade offers cost-effective bring-your-own-device (BYOD) solutions for campus networks scaling from a small building to large metropolitan area configurations. Brocade's ICX Series and FCX Series switches support stacking for improved performance and reliability at the edge. Brocade's Mobility Series of access points and controllers centralizes wired and wireless management and optimizes the wireless data path with direct forwarding of data traffic between access points.
Integration of wired and wireless management is increasingly important as wireless device connectivity continues to grow. This design guide is based on Brocade's Campus LAN Infrastructure: Base Reference Architecture. This document is intended for solution, network and IT architects who are evaluating and deploying BYOD solutions for their campus network. This design guide provides guidance and recommendations for an integrated BYOD solution with Bradford's Network Sentry and Brocade's campus network products. Brocade® (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility. To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings. Bradford's powerful and innovative security solutions are developed by a staff with years of expertise in network security and brought to market based on invaluable input received from customers.
With solutions that dynamically adapt to changing network conditions and continually combat network threats, Bradford addresses the security needs of a wide variety of organizations in markets including education, financial services, state and local government, healthcare, energy, retail and many others. This design guide is based on Brocade's Campus Network Infrastructure: Base Reference Architecture (Campus Reference Architecture), as shown below. The Brocade HyperEdge Architecture is designed to easily support integration of partner solutions such as the Bradford Network Sentry NAC appliance. With the growth of smartphones, tablet computers and web-hosted computing platforms, many IT organizations are being pushed by users and management to deploy a flexible, secure BYOD solution in their campus network.
While users create a pull for BYOD solutions, there is also a push from the cost savings available if IT does not procure and maintain personal devices. Another key requirement of a BYOD solution is handling the increasing bandwidth use in the campus network. This guide helps network designers build a BYOD solution that meets the requirements for secure user mobility across wired and wireless networks, simplifies the network design, and delivers the necessary network performance.
Brocade networking products are designed to deliver line-rate Layer 2-3 forwarding, and provide information about traffic flows through built-in hardware-based sFlow monitors. Network Sentry requires Layer 3 connectivity to the access layer switch and attached WAPs so that it can change a client from the production VLAN to the isolation VLAN, or assign the proper VLAN access based on a role. Brocade switches have a built-in secure SNMP server with Link Up/Link Down traps that are sent to the Network Sentry appliance NMS trap receiver. Campus networks are frequently deployed using the traditional three-tier architecture: access, distribution, and core or backbone. For this reason, this guide includes designs that address the requirements to upgrade the campus network beyond just adding a NAC appliance. The answers to these questions are different for each organization and will likely change over time.
After user requirements are gathered, system level design requirements need to be identified.
Will the entire campus be managed under a single set of policies or will individual business units manage their needs independently?
During this phase, the Network Sentry appliance is integrated and provided with out-of-band management access and remote access for installation and setup of the appliance.
The Bradford Network Sentry NAC appliance can integrate with a wide range of campus network equipment and topologies. The following diagram shows Base Design templates derived from the Campus Reference Architecture.
The Bradford Network Sentry appliance lets the network administrator create NAC policies that apply to all wired or wireless devices that connect to the campus network.
Brocade's HyperEdge architecture for campus networks is specifically crafted to cost-effectively meet these requirements. Network Sentry integrates with existing infrastructure and correlates network, security, endpoint device, and user information to provide total visibility and control over every user and device accessing the network. Network Sentry's out-of-band architecture leverages the inherent security capabilities of existing network equipment along with authentication and authorization technologies such as 802.1X, RADIUS and Active Directory for identity management. The following diagram shows the out-of-band Network Sentry NAC architecture for wired and wireless devices. As shown by the double-headed arrows, Network Sentry identifies and monitors both wired and wireless clients depending on the access point the device connects from.
The following diagram illustrates the operation of Network Sentry when wired devices connect to the network.
The following diagram shows operation of Network Sentry when wireless devices connect to the network.
Network Sentry manages the wireless controller, which in turn manages the WAPs that wireless devices connect to. As a wireless device transitions between different states and VLANs (such as isolation, authentication, production and guest), Network Sentry 'blacklists' all devices during the disassociation/association state. This section summarizes how to configure the Network Sentry appliance to integrate with an existing Brocade wireless network.
We’ve just updated the VMware TCO Comparison Calculator to help customers see the true Total Cost of Ownership differences between VMware and Microsoft.
We created the TCO Comparison Calculator after hearing from existing and prospective VMware customers who were being told that alternative solutions based on Hyper-V would be much less expensive, or even “free”. When all those cost elements are combined, the VMware TCO Comparison Calculator shows that VMware solutions, ranging from a small business virtual infrastructure built with vSphere Essentials to a full-featured large enterprise private cloud based on vCloud Suite Enterprise, have the lowest TCO – often by substantial margins. When we updated the calculator, we saw that the VMware TCO advantage increased for some important reasons. Microsoft’s adoption of core-based pricing in their upcoming release of Windows Server 2016 and System Center 2016 makes their solutions more expensive on mainstream servers that have higher core counts. This example from the VMware TCO Comparison Calculator shows that the 3-year TCO for a 500-VM environment built with vSphere with Operations Management Enterprise Plus will be 33% less than a comparable solution based on Microsoft Windows Server Hyper-V and System Center. Our customers in the trenches running enterprise virtual infrastructures often tell us they know VMware offers the best and most cost effective solution, but they need help making the case for selecting VMware with purchasing managers or CFOs that have heard from other vendors claiming to be less expensive. While the operating assumption is that the OpenStack framework works best on open source components such as KVM, a just completed study by Principled Technologies and commissioned by VMware showed otherwise.
In the study, OpenStack services were used to provision and manage the test configurations.
Hyper-converged architectures can increase performance and lower the costs associated with a virtualized infrastructure by having compute, network, and storage coexist closely on physical resources. VMware innovations are helping customers get enterprise-class performance when exploring the OpenStack framework as a platform for large-scale application deployment.
The use of direct-attached disks on the compute hosts brought the proven benefits of shared storage in the VMware environment, such as High Availability (HA) and vMotion. Virtual SAN is tightly integrated with the vSphere hypervisor and scales easily by adding more hosts to a cluster or more storage to existing hosts. Every disk chosen for Virtual SAN storage belongs to a disk group with at least one solid-state drive that serves as a read and write cache. For the following tables, please refer to the full study for the complete test methodology and equipment setup.
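The disk-group rule just described (each disk group needs at least one SSD acting as the read and write cache) can be modeled with a small sketch. The class and field names below are hypothetical stand-ins for illustration only, not part of any VMware API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Disk:
    name: str
    is_ssd: bool        # True for a solid-state drive, False for a capacity disk
    capacity_gb: int

@dataclass
class DiskGroup:
    """Toy model of a Virtual SAN disk group; names here are illustrative."""
    disks: List[Disk] = field(default_factory=list)

    def has_cache_device(self) -> bool:
        # The rule from the text: each disk group needs at least one SSD
        # serving as the read and write cache.
        return any(d.is_ssd for d in self.disks)

group = DiskGroup(disks=[
    Disk("ssd0", is_ssd=True, capacity_gb=400),
    Disk("hdd0", is_ssd=False, capacity_gb=2000),
    Disk("hdd1", is_ssd=False, capacity_gb=2000),
])
print(group.has_cache_device())   # True: the SSD can serve as the read/write cache
```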
Figure 1: The amount of YCSB (Yahoo Cloud Serving Benchmark) OPS achieved by the two solutions. As an enterprise customer, you have choices when it comes to implementing an OpenStack framework. This entry was posted in Current Affairs and tagged kvm, performance, red hat, test, vSphere on June 20, 2014 by Cameron Sturdevant.
Amazon recently launched a new version of their Total Cost of Ownership (TCO) Calculator that compares VMware on-premises solutions to Amazon Web Services (AWS) offerings. Amazon claims their calculator provides an “apples-to-apples” comparison, but in reality, it doesn’t come close to doing so. It compares VMware vSphere Enterprise Plus, our most feature-rich edition, against AWS infrastructure.
In a separate VMware TCO comparison calculation for a 100 VM environment, VMware TCO is $394K compared to AWS’s $487K over a four-year period. 5 “High MEM Extra Large” MS SQL Servers (2 vCPU, 16 GB RAM, 400 provisioned [guaranteed] IOPS per VM, utilized about 12 hrs. For the AWS deployment, it also included Business support (24×7 phone support) and the VPC service, which provides a security perimeter for the VMs. It uses vSOM Standard for the infrastructure running on 5 ESXi hosts with 2 CPUs, 6 cores per CPU, 128 GB RAM per host and 6 NICs per host.
Note that for this sample environment, the calculations assumed licenses for vSphere with Operations Management (vSOM) Standard, which offer more features and functionality than that of AWS and contain the features a customer truly needs for this scale environment. Clearly the AWS TCO Calculator does not represent a fair, “apples-to-apples” portrayal of the costs of an on-premises solution. With the addition of vCloud Hybrid Service (vCHS), VMware now offers customers a public cloud option with faster time to value and the ability to add or reduce capacity dynamically through the use of hybrid, off-premises data centers. Edit: An earlier version of this post claimed that the VMware TCO was over a three-year time period. The release of VMware’s vSphere Data Protection 5.5 (VDP) seems to have caused a stir in the virtual backup industry. We’ll dive in to each of these a little bit to get to the truth about vSphere Data Protection. Some vendors claim they require no agents to do vSphere backups, even for application aware backups of Exchange, MS SQL, and SharePoint, whereas VDP Advanced does require agents for these applications. The fact of the matter is, the vast majority of VMs do not require agents because of the way our vSphere data protection APIs work. Call me crazy, but a runtime process injected on a VM via admin credentials to do indexing and other activities on behalf of another server is the very definition of an agent. And don’t forget: our VDP Advanced agents also run on physical servers so you can backup your entire Exchange, SQL, or SharePoint environment with VDP Advanced.
First things first, it really doesn’t matter which backup system you choose – your backup files are useless without the backup servers.
Even if you’re using our basic version of VDP, which is included with most versions of vSphere and which does not have built-in replication, keep in mind that everything you need to protect your backups – the backup files, database, everything! VDP Advanced includes highly efficient, secure backup data replication across any link at no additional cost. In contrast, VDP Advanced can utilize Changed Block Tracking to restore a VM directly on full production storage. We’ve designed VDP and VDP Advanced to offer a great value to our customers, who often struggle to setup a good backup system and cannot afford the high price of some of the enterprise backup solutions. As I said at the start, we’re very proud of the ecosystem of partners we’ve built around vSphere, even those we compete with at times. This entry was posted in Uncategorized and tagged competitive, vdp, vdp advanced, veeam on April 30, 2014 by Jim Armstrong. If you’ve had a chance to use the VMware TCO Comparison Calculator, you know that it factors in all the elements of a proper Total Cost of Ownership analysis to compare the true cost of building a virtual infrastructure on our vSphere and vSphere with Operations Management products to the cost of building a similar infrastructure on Microsoft’s “Cloud OS” – their name for Windows Server Hyper-V and System Center.
The results are eye-opening for many users who have seen the comparisons from our competitors that consider only the Windows operating system and virtualization software license costs. You can see the TCO of upgrading your vSphere infrastructure to a full-featured vCloud Suite private cloud. Better VM density – Being able to run more VMs per CPU has always been a vSphere strength due to its outstanding memory management and DRS load balancing technology. Richer feature set – vSphere with Operations Management and vCloud Suite provide more of the management, data protection and availability, networking and disaster recovery features that customers need. Much lower operational costs – Our customers that have tried competitors’ products tell us that running a vSphere and vCloud Suite infrastructure is much easier and more efficient. A quick example from the VMware TCO Comparison Calculator shows just how much of an impact those VMware cost savings have. You can see that VMware delivers 30% lower TCO from its lower OpEx costs and features that preclude the need for third-party add-ons. Here’s an example showing that the two-year TCO for upgrading a 1000-VM vSphere Enterprise environment to our full-featured vCloud Suite Enterprise platform comes in 36% less than if that same infrastructure were migrated to Microsoft’s “Cloud OS”. Whether you’re new to virtualization and considering a greenfield server consolidation project or a long-time vSphere user weighing your options for a private cloud upgrade, give the VMware TCO Comparison Calculator a try – you’ll see that you can get the best for less.
You can take a break from the hype cycle by checking out the rest of the blog post by Bogomil Balkansky, Sr.
This entry was posted in Current Affairs, vSphere and tagged cloud, management, vSphere on March 28, 2013 by Cameron Sturdevant. Finally, vSphere with Operations Management raises the bar by redefining what operations management needs to be in today’s dynamic infrastructure.  Cloud customers simply were not finding effective solutions from their traditional, legacy IT management frameworks, or even 3rd party tools that are built on the same premise. The idea of introducing multiple hypervisors into your data center and managing them seamlessly from a single tool might sound appealing, but in reality, products claiming that ability today can’t deliver on that promise.  You introduced virtual infrastructure to simplify operational tasks for your IT staff, so why would you want to handicap them with a management approach that adds costs and complexity?  A study recently completed by the Edison Group and commissioned by VMware shows that is exactly what you will be doing if you introduce Microsoft System Center 2012 Virtual Machine Manager (SCVMM) with the hopes of using it to manage VMware vSphere hosts. Microsoft touts SCVMM as a heterogeneous management tool with the ability to manage VMware vSphere and Citrix XenServer hosts in addition to those running Hyper-V.  IT managers might find Microsoft’s claims that they can, “easily and efficiently manage… applications and services across multiple hypervisors,” enticing.


Server virtualization and cloud computing are often talked about – even by service providers — as if the terms were interchangeable. Virtualization is a way to make more efficient use of today’s high-performance CPUs, by letting you run multiple servers on the same hardware. Virtualization lets you reduce the cost and complexity of your IT infrastructure by maximizing the utilization of your physical computing resources. The benefit of cloud computing is that the provider takes care of the infrastructure your application runs on. Q: What is a quick way to tell if a vendor is really talking about true cloud computing services or about virtualization? Cloud computing, in contrast, costs less upfront because you don’t have to buy and manage the infrastructure. Whether you choose virtualization or cloud computing, or maybe even both, the performance of your applications is paramount. To ensure application performance levels to your distributed users, you need to be able to efficiently manage and troubleshoot network performance. The Cloud Automation Platform is a product capable of provisioning, automating and managing Infrastructure as a Service (IaaS) private clouds.
Infrastructure Dashboard – Using the Infrastructure Dashboard, administrators can view graphic representations of and manage resource pools, virtualization hosts, physical computers, VMs, NAIL Servers, and other components of the Cloud Automation Platform environment.
Item Recovery – This feature alerts administrators to any items that are no longer accessible by Cloud Automation Platform, but are required for successful deployment and management of the environment.
SNMP Event Broadcasting – Cloud Automation Platform can be configured to send selected SNMP events to an SNMP server, which can then display the events in an SNMP console or otherwise broadcast them. Auto-Registration for Members of LDAP Domain – When defining the LDAP server properties, select Allow Auto-Registration to enable any member of an LDAP domain to log onto the CAP web interface using his or her LDAP credentials. New Reports – The new Chargeback reports provide a complete accounting of resource consumption by user. New Pause Mode for Maintenance Windows – Now when creating a Maintenance Window, administrators can select to impose a Pause Mode, which disallows any actions such as snapshots, promotions, or migrations by non-administrative users.
Library and File Cache Storage Changes – Added NFS support for ESX library and file cache locations and for Cluster Shared Volume (CSV) file cache locations. Ansible, one of the four main players in the automation market and younger than the well-known Chef and Puppet, was launched in 2013 in Durham, N.C. Compared with the same quarter in 2015, earnings per share from continuing operations decreased 22%. DockerCon 2016 began yesterday in Seattle with a number of announcements from Docker and key partners. Yesterday, the Bellevue (WA) based company WinDocks released a free edition of its homonymous port of the Docker daemon to Windows, called WinDocks Community Edition.
Container security is one of the emerging topics among companies moving this technology into production. Today the OpenStack Foundation released the 13th version of its IaaS platform for public, private and hybrid clouds. Yesterday Docker announced that it has acquired a semi-stealth startup called Conductant, focused on workload orchestration. Today Cisco announced its intent to acquire CliQr Technologies Inc., a privately held company based in San Jose, CA. Yesterday VMware announced version 7 of both its vCloud and vRealize suites, confirming its efforts to stay relevant in the CMP (Cloud Management Platform) space. HP has enabled SCVMM-based StoreVirtual VSA deployment on Hyper-V hosts with automatic configuration of the management groups and clusters, fully following the best-practice recommendations from StoreVirtual VSA. When a hybrid small to midsize business (SMB) Disaster Recovery-as-a-Service (DRaaS) solution faces off against a cloud pure-play, which DR platform will emerge victorious?
Disaster Recovery (DR) is no longer an expensive "What if?" luxury only afforded to large enterprises. Installation and Configuration ASR can be installed either by installing Microsoft System Center Virtual Machine Manager (SCVMM) for configuring DR backup for multiple machines, or by creating a Recovery Vault on Azure, followed by a Hyper-V Site within the vault for a single-system use case. Quorum's browser-based management console lets users manage multiple appliances through a UI that looks more dated than Azure's but couldn't be easier to use. Backup: Quorum's first backup can take a few hours, at which point the system switches to taking incremental snapshots at regular intervals. Failback: ASR's failback capability, or what happens when on-site capabilities return after a disaster, was also not the smoothest for our reviewer. Note that this ViPR is not the same EMC Viper project from a few years ago that was focused on data footprint reduction (DFR) including dedupe.
In principle, cloud computing involves two components, a cloud infrastructure and software applications. In 2014, this applied to almost all EU enterprises (97 %) with 10 or more persons employed. In Finland, Iceland, Italy, Sweden and Denmark, over 30 % of enterprises used cloud computing. Instead of setting up a server infrastructure for their e-mail system, which would have involved inter alia capital expenditure and maintenance costs, these firms opted for a cloud solution based on per-user operating costs. Over half of all enterprises (53 %) used the cloud for storing files in electronic form.
For example, enterprises classified in the ‘lower-medium’ level will have reported using at least one of the services in (a), (b) or (c), but none of the others. Consequently, the infrastructure guarantees high levels of security, as the service provider’s other clients cannot access the same resources. The use of cloud computing services may require specific ICT management skills, particularly to evaluate needs and to use management tools to gauge consumption of IT resources accurately. Service providers may use data centres scattered around the globe; hence, enterprises using the cloud may feel uncertain about the location of their data. In most sectors, enterprises reported that insufficient knowledge of cloud computing prevented them from using it.
The statistics were obtained from enterprise surveys conducted by national statistical authorities in 2014. The sectors covered are manufacturing, electricity, gas and steam, water supply, construction, wholesale and retail trades, repair of motor vehicles and motorcycles, transportation and storage, accommodation and food service activities, information and communication, real estate, professional, scientific and technical activities, administrative and support activities, and the repair of computers and communication equipment. It is the first of seven flagship initiatives under Europe 2020, the EU’s strategy to deliver smart, sustainable and inclusive growth. Enterprises use cloud computing to optimise resource utilisation and build business models and market strategies that will enable them to grow, innovate and become more competitive. A new trend, Bring-Your-Own-Device (BYOD), has grown popular but multiplies the security challenges. This ensures uniform NAC policies for wired and wireless connections, improving security without undue burden on the network administrator. At the campus core or distribution layer, options include the ICX 6610 stack, the SX chassis and the larger MLX chassis.
Brocade Mobility Controllers can be clustered for high availability and can scale up to as many as 1,024 access points per controller. And integrated solutions with partner applications for NAC appliances, such as Bradford's, simplify network security for wired and wireless devices.
It describes how to design a Bring-Your-Own-Device (BYOD) solution using the Network Sentry appliance from Bradford, a Brocade partner.
In addition, any Brocade release notes that have been published for the FastIron, NetIron and Mobility operating systems should be reviewed.
This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection. Since the company's founding in 1999, hundreds of customers and millions of users have come to rely on our technology to secure critical IT assets and automate IT security operations. Bradford's innovative, award-winning products and solutions are widely recognized by industry analysts including Forrester and Gartner, as well as leading publications including SC Magazine, CRN, and others.
The Campus Reference Architecture provides a rich set of flexible, wired and wireless building blocks providing cost-effective scalability for a wide range of campus environments. The following sections review the business requirements, design requirements and special considerations for a successful BYOD solution using the Bradford Network Sentry NAC appliance.
With a proven BYOD solution IT not only satisfies user demand for their personal devices while at work, but eliminates the overhead of user device acquisition, support and maintenance.
Users expect their personal device of choice to access all authorized resources needed for their job.
This enables real-time delivery and analysis of network traffic to a variety of security, reporting, and compliance devices. The isolation VLAN(s) can terminate at a distribution layer switch and route back to Network Sentry appliance that provides network services including DHCP, DNS and RADIUS.
If the switch also uses SNMP for management and monitoring by tools such as Brocade Network Advisor (BNA) or other management applications, additional SNMP hosts are configured. Many existing campus networks were built with technology that was new a decade or more ago.
To help offset the costs of upgrading access switches, Brocade ICX switches and the HyperEdge architecture can eliminate the distribution tier, reducing equipment, maintenance and operating costs across the network.
Group policies and authentication rules are implemented after a careful study of the business needs. This helps ensure that devices will have adequate bandwidth on the wireless and wired network devices. A successful BYOD solution has to consistently and correctly secure device access, but it also needs flexibility to accommodate more users, provide fine-grained access policies and flexible user assignment to policy groups.
Does the network support this in selected wireless segments or across the entire campus network?
Network infrastructure requirements for wired and wireless access should also be defined.
Test cases with users can be defined for each user group and policy to verify network configuration and Network Sentry NAC appliance operation. Each organization has its own IT procedures for validation of new solutions before enabling them on a production network.
With the introduction of the latest Brocade ICX Series of switches, mixed stacks, high performance PoE+ ports, unified wired and wireless management and a centralized WLAN controller cluster simplify how customers secure a BYOD environment.
The Network Sentry solution can be deployed either as a dedicated hardware appliance or as a virtual appliance 'in the cloud' to adapt easily to any network environment.
A directly connected client is automatically identified by Network Sentry using SNMP-based Link Up/Link Down traps enabled on all access or edge switches.
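As a rough illustration of that trap-driven workflow, the sketch below listens for SNMP trap datagrams on a UDP port and hands each one to a placeholder handler that would map a Link Up/Link Down event to a switch port. The port number, handler, and the fact that decoding is omitted are all assumptions for illustration; this is not Bradford's implementation.

```python
import socket

TRAP_PORT = 1162   # assumed non-privileged port for illustration; real SNMP traps arrive on UDP 162

def handle_trap(source_ip: str, payload: bytes) -> None:
    """Placeholder: a real receiver would decode the ASN.1/BER payload, check for
    linkUp/linkDown OIDs, and look up the switch port so the NAC policy engine
    can move the newly seen client to the proper VLAN."""
    print(f"trap from {source_ip}: {len(payload)} bytes (decoding not shown)")

def listen() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", TRAP_PORT))
    print(f"listening for SNMP traps on UDP {TRAP_PORT}")
    while True:
        payload, (source_ip, _) = sock.recvfrom(4096)
        handle_trap(source_ip, payload)

if __name__ == "__main__":
    listen()
```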
Integration of Network Sentry with existing wireless infrastructure, including WLAN controllers and WAPs, requires configuration of the WLAN Controller and WAP as well as the Network Sentry appliance.
For additional information on Brocade Mobility Controller and Brocade Mobility Access Point configuration, please refer to the product-specific Brocade configuration guides listed in the References. As needed, multiple WLANs can be defined, but only a select few can be managed by Network Sentry for NAC enforcement. This corresponds to assigning clients to different states that include, but are not limited to, default (production), isolation (registration, quarantine, and authentication) or guest. Each WLAN on the controller is configured with the authentication type along with the associated parameters. In the case of 802.1X, an external RADIUS server is required, along with additional parameters such as EAP and encryption type.
Map Wireless Radios to WLAN and AAA Policy: After defining a WLAN with all the mandatory and optional attributes, it is important to map the WLAN to each radio of the WAP. The Network Sentry appliance is configured and managed through a web-based GUI accessible from any browser. Depending on the type,  specify either Layer 2 VLAN isolation networks or Layer 3 IP routed isolation networks and the corresponding client DHCP IP address ranges. Topology Discovery and Device Modeling: BYOD policy enforcement requires every network access point to be reachable, monitored and managed in real time. It’s easy to use – just enter the basic parameters for your virtual infrastructure or private cloud environment, such as the number of VMs, type of servers and storage, and the product edition or features you need.
The calculator assumes both VMware and Microsoft hosts are licensed for Windows Server Datacenter edition, so its core-based pricing penalizes customers of both vendors, but the lower VM density of Hyper-V means more Windows Server licenses are needed for a Microsoft platform.
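To see how core counts and VM density interact in such a comparison, here is a back-of-the-envelope sketch. The minimum-core and pack-size defaults, the density figures, and the platform labels are illustrative assumptions, not calculator output or official licensing terms.

```python
import math

def datacenter_license_packs(hosts: int, cores_per_host: int,
                             min_cores_per_host: int = 16,
                             cores_per_pack: int = 2) -> int:
    """Estimate 2-core license packs needed for core-based Windows Server licensing.
    The 16-core minimum and 2-core pack size are illustrative defaults; substitute
    the terms of your own agreement."""
    billable_cores = hosts * max(cores_per_host, min_cores_per_host)
    return math.ceil(billable_cores / cores_per_pack)

vms = 500
# Hypothetical density assumption: more VMs per host on one platform than the other.
for platform, vms_per_host in [("Platform A", 25), ("Platform B", 20)]:
    hosts = math.ceil(vms / vms_per_host)
    packs = datacenter_license_packs(hosts, cores_per_host=20)
    print(f"{platform}: {hosts} hosts -> {packs} two-core license packs")
```

Fewer hosts for the same VM count means fewer billable cores, which is the mechanism behind the density point made above.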
Users can select USD, AUD, EUR, GBP, or JPY and the calculator will apply VMware and Microsoft list prices from those geographies. If you find yourself in a similar position, use the VMware TCO Comparison Calculator to arm yourself with solid proof that VMware provides the lowest total costs. Among these innovations, the study showed that VMware Virtual SAN played an important role in providing performance advantages. In addition, VMware Virtual SAN can be managed directly through the familiar vCenter Server™ Web client console, alongside everything else in a VMware vSphere environment.
Additional storage or hosts can be added to increase the capacity and performance of a VMware Virtual SAN data store without disruption. Your selections will impact the performance and overall cost of your scale-out infrastructure. Our many customers choose us as their infrastructure platform and stay with us because we provide the best value. Their calculator contains biased assumptions regarding VMware’s TCO, which inflate the costs of an on-premises cloud and underestimate the true costs of using a public cloud solution.
Cost assumptions include the purchase of entirely new data center space, racks, networking switches, spare parts, etc., which would not necessarily apply to a customer making an incremental investment in their IT infrastructure. The calculator assumes a cost of $7,851 for a server with 2, 6-core CPUs and 96GB of RAM (including support).
The calculator includes the cost of VMware’s highly regarded Support and Subscription service for one year whereas no costs for AWS support are included.
At the end of the three-year time horizon, the VMware user owns their infrastructure and VMware software licenses.
There are also additional AWS fees for things such as data transfer, IP addresses, service monitoring, CloudWatch, etc. The biggest difference between VDP and Veeam’s agent approach is that VDP’s agents are a one-time install via wizard, whereas Veeam’s agents are installed and uninstalled each and every time a backup job runs. Further, if you’ve lost your backup infrastructure I’d say the odds are good you’ve lost other critical parts of your infrastructure as well. How do we do it and why don’t you see some special “WAN accelerator” configuration inside VDP Advanced? Strategies for restoring data quickly are a topic I’d like to explore further in a more detailed article so we can look at how we’d approach some common scenarios with VDP. This means only the blocks that have changed since the selected restore point will be restored. While we at VMware focus on building products that are “better together” we realize that no single product will fit every customer’s needs and at the end of the day it’s you – the customer – who has to navigate the maze of features and jargon and figure out the solution that’s best for you. Including all the TCO elements shown above makes it very clear that the cost of virtualization software is just a small part of the overall TCO for a virtualized infrastructure. VSAN capital costs are significantly less than other storage options like Fibre Channel, iSCSI or NAS. This option compares the cost of upgrading to vCloud Suite with the cost of migrating to a Microsoft Windows Server Hyper-V and System Center private cloud. Without those features, Microsoft customers must purchase, integrate and administer multiple third-party products to fill the gaps, driving up costs.
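The changed-block idea mentioned above (restore only the blocks that differ from the selected restore point) can be sketched as follows. The block size, the in-memory "disk" and "backup" structures, and the function name are hypothetical stand-ins, not the VDP or Changed Block Tracking APIs.

```python
from typing import Dict, List

Block = bytes
Disk = Dict[int, Block]   # block index -> block contents (toy in-memory model)

def changed_block_restore(current: Disk, restore_point: Disk) -> List[int]:
    """Copy back only the blocks that differ from the restore point.
    Returns the indexes that were rewritten, to show how much work was avoided."""
    rewritten = []
    for index, good_block in restore_point.items():
        if current.get(index) != good_block:
            current[index] = good_block
            rewritten.append(index)
    return rewritten

backup = {i: bytes([i]) * 4 for i in range(8)}   # the selected restore point
disk = dict(backup)                              # production copy of the VM disk
disk[3] = b"XXXX"                                # a couple of blocks changed since backup
disk[6] = b"YYYY"

touched = changed_block_restore(disk, backup)
print(f"restored {len(touched)} of {len(backup)} blocks: {touched}")   # 2 of 8
```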
Third party studies have quantified VMware OpEx cost as much as 80-90% lower than Microsoft and recent studies with the latest product versions show a similar advantage. This example shows the two-year TCO for an infrastructure of 1,000 VMs on vSphere with Operations Management Enterprise Plus (our highest edition) vs. The suggestion by Microsoft is clear: don’t worry about complicating the jobs of your system administrators by introducing Hyper-V into a VMware environment because SCVMM provides a do-everything single-pane-of-glass control panel.
This is, in part, because both were developed to solve a similar business problem: more computing with fewer resources.
One or more virtual (guest) servers share computing resources under the control of a hypervisor. In the cloud, applications generally run on virtual servers that are independent of the underlying hardware.
It is configured in both hardware and software to provide high reliability and availability. Factors like bandwidth, latency, packet loss and jitter can play havoc on both virtualized and cloud-based applications. Now Quest has released version 7.5 of VAP, rebranded as Cloud Automation Platform and integrated with vControl, another orchestration product that Quest acquired when it bought Vizioncore.


It can work with virtual infrastructures from Microsoft and VMware or leverage physical provisioning and automation solutions from providers like HP and Symantec. Instead, a copy of the base image is made at deployment time, and changes made during the session are written to that copy, instead of to a redo file. At the core of this concept, the reporting system retains a record of each individual’s consumption over a user-specified time period.
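A chargeback report of the kind described here boils down to summing each user's consumption records over a chosen window. This minimal sketch assumes simple (user, timestamp, resource-hours) records and is not the Cloud Automation Platform reporting schema.

```python
from collections import defaultdict
from datetime import datetime
from typing import Dict, List, Tuple

Record = Tuple[str, datetime, float]   # (user, timestamp, resource-hours) - assumed shape

def consumption_by_user(records: List[Record],
                        start: datetime, end: datetime) -> Dict[str, float]:
    """Sum resource-hours per user over a user-specified time period."""
    totals: Dict[str, float] = defaultdict(float)
    for user, when, hours in records:
        if start <= when < end:
            totals[user] += hours
    return dict(totals)

records = [
    ("alice", datetime(2015, 9, 1, 9), 4.0),
    ("bob",   datetime(2015, 9, 2, 14), 2.5),
    ("alice", datetime(2015, 10, 5, 8), 3.0),   # falls outside the September window
]
print(consumption_by_user(records, datetime(2015, 9, 1), datetime(2015, 10, 1)))
# {'alice': 4.0, 'bob': 2.5}
```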
The latest version of HP StoreFront Manager for Microsoft System Center is now available. The rise of Disaster Recovery-as-a-Service (DRaaS) solutions has made planning for the worst easier and more affordable than ever for small to midsize businesses (SMBs). Quorum is a traditional DR provider offering a physical storage and backup device, which it has now rolled into a hybrid product called the Quorum onQ Hybrid Cloud Solution, combining the physical onQ-T20 appliance with a synced cloud-based virtual appliance. For $750 per month, you get the low-end onQ-T20 appliance deployed on-site, along with full DRaaS support, local DR, a scalable architecture, deduplication, 1 TB of cloud-based storage, and one-click recovery. Businesses can start by configuring the on-site appliance and adding more cloud-based services as needed. The UI also includes a Quick Start page for configuring one-click failover testing, though you need both an active Azure storage account and a virtual network up and running. Broken up into the main dashboard with tabs for protection configurations, self-testing, restore, and appliance configuration, the console uses simple bar and pie charts for status, backup, and failure information, with plenty of support documentation. Administrators (admins) can also set the number of days to retain unreferenced backup data, and which drives, VMs, and CPUs to back up, each of which is assigned an automated amount of memory.
Meanwhile, Quorum recovery is nearly instantaneous, though there is a bit more latency with the DRaaS cloud backup than with the physical site recovery; the process for moving from backup to a live state is far less complex, and a local on-site appliance stores all of your backup data. Unplanned failovers started up quickly with the most recently synced data, but full data synchronization took 39 minutes on top of 37 minutes for failover execution. ViPR has been in the works for a couple of years, taking a step back to rethink how storage can be used going forward. ViPR is not a technology created in a vacuum; instead, it incorporates customer feedback, wants and needs.
In addition, 17 % reported using the (usually high-performance) cloud computing platforms for computing power in order to run their own business software applications. Those classified in the ‘upper-medium’ level will, in addition, have reported using the cloud for (d), but none of the relatively advanced services in (e), (f) and (g).
Clearly, firms attach importance to the protection of their IT systems, but the issue can be seen in the wider context of resilience to possible security breaches when using the cloud.
In addition, there may be issues of legal jurisdiction in the event of dispute and uncertainty about the applicable law. Expertise and sufficient knowledge of contractual and legal aspects and the details of technical implementation are necessary prerequisites to an enterprise deciding to purchase cloud computing services (this also applies to firms already using the cloud – see above).
Enterprises are broken down by size, into small (10-49 persons employed), medium (50-249) and large (250 or more).
Growth remains a condition for businesses’ survival and innovation remains necessary for competitiveness. On one hand, campus networks that support BYOD provide the flexibility for anyone to use any client device (wired or wireless), but the assumption is the network infrastructure can intelligently secure traffic by identifying, authenticating and administering network access control (NAC) with minimal administrator intervention. The ICX Series offers Brocade's innovative mix-and-match stacking, extending Layer 3 services from a few switches to all switches in the stack. University students expect the campus network will support the latest portable devices they use; failure to meet that demand can adversely affect recruitment. By shifting equipment ownership to the user, IT budgets are focused on service delivery rather than device support.
BYOD solutions must secure wireless access from a range of devices (tablet, smartphone, and laptop) at all WLAN access points (WAPs) in the network.
After the initial identification and authentication, all device traffic is re-directed to the appropriate production network service or blocked if the device is not authorized to connect. Consequently, choke points and bandwidth limitations can be exposed by BYOD projects that require 1 GbE connectivity at the edge and PoE+ power for high-bandwidth 802.11n capable WAPs. Next, a complete audit of use cases with expected outcomes should be conducted during the design phase.
Brocade provides Validation Testing publications for selected features and technologies that may prove helpful when defining what types of testing to conduct.
As more mobile devices connect to the network and more powerful devices come to market every 18 months, a HyperEdge network provides scalable bandwidth with low latency and low over-subscription. A client that connects to the network via a VoIP Phone is identified and monitored using a combination of 802.1x and RADIUS authentication. This should be followed by wireless client connection validation to ensure correct end-to-end BYOD policy enforcement. Configuration of the Brocade network for wired devices requires SNMP traps to be directed to the Network Sentry SNMP server.
Each WLAN representing a wireless network is defined with these states and VLANs, in addition to enabling the 'Allow RADIUS override' option for RADIUS-based VLAN assignment. This ensures every wireless device connecting via the WAP radios goes through the NAC enforcement process configured on the Network Sentry appliance. It is recommended that the Network Sentry appliance use the Layer 3 Network Type to simplify future network expansion.
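Conceptually, the per-state VLAN assignment described above is a lookup from client state to VLAN, returned to the switch or controller as RADIUS tunnel attributes so that the WLAN's RADIUS override places the client correctly. The VLAN IDs and the plain-dictionary representation below are illustrative assumptions, not a Bradford or Brocade configuration.

```python
# Hypothetical state-to-VLAN mapping for NAC enforcement (VLAN IDs are examples only).
STATE_TO_VLAN = {
    "registration": 210,
    "quarantine": 220,
    "authentication": 230,
    "production": 100,
    "guest": 300,
}

def radius_vlan_attributes(client_state: str) -> dict:
    """Return the RADIUS tunnel attributes a NAC server could send so the
    WLAN's 'Allow RADIUS override' option places the client in the right VLAN."""
    vlan_id = STATE_TO_VLAN[client_state]
    return {
        "Tunnel-Type": "VLAN",
        "Tunnel-Medium-Type": "IEEE-802",
        "Tunnel-Private-Group-Id": str(vlan_id),
    }

print(radius_vlan_attributes("quarantine"))
# {'Tunnel-Type': 'VLAN', 'Tunnel-Medium-Type': 'IEEE-802', 'Tunnel-Private-Group-Id': '220'}
```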
The initial topology discovery process ensures IP and SNMP reachability to all devices (switches, routers, WLAN controllers and WAP) in addition to creating a network topology database.
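At its simplest, that discovery step is a loop over seed devices that records which ones answer a reachability check. The sketch below performs only the ICMP part using the system ping command and stores results in a plain dictionary standing in for the topology database; the addresses are hypothetical and a real discovery would also verify SNMP access and read neighbor tables.

```python
import subprocess
from typing import Dict, List

def is_reachable(ip: str, timeout_s: int = 1) -> bool:
    """Ping once; uses Linux/macOS ping syntax (adjust flags on Windows)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def discover(seed_devices: List[str]) -> Dict[str, bool]:
    """Build a toy 'topology database': device IP -> reachable?"""
    return {ip: is_reachable(ip) for ip in seed_devices}

# Hypothetical management addresses of switches, WLAN controllers and WAPs.
print(discover(["10.0.0.1", "10.0.0.2", "10.0.1.10"]))
```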
The calculator will generate a complete TCO analysis that includes all the necessary elements of capital and operational expenses. Also, System Center is needed to manage Hyper-V and its higher costs with core-based pricing fall entirely on the Microsoft side of the TCO comparison. With this study, VMware has demonstrated significant performance gains and cost savings in an OpenStack environment.
The Amazon calculator tries to create a different perception by using biased and inaccurate assumptions. As a result, AWS customers often find they have to re-architect their applications in order to work around these missing capabilities. The AWS TCO Calculator truncates the comparison at the three-year mark, yet operating VMware in your on-premises data center can lower your TCO over the long term. It is inaccurate for Amazon to assume that the value of the entire private cloud investment vanishes after three years. It compares costs associated with running conventional workloads on AWS and VMware infrastructure. The costs of AWS instances are not the only factor to consider when choosing where to host workloads. While we’re certainly proud of the technology partner ecosystem built around VMware solutions, I would like to take this opportunity to set the record straight on vSphere Data Protection. But a proper application-consistent backup of Exchange, MS SQL, SharePoint and other applications does require an agent, even for vendors like Veeam.
First and foremost I’d recommend you use a product that includes backup replication so you always have 2nd and 3rd copies of your backups, hopefully on-site and off-site. At least not until you re-install the backup server and database and maybe some proxies and repositories so that you can actually use those files, stealing precious minutes or hours from your recovery time objective.
VDP Advanced is based on EMC Avamar and uses the same enterprise-class deduplication algorithm and replication engine as Avamar. For now I want to say this about “instant” recovery: the feature looks good in the brochure, but instant recovery techniques from nearly every vendor end up with VMs that are pinned to a single host, running from your backup storage, with IO shuttled through some sort of proxy VM.
As a result, restore times can be dramatically reduced – up to 6X versus traditional restore methods according to the VDP Advanced study performed by ESG Labs. The OpEx savings from VMware’s greater administrative efficiency are built into the TCO Comparison Calculator. Are their claims true?  Can Microsoft SCVMM really let you operate a multi-hypervisor data center without the cost penalties that come with staffing, training for, and operating across the isolated islands of management that would otherwise exist? More servers on a machine reduces the need for physical servers, which reduces hardware, space and power costs.
Clouds are also very flexible and scalable, in the sense that an application can simply consume resources as needed.
Multiple virtual servers all share a single network connection, which therefore needs to deliver optimal performance.
But it is still necessary to purchase and provision hardware and software upfront in order to run an application on virtualized infrastructure. Ultimately, cloud computing might cost more than running virtual servers on your own hardware, depending on your expertise and many other factors. In a cloud computing environment the service provider handles those concerns, for better or worse. Whichever path you choose, the ability to ensure network performance is a prerequisite for acceptable application performance.
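The upfront-versus-pay-as-you-go trade-off in that paragraph is easy to put into numbers. All figures below are made-up placeholders meant only to show the shape of the comparison, not actual pricing for any vendor.

```python
def on_prem_cost(months: int, hardware_software_capex: float,
                 monthly_opex: float) -> float:
    """Virtualization on your own hardware: pay capex upfront, then opex."""
    return hardware_software_capex + monthly_opex * months

def cloud_cost(months: int, monthly_fee: float) -> float:
    """Cloud computing: little upfront cost, pay the provider as you go."""
    return monthly_fee * months

# Placeholder figures for illustration only.
CAPEX, OPEX, CLOUD_FEE = 40_000.0, 500.0, 1_800.0

for months in (12, 24, 36, 48):
    op, cl = on_prem_cost(months, CAPEX, OPEX), cloud_cost(months, CLOUD_FEE)
    cheaper = "on-prem" if op < cl else "cloud"
    print(f"{months:2d} months: on-prem ${op:,.0f} vs cloud ${cl:,.0f} -> {cheaper}")
```

With these particular placeholders the cloud option is cheaper for the first couple of years and the owned infrastructure wins later, which mirrors the point that the answer depends on your time horizon, expertise and many other factors.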
Using debug mode sets the deployment automation to allow deployment even if there are VM (or physical server) start failures and Quest CAP Agent check-in failures. New resource utilization reports display data about resource utilization over a user-specified time period.
The biggest question now is whether a completely cloud-based DRaaS platform or a hybrid DR solution with on-site hardware makes more sense for your business.
Microsoft Azure Site Recovery (ASR), on the other hand, is a completely cloud-based DRaaS solution with backup, DR, and reporting features for both physical and virtual machines (VMs) across Linux and Windows workloads. All you need to do to set it up is identify the appropriate appliance from an installation wizard and add it to the existing network. Add that to a gallery of Azure automation and custom Windows PowerShell runbooks, and the ASR UI is defined most readily by simple navigation and its wealth of help documentation.
Similar to the installation process, Quorum is best for an SMB user who doesn't want any hassle and doesn't need advanced options. Our reviewer found ASR's failure-state performance middling, though it is initiated with one click after configuration. The second component refers to software applications and computing power for running business applications, as provided via the internet by third parties. Enterprises classified in the ‘high’ level will have responded in the affirmative for at least one of the services in (e), (f) or (g).
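Putting the classification rules quoted in this article together (lower-medium: at least one of services (a)-(c) and nothing else; upper-medium: additionally (d) but none of (e)-(g); high: at least one of (e)-(g)), a minimal sketch of the logic looks like this. The service letters follow the article's labels; the function itself is only an illustration of the stated rules, and anything the rules do not cover is labelled "unclassified" here.

```python
from typing import Set

BASIC = {"a", "b", "c"}       # the relatively basic services referred to in the article
ADVANCED = {"e", "f", "g"}    # the relatively advanced services

def usage_level(services: Set[str]) -> str:
    """Classify an enterprise by the set of cloud services it reported using."""
    if services & ADVANCED:
        return "high"
    if "d" in services and services & BASIC:
        return "upper-medium"
    if services and services <= BASIC:
        return "lower-medium"
    return "unclassified"

print(usage_level({"a", "c"}))   # lower-medium
print(usage_level({"a", "d"}))   # upper-medium
print(usage_level({"b", "f"}))   # high
```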
Service providers would be expected to take all possible steps to establish, and transparently apply, procedures relating to possible breaches of the security of systems and services intended for their clients. Both factors were reported as limiting the use of cloud computing, particularly for large enterprises already using the cloud (46 % for both). In addition, the risk of a security breach was again a key consideration for enterprises in four economic sectors (see Figure 7).
The European Commission’s main innovation policy is the Broad-based innovation strategy for the EU.
This assumption doesn't always hold up unless the campus network design explicitly includes a BYOD use case. Both the ICX and FCX Series provide long-distance stacking links, permitting a single stack to extend beyond a single wiring closet. Any interface on the controller that is used to manage a WAP should be configured to trunk the different VLANs, including the default or native VLAN. Device modeling allows Network Sentry to interact with devices for dynamic monitoring and identification of new and existing device connections and to manage device VLAN assignments, which change based on the state of the device (Default, Isolation, Guest, and Employee). Because each OpenStack deployment and environment is different and support engagements vary widely from installation to installation, the costs of implementing the OpenStack framework were not included for either the VMware or the Red Hat platform.
Designing for AWS requires developer teams to significantly redesign their applications to account for the limitations and the quality of AWS infrastructure. You might want a disaster recovery solution like our Site Recovery Manager or vCloud Hybrid Service – Disaster Recovery for this situation. With VDP Advanced your backups could be replicated directly to another VDP Advanced virtual appliance so you could immediately restore from the 2nd appliance – no additional configuration or setup needed. What this means to you is VDP does all the required deduplication as soon as the backups are created, across all backups stored on the appliance.
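Deduplicating "as soon as the backups are created" means splitting the incoming stream into chunks, hashing each one, and storing only the chunks not already present. Here is a toy fixed-size-chunk version; it is a generic illustration of the idea, not the Avamar variable-length algorithm used by VDP Advanced.

```python
import hashlib
from typing import Dict, List

CHUNK_SIZE = 4096   # fixed-size chunks for simplicity; production engines use variable-length chunks

class DedupStore:
    def __init__(self) -> None:
        self.chunks: Dict[str, bytes] = {}    # digest -> chunk contents

    def ingest(self, data: bytes) -> List[str]:
        """Store a backup stream, keeping only chunks not already present.
        Returns the recipe (list of digests) needed to rebuild the stream."""
        recipe = []
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # duplicate chunks cost nothing extra
            recipe.append(digest)
        return recipe

store = DedupStore()
first = store.ingest(b"A" * 8192 + b"B" * 4096)    # stores two unique chunks
second = store.ingest(b"A" * 8192 + b"C" * 4096)   # only the new 'C' chunk is added
print(len(first) + len(second), "chunk references,", len(store.chunks), "unique chunks stored")
```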
Add it all up and you’re left with a significant performance and usability hit to the recovered VMs. The core difference is management of the hardware; virtualization requires internal management whereas cloud services are managed by the provider of the WAN.
Virtual servers can also be moved across physical systems to further align available resources with demand. New quota utilization reports compare utilization levels with assigned quota levels per organization. The key highlight of this release is the enablement of Software Defined Storage (SDS) footprint in Hyper-V environment through SCVMM. The onQ appliance comes in a few different configurations, but they're all installed and configured by using the same basic steps.
Quorum has comprehensive backup, but ASR's virtual mapping is the more scalable philosophy as the data and virtual infrastructure small businesses run grows more complex over time. The default Azure replication setting is set to 15 minutes, but for small workloads involving only a few VMs, the time to spin up in the Azure environment took only five minutes. Therefore, from the firms’ point of view (regardless of size), the risk of a security breach may be a matter of service providers’ liability and accountability, as well as a merely technical issue. A similar proportion of SMEs (32 %) already on the cloud regarded the high cost of cloud computing services as a limiting factor. Today, a desktop computer is commonly a laptop computer with a WiFi card so users expect to undock and move their device anywhere they need to use it. With VMware, you have access to cost-effective, highly automated, secure infrastructure with a level of control and quality that provides superior value to IT and business units. If you later decide to move that VM from backup storage to production, it often requires multiple steps to move and rehydrate the VMDKs and then rebuild them from the delta disks that were written while the “instant” VM ran. Results of the backup verification jobs are reported in the VDP Advanced user interface and email reports so that administrators have the utmost confidence that important VMs can definitely be restored when needed.
It is possible to lose data, as well as access to business-critical services, when cloud services fail. The solution radically simplifies the transformation of existing server capacity into virtual storage through the new integrated HP StoreVirtual deployment wizard for Hyper-V. A $9,000 annual overhead cost for preemptive backup and recovery would even give pause to an SMB owner with a Scarface-esque pile of cash on his or her desk to spend. Even when you add all of those factors up, you end up with a roughly $80-per-month cost to back up each VM or physical machine, up to 1 TB using locally redundant storage at one Azure data center. After identifying the appliance, you find it in the browser-based Quorum dashboard and click the "Protect Me" icon to automatically launch backup scripts and create a recovery node (RN). Without these capabilities, AWS lacks an effective noisy neighbor solution, forcing customers to seek other ways to manage their applications and performance. Plus you get the added benefit of using less storage for the primary backups so you save money on your overall backup solution! This is all cheaper if you're already using Microsoft's infrastructure, but even with that added overhead, ASR is significantly cheaper. Installing ASR isn't prohibitively difficult, but Quorum's on-site hardware and wizard-based hand-holding gets the DR solution up and running with far fewer manual steps. Azure is perfectly competent at handling typical DRaaS use cases, but Quorum's hybrid model makes it the more reliable solution. These missing features are all part of the hidden costs customers encounter when they switch to AWS and one of the key reasons they delay or cancel AWS migration projects.
Storage hypervisors were a popular buzzword-bingo topic in 2012, with plenty of industry adoption and some customer deployment.
While 2012 saw plenty of SDM buzz, including SDC and SDN, 2013 is already seeing an increase that includes software-defined servers and software-defined storage. Regardless of what your view of software-defined storage, storage hypervisors, storage virtualization and virtual storage is, the primary focus and goal should be addressing business and application needs. By Greg Schulz on Jan 27, 2015.


