Next-generation storage for the software-defined datacenter

This is a post from Siddhartha Roy, Group PM Manager, High Availability and Storage, and Paul Luber, Group PM Manager, Storage and File Systems.

Storage is a foundational component of the datacenter fabric and is an intrinsic part of Microsoft’s software-defined datacenter solution.  Our storage investments are centered on bringing value to customers in terms of increasing cloud scale, availability, performance, and reliability, while lowering acquisition and operational costs – with Windows Server, and now also with Microsoft Azure Stack.  

Storage Choice

Customers can have more than one type of storage, based on their needs, and we offer storage choices across the entire cloud spectrum. Microsoft has a rich ecosystem of Cloud OS Network partners who offer a spectrum of cloud solutions to customers. Specifically, we focus on the following scenarios to enable customer choice:

  • Private Cloud with Microsoft Software Defined Storage – more on this later
  • Hybrid Cloud with StorSimple – extend storage beyond the datacenter by tiering cold data to Azure
  • Microsoft Azure Public Cloud –  get instantaneous access to large scale storage on demand
  • Private Cloud with SAN and NAS storage from our rich partner ecosystem

Let’s explore one of the storage categories – Private Cloud with Software Defined Storage

What is Software-Defined Storage (SDS)?

The storage industry is going through interesting shifts, driven by several factors. Large-scale cloud services are influencing design points and enabling the use of standard volume hardware by moving more intelligence into storage software. Virtualization is driving the need for mobility and density, and containers will push that envelope further. Large levels of scale-out ensure that “pay-as-you-grow” models are seamless, elastic, and fluid.

Simply put – SDS is cloud scale storage and cost economics on standard volume hardware.

Our SDS journey

Our SDS solution was initially released in Windows Server 2012, bringing our public cloud experience to private clouds, and it has been evolving continually since then. Storage Spaces and Scale-Out File Server provide a private cloud storage solution on standard volume hardware. We’ve provided storage efficiency capabilities like tiering and deduplication while maintaining the principles of ease of management and operational simplicity. We delivered scale-out storage that is load balanced, with high performance enabled by SMB Direct. Features like SMB Multichannel and Storage Spaces provide resiliency against storage path failures, disk failures, and more. The ReFS file system enables detection and correction of latent bit rot. And much more.

Partners – We vet these Windows Server 2012 R2 SDS private cloud scenarios at scale with Cloud Platform System, our integrated private cloud hardware and software solution. We also provide Windows Server 2012 R2 shared JBOD storage solutions from our partners DataOn, Dell, HP, and RAID Inc. Storage Spaces solutions using shared SAS JBODs will continue to be supported.

Customers – Here are some customer case studies with Storage Spaces, Scale-Out File Servers with SMB3, or both.

Azure – Microsoft Azure public cloud storage is shaping and inspiring our private cloud storage journey for service providers and enterprises.  Microsoft Azure is a huge listening system for our private cloud storage scenarios. Alongside customer feedback, we are bringing Azure cloud scale and cost economics design points to our private cloud customers.  

Storage innovation for the software-defined datacenter

Now let’s fast forward to Windows Server 2016 and the newly announced Azure Stack. We are continuing the journey to embed more Azure design points in private cloud storage – thereby lowering costs further; not just acquisition costs, but operating costs as well. Let us highlight a few scenarios – unless mentioned otherwise, you can test these scenarios in Windows Server 2016 Technical Preview 2.

Storage QoS for more control and performance – We began the Storage QoS journey in Windows Server 2012 R2. With Technical Preview 2, customers can scale Storage QoS across unified storage and Hyper-V clusters. We are providing flexible ways for IT pros to define minimum and maximum IOPS for storage resources (which could be a VHD on a VM, a single VM, or a group of VMs in a service). You can put IO caps on rogue VMs and set minimum and maximum IO boundaries on VMs or specific VHDs. For example, a log VHD in your SQL Server VM may need a higher IOPS floor. Storage QoS delivers IO traffic shaping to the datacenter.
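
As a rough sketch of how this looks in PowerShell (the policy limits and the VM name "sql01" are illustrative, and cmdlet behavior may change between previews):

    # Create a policy with an IOPS floor and ceiling
    $policy = New-StorageQosPolicy -Name "SqlLog" -MinimumIops 500 -MaximumIops 5000
    # Attach the policy to the VM's virtual hard disks
    Get-VM -Name "sql01" | Get-VMHardDiskDrive |
        Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
    # Watch per-flow IO to spot rogue VMs
    Get-StorageQosFlow | Sort-Object InitiatorIOPS -Descending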

Storage Replica for protecting mission-critical data and workloads – Storage Replica enables synchronous replication, with zero data loss, at the volume level – not just Storage Spaces volumes, but any SAN or NAS volume attached to the Windows Server host. You can deploy Storage Replica in multiple modes: a stretch cluster across two sites with their own shared storage, between two clusters, or between two standalone servers. With Storage Replica, customers can now have mission-critical business continuity and disaster recovery at affordable price points.
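
A minimal sketch of the standalone server-to-server mode, assuming two servers that each have a data volume and a log volume (all computer, group, and volume names below are illustrative):

    # Pair a source and destination volume for synchronous replication
    New-SRPartnership -SourceComputerName "sr-srv01" -SourceRGName "rg01" `
        -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
        -DestinationComputerName "sr-srv02" -DestinationRGName "rg02" `
        -DestinationVolumeName "D:" -DestinationLogVolumeName "E:"
    # Inspect the replication groups and their state
    Get-SRGroup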

Azure-consistent object storage enables Azure object storage (Tables and Blobs) in the private cloud. If a developer writes a cloud-born application against the Azure storage APIs, the same application can run in the private cloud with minimal changes. This scenario will be available in a future preview.

VM storage resiliency for protecting virtual machines from transient failures in the underlying storage – VM storage resiliency monitors the state of storage, gracefully pauses VMs, and then resumes them when storage is available again. Because VMs respond gracefully to the state of storage, this reduces impact and increases the availability of workloads running in virtual machines when storage is unreliable.
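
In current preview builds this behavior can be tuned per VM from the Hyper-V PowerShell module; a hedged sketch (the VM name and timeout are illustrative):

    # Pause the VM on a critical storage error instead of failing it,
    # and keep it paused for up to 30 minutes while storage recovers
    Set-VM -Name "app01" -AutomaticCriticalErrorAction Pause `
        -AutomaticCriticalErrorActionTimeout 30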

Deduplication – We have redesigned the optimization processing to be fully parallelized for individual volumes, allowing simplified deployment with volumes of up to 64 TB. In addition, performance improvements allow the use of files up to 1 TB without restrictions. Using dedup to save storage space with virtualized backup solutions has been greatly simplified with the addition of the new “Backup” usage type.
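
For example, enabling the new usage type on a volume that holds virtualized backup data looks roughly like this (the drive letter is illustrative):

    # Enable deduplication tuned for virtualized backup workloads
    Enable-DedupVolume -Volume "E:" -UsageType Backup
    # Kick off an optimization pass and check the savings
    Start-DedupJob -Volume "E:" -Type Optimization
    Get-DedupStatus -Volume "E:"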

Rolling upgrades empower you to upgrade your storage clusters (and Hyper-V compute clusters) seamlessly to the latest OS version without incurring any downtime.
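
The per-node flow looks roughly like the following sketch (the node name is illustrative, and the exact steps may differ in the preview):

    # Drain roles off a node, then remove it for the OS upgrade
    Suspend-ClusterNode -Name "node1" -Drain
    Remove-ClusterNode -Name "node1"
    # ...clean-install the new OS on node1, then join it back...
    Add-ClusterNode -Name "node1"
    # Once every node runs the new OS, commit the new functional level
    Update-ClusterFunctionalLevel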

Storage Spaces Direct – We are introducing support for shared-nothing storage, which enables more device choice, lowers costs, and increases scale. You can now use Storage Spaces to build private cloud solutions using the internal storage in standard servers, increasing density and rack efficiency. Storage Spaces Direct removes the need for a shared SAS fabric, simplifying configuration and management, and instead uses the network for storage traffic. With Storage Spaces Direct, you get a large virtualized pool of reliable storage across disks on multiple servers connected by high-speed, low-cost storage networking. Our investments in RDMA and SMB Direct enable highly available clusters connected by high-speed, low-cost networks. To scale out, simply add more servers to increase capacity or IO performance – no SAS cables required.
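
A minimal sketch, assuming the storage nodes have already been formed into a failover cluster (the pool and volume names are illustrative, and cmdlet names may change between previews):

    # Pool the servers' internal disks into one clustered pool of storage
    Enable-ClusterStorageSpacesDirect
    # Carve a resilient, cluster-shared volume out of the pooled disks
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VDisk01" `
        -FileSystem CSVFS_ReFS -Size 2TB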

Storage Spaces Direct has two deployment options – converged (disaggregated) with storage and compute in different server tiers for independent scaling, or hyper-converged with storage and compute co-located on the same set of servers. Converged mode is available for evaluation in Technical Preview 2. We will offer hyper-converged mode for evaluation in subsequent previews.

We are also announcing our preliminary development partners who will offer solutions with Storage Spaces Direct.

System Center and PowerShell for managing Microsoft SDS, SAN, and NAS devices and FC fabric switches in a private cloud – At the heart of management is the Storage Management API (SMAPI), a common management interface used by System Center. In Virtual Machine Manager (VMM), administrators can provision storage, classify storage, and present storage to Hyper-V hosts, clusters, and VM guests. Operations Manager (OM) can monitor storage devices managed by VMM.
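
The same SMAPI surface that System Center builds on is also scriptable directly from the PowerShell Storage module; a small sketch (the pool and volume names are illustrative):

    # Enumerate the management providers and pools SMAPI knows about
    Get-StorageProvider
    Get-StoragePool | Select-Object FriendlyName, Size, AllocatedSize
    # Provision a new volume from a pool
    New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "TenantData" `
        -FileSystem NTFS -Size 500GB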

We are announcing System Center support for Storage Spaces Direct. System Center can bare-metal provision a Scale-Out File Server configured with Storage Spaces Direct, manage storage pools, create scale-out file shares, deploy virtual machines using rapid provisioning, manage Storage QoS policies, update and service cluster nodes, monitor the operational and health status of storage, and raise actionable alerts.

Wrapping up and useful links

We are offering customers choice for their private cloud storage needs – our rich ecosystem of SAN and NAS partner storage, or Microsoft Software Defined Storage.

Here are a few links to deployment and experience guides – Storage QoS, Storage Replica, Rolling Upgrades, and Storage Spaces Direct.

Please join us at the Ignite conference or visit back here to hear much more. Try our scenarios – we would love to hear your feedback in the Windows Server Preview discussion forum or on our new Windows Server – Storage UserVoice.


Microsoft Mechanics to bring you the latest on new server options

As technologists, we can find it hard to keep up, whether with new server options or with the growing ubiquity of approaches to resource management and automation. Beyond the news and our blog announcements, you may want to see the technology for yourself, get broader context, and understand its potential application without having to dedicate hours of your time to learning more.

Today marks the launch of Microsoft Mechanics (www.microsoft.com/mechanics), our official new show and video platform for IT professionals and tech enthusiasts. Shows feature informative demos, how-tos, and insights from the engineers and tech leaders behind our technology, airing each Wednesday or as news breaks, all in around ten minutes.

Our first show airs Wednesday, October 28th: “An Early Look at Containers in Windows, Hyper-V and Azure” with Mark Russinovich.

Stay informed: subscribe to Microsoft Mechanics to get updates on the latest shows, and follow us on Twitter to take part in our launch sweepstakes.

Enjoy!


Unveiling The Microsoft Cloud Platform System, powered by Dell

Yes!!! At last we can talk about what we have been working on for close to 18 months – “San Diego,” the code name for what is now the Microsoft Cloud Platform System (CPS), which was announced earlier today by Scott Guthrie.

These 18 months have been a journey for all of us in the product group. It began, as all journeys should, with conversations with our customers. What followed has changed how we think about and engineer our products. We were struck by the large number of customers who were failing to realize the benefits of the cloud. Running one of the largest public clouds, Microsoft Azure, we know what it takes to build and run a cloud, and we wanted to take these learnings – architectural, design, operational, and technological – and help you benefit from them. We have learned a lot along the way, and we want to pass all the knowledge we have gathered on to you. In essence, CPS is the culmination of this experience. With Dell as our partner, we are thrilled to bring an Azure-consistent cloud-in-a-box to your datacenter.

CPS – A customer-focused journey to solution delivery

As we set out to build CPS, we engaged many of our enterprise and service provider customers. What we found to be common among them was the challenge of taking hardware and software components and building them into a system that yielded robust cloud services. On this path too many customers were failing because of the challenges of complex system definition, hardware integration, and software deployment and configuration. In short, too many cloud computing projects could not fulfill the anticipated promise due to cost and complexity.

In building CPS, we decided to attack that problem armed with the experience of building and operating our Azure datacenters. First, we embarked on a system design that harvested the principles of our Azure architecture and composed a stamp architecture appropriate for customer datacenters.

A core element of this design is the work we put into failure mode analysis.  One constant that we recognize when operating at scale is that failures will happen.  And yet business-critical services cannot be impacted by these failures.  The CPS system architecture includes redundancy in the physical infrastructure as well as intelligence in the software that makes the solution resilient to failures.

Designing a resilient system requires a careful balance between availability and the cost of service delivery. As you design systems that are capable of surviving failures, it is easy to let costs balloon by over-engineering redundancy. Working closely with Dell, we made decisions that struck a careful balance between the cost of the infrastructure and the productivity of the system. This “sweet spot” was achieved by leveraging proven hardware components together with Microsoft’s software-defined datacenter technologies.

The last challenge that we took on was to minimize complexity. We heard from many of you that perhaps the most difficult part of complex system design is the challenge of integration. In CPS we worked directly with Dell and component manufacturers to ensure that drivers, firmware, software, and configurations came together in a reliable way. We have spent months operating these systems and putting them through some of the most rigorous testing our engineers can produce.

Now as we bring CPS to market, we have the confidence to not only stand behind our solution, but to proudly stand in front of it. In CPS, we are offering a unified support model: Microsoft. When customers encounter issues in a CPS environment, there is one number to call, and that is ours. Of course, if the issue lies in hardware, we will work with Dell to resolve it, but you as a customer are not burdened with figuring out which party is responsible for resolving your problem.

CPS – An integrated hardware and software cloud solution

CPS is a pre-integrated, pre-deployed, Microsoft-validated solution built on Dell hardware, Windows Server 2012 R2, System Center 2012 R2, and Windows Azure Pack. It combines the efficiency and agility of cloud computing with the increased control and customization achieved in virtualized, multi-tenant environments. CPS scales from a single rack up to four racks and is optimized for Infrastructure-as-a-Service (IaaS, for Windows and Linux) and Platform-as-a-Service (PaaS) style deployments.

Let’s take a closer look at CPS

 

At the hardware layer, a customer can deploy CPS in increments from one to four racks. Each rack has:

  • 512 cores across 32 servers (each with dual-socket Intel Ivy Bridge E5-2650v2 CPUs)
  • 8 TB of RAM with 256 GB per server
  • 282 TB of usable storage
  • 1360 Gb/s of internal rack connectivity
  • 560 Gb/s of inter-rack connectivity
  • Up to 60 Gb/s connectivity to the external world

A single rack can support up to 2,000 VMs (2 vCPUs, 1.75 GB RAM, and a 50 GB disk each). You can scale up to 8,000 VMs using a full stamp of four racks. Of course, customers have the flexibility to choose their own VM dimensions, as we have seen with the private preview deployments of CPS.

CPS uses software components that customers are familiar with (Windows Server 2012 R2, System Center 2012 R2, and Windows Azure Pack), so there is no retooling needed to operate CPS. It comes with integrated anti-virus, fabric-based backup for all VMs, disaster recovery, orchestrated patching, monitoring, an Azure-consistent self-service portal (Windows Azure Pack) for tenants, a REST-based API for programmatic interaction, and automation using PowerShell. CPS also provides PaaS services such as “Websites” and Database-as-a-Service. There are no additional components to buy to make it a complete cloud solution.

There is a lot we want to share about CPS, and this is just the start. We will be at TechEd Europe in Barcelona, Spain (October 28–31), where we will have many sessions on CPS. In particular, we have an overview session (CDP-B232) and an architectural deep dive (CDP-B341) that are a great way to get an understanding of what CPS is and how it works. You can find out more about CPS here. See you in Barcelona!

CPS Engineering Team          


Microsoft Loves Linux

In a press and analyst briefing a few months back, Microsoft CEO Satya Nadella put up a slide proclaiming “Microsoft ♥ Linux”.  Wow!  What a great slide and what a change for Microsoft!  The trade press picked up on this slide in a major way, with a number of articles echoing this new approach to Linux and open source within Microsoft.  And they’re right!

But you may ask “Why is Microsoft working with Linux and open source?”, or “What’s Microsoft’s plan going forward?”, or “What does ‘Microsoft ♥ Linux’ mean for me as a customer?”

At the core, “Microsoft ♥ Linux” is driven by what we’ve heard from you as customers. You run workloads on Windows. You run workloads on Linux. You run these workloads in your on-premises datacenters, hosted at service providers, and in public clouds. You just want it all to work, and to work together, regardless of the operating system. We hear you, and understand that your world is heterogeneous. Bottom line, this is a business opportunity for Microsoft to offer heterogeneous support – both Windows and Linux workloads – on-premises and in a public cloud. Microsoft can add real value in a heterogeneous cloud.

It may come as a surprise, but Microsoft has been working with Linux for a number of years.  System Center Operations Manager has offered Linux and UNIX monitoring since 2009.   Drivers for running Linux guests on Hyper-V became widely available for a number of distros in 2010, and we even have drivers for running FreeBSD guests on Hyper-V.  Microsoft Azure offered Linux VMs on “day 1” of the Azure IaaS general availability in 2013.

We’ve built a significant customer base that is using Linux with Microsoft products. Several hundred thousand Linux and UNIX servers in production usage today are managed by System Center, with the largest customers managing nearly 10,000 Linux servers. Customers such as Ancestry.com, Equifax, the United Kingdom government FCO Services, and Europcar operate Microsoft clouds on-premises running Hyper-V and System Center with many VMs running Linux. More than 20% of the VMs in Azure IaaS are running Linux. Azure is offering the HDInsight (Hadoop) service running on Linux in addition to running on Windows. And if you look more broadly, Microsoft offers key productivity software such as Office 365, Skype, and RDP clients on Linux-based and BSD-based client operating systems such as iOS, Android, and Mac OS X.

What does this all add up to?  Working with Linux isn’t new at Microsoft.  In fact, Linux is already a sizable commitment for Microsoft that is now getting a higher public profile.  We see executing on that commitment as a critical part of what we offer customers.

Linux in your datacenter

Microsoft is making huge investments in the foundational cloud technologies that are described in other entries in this blog series:  Compute, Networking, and Storage.  These investments are informed by our experience with the hyper-scale Azure public cloud.  They are also independent of the guest operating system, so they work for both Windows and Linux.  Great features like storage quality-of-service, network virtualization, and super-fast live migration using RDMA work for Linux just like they work for Windows.  In the product development teams, when we envision and design new capabilities for the cloud foundation, we ask “How does this work for Windows?” and we ask “How does this work for Linux?”   As a result, the Microsoft offering for on-premises datacenters is fundamentally heterogeneous, able to run Windows and Linux guests in a unified fashion.

Of course, some capabilities require the cooperation of the guest OS.  For these capabilities, Microsoft developers write the Linux device driver code for Hyper-V and participate in the Linux community to get the code into the upstream Linux kernel at kernel.org.  Then we engage with distro vendors like Red Hat, Canonical, Oracle, and SUSE to enable full support on Hyper-V for these distros that you are probably running.  As a result, Linux runs great on Hyper-V!
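
Once those drivers are in place, you can confirm from the Hyper-V host that a Linux guest’s integration services are loaded and healthy; a quick sketch (the VM name is illustrative):

    # Report the state of each integration service in the Linux guest
    Get-VMIntegrationService -VMName "rhel01" |
        Select-Object Name, Enabled, PrimaryStatusDescription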

We also invest in the management layer. We are announcing that the first version of PowerShell Desired State Configuration (DSC) for Linux is now available. With DSC for Linux, you can do consistent configuration management across Windows and Linux. On Linux you can install packages, configure files, create users and groups, and set up services. DSC for Linux is also an open source project, available on GitHub.
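
To give a feel for it, here is a hedged sketch of a DSC configuration that uses the open source nx resource module (the node and package names are illustrative, and resource properties may evolve with the project):

    Configuration ApacheOnLinux {
        Import-DscResource -ModuleName nx
        Node "linuxhost01" {
            # Ensure the web server package is installed via apt
            nxPackage apache2 {
                Name           = "apache2"
                Ensure         = "Present"
                PackageManager = "apt"
            }
            # Ensure the service is enabled and running
            nxService apache2svc {
                Name       = "apache2"
                Controller = "init"
                Enabled    = $true
                State      = "Running"
            }
        }
    }
    # Compile the MOF, then push it to the Linux node over CIM/OMI
    ApacheOnLinux -OutputPath "C:\dsc\linux"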

Our enterprise management functionality in System Center Operations Manager, Configuration Manager, Virtual Machine Manager, and Data Protection Manager manages Linux right alongside Windows so that you can have a single systems management infrastructure for your heterogeneous datacenter.   We’ve taken System Center management beyond just the Linux operating system, and into open source middleware such as Tomcat, JBoss, Apache Web Server, and MySQL.  Also, we have extended our hybrid services to include Linux — for example, Azure Site Recovery between on-premises datacenters (or service providers) and Azure.

Linux in Microsoft Azure

As we’re doing for the on-premises datacenter, Microsoft is making huge investments in the Azure public cloud.  Again, our goal is that everything in Azure works for Linux VMs just like it works for Windows VMs.  Capabilities like the huge “G” series VM sizes, Premium Storage, and Azure Backup for VMs are available for both Windows and Linux, as is a range of extensions for custom scripting, regaining access, and OS patching.  Some capabilities, such as integration with Docker, Chef, and other open source projects, are available to you on Linux before they are available on Windows.

Azure offers a range of enterprise-ready Linux distros: SUSE Linux Enterprise Server, openSUSE, Ubuntu, Oracle Linux, and CoreOS, as well as community distros such as CentOS. Or you can upload your own custom Linux image.

If you are consuming Azure services, you want the flexibility to access those services from a Windows computer, or from a Linux or Mac OS X computer. For starters, you’ve probably used the Azure portal, which is an HTML5 web application that works in browsers running on Windows as well as browsers on Linux and Mac OS X. But as your usage progresses, you may want to integrate Azure into your operational processes. On Windows, PowerShell is the primary scripting and automation interface. For Linux and Mac OS X (and Windows), Azure offers a node.js-based package of commands for scripting and automating the full lifecycle of Azure services.

In Azure datacenters, Microsoft personnel are now operating PaaS services based on Linux as well as services based on Windows.  The HDInsight (Hadoop) service is the first to be available on Linux, and it makes good business sense for other services using “born on Linux” open source projects to just run on Linux rather than being ported to Windows.  Internal tools for monitoring, diagnosing, patching, and meeting compliance requirements have been extended to include these Linux-based services.

Summary

Microsoft is doing a lot of work with Linux – for on-premises datacenters and service providers, as well as in the Azure public cloud. We know you run workloads on both Windows and Linux. We’ve made running and managing Linux workloads a fundamental part of our product offering so that the result is well integrated and just works. Go to www.microsoft.com/open to learn more about the investments we’re making. Remember, “Microsoft ♥ Linux”!


August updates for Windows Server 2012 R2

In today’s Windows Experience blog, we communicated our intent to deliver regular improvements to Windows in order to address feedback quickly while continuing to bring you an enterprise-class platform.  We also provided a heads up that the next update for Windows happens on August 12. 

On August 12, we will also release an update to Windows Server 2012 R2. In addition to regular security updates, this update will deliver bug fixes that improve performance and reliability for your infrastructure. There are no changes to system APIs, so your applications should “just work” without the need for re-certification or re-validation.

Similar to Windows 8.1, we will make these Windows Server 2012 R2 updates available starting August 12 through existing distribution mechanisms, including Windows Update and Windows Server Update Services.


What’s new in Windows Server 2016 Technical Preview 2

Earlier today we announced the release of Windows Server 2016 Technical Preview 2. We hope that in the coming weeks, you’ll take the time to try the preview and experience the new features first-hand. But to give you a snapshot of the technology innovation being delivered, we have compiled a favorites list. This list isn’t intended to be a full catalog of what’s coming. Our goal is to show off how new approaches to infrastructure are going to make a material difference in the way you approach IT challenges. Technology innovation fuels business innovation, and we’re excited to see the ways that our customers are going to use these new features to drive competitive value. So let’s take a look at what made the highlights reel.

Compute and Virtualization: Simplified upgrades, new installation options, and increased resilience, helping you ensure the stability of the infrastructure without limiting agility.

  1. Rolling upgrades for Hyper-V and scale-out file server clusters for faster adoption of new operating systems
  2. Functionality for hot add and remove memory and NIC, reducing downtime
  3. Virtual machine compute resiliency, so that virtual machines continue running even if the compute cluster fabric service fails
  4. Nano Server, a deeply refactored version of Windows Server with a small footprint and remotely managed installation, optimized for the cloud and a DevOps workflow

Networking: Continued investment to make networking as flexible and cost-effective as possible while ensuring high performance.

  1. Converged NIC across tenant and RDMA traffic to optimize costs, enabling high performance and network fault tolerance with only 2 NICs instead of 4
  2. PacketDirect on 40G to optimize performance

Storage: Expanding capabilities in software-defined storage with an emphasis on resilience, reduced cost, and increased control.

  1. Virtual Machine Storage Path resiliency, enabling virtual machines to pause and restart gracefully in response to either transient or permanent storage path failures
  2. Storage Spaces Direct to enable aggregation of Storage Spaces across multiple servers, pushing the cost of storage down while allowing for increased scale out
  3. Storage quality of service (QoS) for more control and predictable performance
  4. Storage Replica, giving you synchronous storage replication for affordable business continuity and disaster recovery strategies

Security and Assurance: Protecting against today’s threats with a “zero-trust” approach to security that is rooted in the hardware.

  1. New Host Guardian Service, part of a trust and isolation boundary between the cloud infrastructure and guest OS layers
  2. Just Enough Administration to reduce the risk of security breaches by allowing users to perform only specific tasks

Management: Ongoing advances to simplify server management and increase consistency in approach.

  1. PowerShell Desired State Configuration (DSC) for easier, more consistent, and faster deployment and updates
  2. PowerShell Package Manager for unified package management and deployment (see the sketch after this list)
  3. Windows Management Framework 5.0 April Preview and DSC Resource Kit (available online simultaneously with TP2)
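
As a taste of the unified package management surface mentioned above, here is a hedged sketch (the package name is illustrative, and repository availability varies by preview build):

    # Discover the installed package providers
    Get-PackageProvider
    # Find a package and install it through whichever provider carries it
    Find-Package -Name "Pester" | Install-Package -Force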

And much more, including new features for IIS, RDS, and AD such as:

  1. Conditional access control in AD FS, which lets you require a policy-compliant device before granting access to resources
  2. Support for application authentication with OpenID Connect and OAuth, making it easier to build mobile enterprise applications
  3. Full OpenGL support with RDS for VDI scenarios
  4. Server-side support for HTTP/2, including header compression, connection multiplexing, and server push

So what’s the next step? Check out the Windows Server 2016 Technical Preview 2 here, and start learning more about what’s new and notable.

Please note that this is pre-released software; features and functionality may differ in the final release.
