Tag Archives: Deployment

Developers could ease DevOps deployment with CircleCI Orbs

CI/CD platform provider CircleCI has introduced a suite of 20 integrations that automate deployment and were developed with prominent partners including AWS, Azure, Google Cloud, VMware and Salesforce.

These integrations, known as CircleCI Orbs, enable developers to quickly automate deployments directly from their CI/CD pipelines. CircleCI launched Orbs in November 2018, and today more than 1,200 are listed in its registry. Users created the vast majority of those, however; the difference with CircleCI’s internally created orbs is that they’re backed by vendor support.

CircleCI Orbs are shareable configuration packages for development builds, said Tom Trahan, CircleCI’s vice president of business development. The orbs define reusable commands, executors and jobs so that commonly used pieces of configuration can be condensed into a single line of code, he said.

The process of automating deployment can be challenging, which is why CircleCI added this suite of out-of-the-box integrations.

Orbs have two primary benefits for developers, said Chris Condo, an analyst at Forrester Research. “They can be certified by the third parties that create them, and they are maintainable pieces of code that contain logic, actions and connections to CD [continuous delivery] capabilities,” he said.

The orbs help CircleCI operate in an increasingly competitive market that includes open source Jenkins, the commercial CloudBees Jenkins Platform, GitLab and GitHub, as well as cloud platform providers such as AWS and Microsoft.

“When we launched Orbs, it was because our customers were asking us for a way to operate the same way that they operate within the broader open source world, particularly when you think about open source frameworks for various languages,” Trahan said. “Orbs are very similar in design to the best package managers that you see — like npm for Node.js, or like the Java library or Ruby Gems.”

These are all frameworks created so that bundles of code could be packaged up and made available to developers, which is what the CircleCI Orbs do, Trahan added.

Developers don’t want to have to “reinvent the wheel,” when they can simply access bundles of code and best practices that others have already developed, he said.

Multi-cloud trend drives need for easier deployment

Anything that removes boring configuration work from a developer’s plate is likely to be welcome, said James Governor, an analyst at RedMonk, based in Portland, Maine.

“CircleCI building out a catalog of deployment orbs makes a lot of sense, particularly as the market becomes increasingly multi-cloud oriented,” Governor said. “Enterprises want to see their vendors offer a wide range of supported platforms. The Orb approach allows for standardized, repeatable deployments and rollbacks.”

However, the process of automating deployments can be problematic for some teams because of the time it takes to write integrations with services such as AWS ECS or Google Cloud Run, Trahan said. The CircleCI deployment orbs are designed to limit the complexity and time spent creating integrations.

“Customers are asking for simpler ways to connect their dev and CD processes; Orbs helps them do that,” Forrester’s Condo said. “So I see Orbs as a very nice evolutionary step that allows teams to build maintainable abstractions between their development and deployment processes.”

How commercially successful the new suite of Orbs will be remains to be seen, but conceptually, the approach has been embraced by CircleCI users. Since their launch in November 2018, CircleCI orbs are now used by 13,000 user organizations, with around 40,000 repositories and nine million CI/CD pipelines, Trahan said.

Pricing for CircleCI’s CI/CD pipeline services is free for small teams and starts at $30 a month for teams with four or more developers. Pricing for enterprise customers starts at $3,000 a month. The orbs are free for all CircleCI users.

How to manage Exchange hybrid mail flow rules

An Exchange hybrid deployment generally provides a good experience for the administrator, but it can be found lacking in a few areas, such as transport rules.

Transport rules — also called mail flow rules — identify and take actions on all messages as they move through the transport stack on the Exchange servers. Exchange hybrid mail flow rules can be tricky to set up properly to ensure all email is reviewed, no matter if mailboxes are on premises or in Exchange Online in the cloud.

Transport rules solve many compliance-based problems that arise in a corporate message deployment. They add disclaimers or signatures to messages. They funnel messages that meet specific criteria for approval before they leave your control. They trigger encryption or other protections. It’s important to understand how Exchange hybrid mail flow rules operate when your organization runs a mixed environment.

Mail flow rules and Exchange hybrid setups

The power of transport rules stems from their consistency. For an organization with compliance requirements, transport rules are a reliable way to control all messages that meet defined criteria. Once you develop a transport rule for certain messages, there is some comfort in knowing that a transport rule will evaluate every email. At least, that is the case when your organization is only on premises or only in Office 365.

Things change when your organization moves to a hybrid Exchange configuration. While mail flow rules evaluate every message that passes through the transport stack, that does not mean that on-premises transport rules will continue to evaluate messages sent to or from mailboxes housed in Office 365 and vice versa.

Depending on your routing configuration, email may go from an Exchange Online mailbox and out of your environment without an evaluation by the on-premises transport rules. It’s also possible that both the mail flow rules on premises and the other set of mail flow rules in Office 365 will assess every email, which may cause more problems than not having any messages evaluated.

To avoid trouble, you need to consider the use of transport rules both for on-premises and for online mailboxes and understand how the message routing configuration within your hybrid environment will affect how Exchange applies those mail flow rules.

Message routing in Exchange hybrid deployments

A move to an Exchange hybrid deployment requires two sets of transport rules. Your organization needs to decide which mail flow rules will be active in which environment and how the message routing configuration you choose affects those transport rules.

All message traffic that passes through an Exchange deployment will be evaluated by the transport rules in that environment, but the catch is that an Exchange hybrid deployment consists of two different environments, at least where transport rules are concerned. A message sent from an on-premises mailbox to another on-premises mailbox generally won’t pass through the transport stack, and, thus, the mail flow rules, in Exchange Online. The opposite is also true: Messages sent from an online mailbox to another online mailbox in the same tenant will not generally pass through the on-premises transport rules. Copying the mail flow rules from your on-premises Exchange organization into your Exchange Online tenant does not solve this problem by itself, and it can lead to some messages being handled by the same transport rule twice.

When you configure an Exchange hybrid deployment, you need to decide where your mail exchange (MX) record points. Some organizations choose to have the MX record point to the existing on-premises Exchange servers and then route message traffic to mailboxes in Exchange Online via a send connector. Other organizations choose to have the MX record point to Office 365 and then route message traffic to the on-premises servers.

There are more decisions to be made about the way email leaves your organization as well. By default, an email sent from an Exchange Online mailbox to an external recipient will exit Office 365 directly to the internet without passing through the on-premises Exchange servers. This means that transport rules, which are intended to evaluate email traffic before it leaves your organization, may never have that opportunity.
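
If you are unsure which path your outbound mail takes, you can check whether the Hybrid Configuration Wizard enabled centralized mail transport, the option that routes Exchange Online outbound mail back through the on-premises servers. Here is a minimal sketch, run from the on-premises Exchange Management Shell:

# If CentralizedTransport appears in the feature list, mail leaving Exchange
# Online routes back through the on-premises servers (and their transport
# rules) before it exits the organization.
Get-HybridConfiguration | Select-Object -ExpandProperty Features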

Exchange hybrid mail flow rules differ for each organization

No two organizations are alike, which means there is more than one resolution for working with Exchange hybrid mail flow rules.

Organizations that want to copy transport rules from on-premises Exchange Server into Exchange Online can use PowerShell. The Export-TransportRuleCollection cmdlet works on all currently supported versions of on-premises Exchange Server. It creates an XML file that you can load into your Exchange Online tenant with another cmdlet, Import-TransportRuleCollection. This is a good first step to ensure all mail flow rules are the same in both environments, but that’s just part of the work.
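
As a sketch of that process, assuming the export runs in the on-premises Exchange Management Shell and the import runs in an Exchange Online PowerShell session (the file path is illustrative):

# On-premises: export the rule collection and save its byte payload to disk.
$file = Export-TransportRuleCollection
Set-Content -Path "C:\Temp\Rules.xml" -Value $file.FileData -Encoding Byte

# Exchange Online: read the bytes back and import the collection.
[Byte[]]$data = Get-Content -Path "C:\Temp\Rules.xml" -Encoding Byte -ReadCount 0
Import-TransportRuleCollection -FileData $data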

Transport rules, like all Exchange Server features, have evolved over time. They may not work the same in all supported versions of on-premises Exchange Server and Exchange Online. Simply exporting and importing your transport rules may cause unexpected behavior.

One way to resolve this is to duplicate transport rules in both environments by adding two more transport rules on each side. The first new transport rule checks the message header and tells the transport stack — both on premises and in the cloud — that the message has already been through the transport rules in the other environment. This rule should include a statement to stop processing any further transport rules. A second new transport rule should add a header indicating that the message has already been through the transport rules in one environment. This is a difficult setup to get right and requires a good deal of care to implement properly if you choose to go this route.
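
Here is a minimal sketch of the two on-premises rules; the header name is hypothetical, and you would create the mirror-image pair in Exchange Online. The check rule must run before all other rules, and the stamp rule after them:

# Rule 1 (highest priority): if the other environment already stamped the
# message, stop processing any further transport rules.
# "X-Contoso-RulesApplied" is an illustrative header name; choose your own.
New-TransportRule -Name "Skip already-processed mail" -Priority 0 `
    -HeaderContainsMessageHeader "X-Contoso-RulesApplied" `
    -HeaderContainsWords "true" `
    -StopRuleProcessing $true

# Rule 2 (lowest priority, after all other rules): stamp messages so the
# other environment knows the rules here have already run.
New-TransportRule -Name "Stamp processed header" `
    -SetHeaderName "X-Contoso-RulesApplied" -SetHeaderValue "true"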

I expect that the fairly new hybrid organization transfer feature of the Hybrid Configuration Wizard will eventually handle the export and import of transport rules, but that won’t solve the routing issues or the issues with running duplicate rules.

Google joins bare-metal cloud fray

Google has introduced bare-metal cloud deployment options geared for legacy applications such as SAP, for which customers require high levels of performance along with deeper virtualization controls.

“[Bare metal] is clearly an area of focus of Google,” and one underscored by its recent acquisition of CloudSimple for running VMware workloads on Google Cloud, said Deepak Mohan, an analyst at IDC.

IBM, AWS and Azure have their own bare-metal cloud offerings, which allow them to support an ESXi hypervisor installation for VMware, and Bare Metal Solution will apparently underpin CloudSimple’s VMware service on Google, Mohan added.

But Google will also be able to support other workloads that can benefit from bare metal availability, such as machine learning, real-time analytics, gaming and graphical rendering. Bare-metal cloud instances also avert the “noisy neighbor” problem that can crop up in virtualized environments as clustered VMs seek out computing resources, and do away with the general hit to performance known commonly as the “hypervisor tax.”

Google’s bare-metal cloud instances offer a dedicated interconnect to customers and tie into all native Google Cloud services, according to a blog post. The hardware has been certified to run “multiple enterprise applications,” including ones built on top of Oracle’s database, Google said.

Oracle, which lags far behind in the IaaS market, has sought to preserve some of those workloads as customers move to the cloud.

Earlier this year, it formed a cloud interoperability partnership with Microsoft, pushing a use case wherein customers could run enterprise application logic and presentation tiers on Azure infrastructure, while tying back to an Oracle database running on bare-metal servers or specialized Exadata hardware in Oracle’s cloud.

Not all competitive details laid bare

Overall, bare-metal cloud is a niche market, but by some estimates it is growing quickly.

Among hyperscalers such as AWS, Google and Microsoft, the battleground is in early days, with AWS only making its bare-metal offerings generally available in May 2018. Microsoft has mostly positioned bare metal for memory-intensive workloads such as SAP HANA, while also offering it underneath CloudSimple’s VMware service for Azure.

Meanwhile, Google’s bare-metal cloud service is fully managed by Google, provides a set of provisioning tools for customers, and will have unified billing with other Google Cloud services, according to the blog.

How smoothly this all works together could be a key differentiator for Google in comparison with rival bare-metal providers. Management of bare-metal machines can be more granular than traditional IaaS, which can mean increased flexibility as well as complexity.

Google’s Bare Metal Solution instances are based on x86 systems that range from 16 cores with 384 GB of DRAM, to 112 cores with 3,072 GB of DRAM. Storage comes in 1 TB chunks, with customers able to choose between all-flash or a mix of storage types. Google also plans to offer custom compute configurations to customers with that need.

It also remains to be seen how price-competitive Google is on bare metal compared with competitors, which include providers such as Packet, CenturyLink and Rackspace.

The company didn’t immediately provide costs for Bare Metal Solution instances, but said the hardware can be purchased via monthly subscription, with the best deals for customers that sign 36-month terms. Google won’t charge for data movement between Bare Metal Solution instances and general-purpose Google Cloud infrastructure if it occurs in the same cloud region.

What are the Azure Stack HCI deployment, management options?

There are several management approaches and deployment options for organizations interested in using the Azure Stack HCI product.

Azure Stack HCI is a hyper-converged infrastructure product, similar to other offerings in which each node holds processors, memory, storage and networking components. Third-party vendors sell the nodes that can scale should the organization need more resources. A purchase of Azure Stack HCI includes the hardware, Windows Server 2019 operating system, management tools, and service and support from the hardware vendor. At time of publication, Microsoft’s Azure Stack HCI catalog lists more than 150 offerings from 19 vendors.

Azure Stack HCI, not to be confused with Azure Stack, gives IT pros full administrator rights to manage the system.

Tailor the Azure Stack HCI options for different needs

The basic components of an Azure Stack HCI node might be the same, but an organization can customize them for different needs, such as better performance or lowest price. For example, a company that wants to deploy a node in a remote office/branch office might select Lenovo’s ThinkAgile MX Certified Node, or its SR650 model. The SR650 scales to two nodes that can be configured with a variety of processors offering up to 28 cores, up to 1.5 TB of memory, hard drive combinations providing up to 12 TB (or SSDs offering more than 3.8 TB), and networking with 10/25 GbE. Each node comes in a 2U physical form factor.

If the organization needs the node for more demanding workloads, one option is Fujitsu’s Primeflex for Microsoft Azure Stack HCI. Node models such as the all-SSD Fujitsu Primergy RX2540 M5 scale to 16 nodes. Each node can range from 16 to 56 processor cores, up to 3 TB of SSD storage and 25 GbE networking.

Management tools for Azure Stack HCI systems

Microsoft positions the Windows Admin Center (WAC) as the ideal GUI management tool for Azure Stack HCI, but other familiar utilities will work on the platform.

The Windows Admin Center is a relatively new browser-based tool for consolidated management for local and remote servers. The Windows Admin Center provides a wide array of management capabilities, such as managing Hyper-V VMs and virtual switches, along with failover and hyper-converged cluster management. While it is tailored for Windows Server 2019 — the server OS used for Azure Stack HCI — it fully supports Windows Server 2012/2012 R2 and Windows Server 2016, and offers some functionality for Windows Server 2008 R2.

Azure Stack HCI users can also use more established management tools such as System Center. The System Center suite components handle infrastructure provisioning, monitoring, automation, backup and IT service management. System Center Virtual Machine Manager provisions and manages the resources to create and deploy VMs, and handle private clouds. System Center Operations Manager monitors services, devices and operations throughout the infrastructure.

Other tools are also available, including PowerShell (both Windows PowerShell and the open source PowerShell Core) and third-party products such as 5nine Manager for Windows Server 2019 Hyper-V management, monitoring and capacity planning.
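
As a rough sketch of the kind of checks PowerShell enables, run from a cluster node with the FailoverClusters, Storage and Hyper-V modules available (output will vary with your configuration):

# Quick health pass over an Azure Stack HCI (Storage Spaces Direct) cluster
Get-ClusterNode | Select-Object Name, State
Get-StoragePool -IsPrimordial $false | Select-Object FriendlyName, HealthStatus
Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus
Get-VM | Select-Object Name, State, CPUUsage, MemoryAssigned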

It’s important to check over each management tool to evaluate its compatibility with the Azure Stack HCI platform, as well as other components of the enterprise infrastructure.

How to keep VM sprawl in check

During the deployment of virtual environments, the focus is on the design and setup. Rarely are the environments revisited to check if improvements are possible.

Virtualization brought many benefits to data center operations, such as reliability and flexibility. One drawback is that it can lead to VM sprawl: the generation of more and more VMs that contend for a finite amount of resources. VMs are not free; storage and compute have a real capital cost. This cost gets amplified if you look to move these resources into the cloud. It’s up to the administrator to examine the infrastructure resources and make sure these VMs have just what they need, because the costs never go away and typically never go down.

Use Excel to dig into resource usage

One of the fundamental tools you need for this isn’t Hyper-V or some virtualization product — it’s Excel. Dashboards are nice, but there are times you need the raw data for more in-depth analysis. Nothing can provide that like Excel.

Most monitoring tools export data to CSV format. You can import this file into Excel for analysis. Shared storage is expensive, so I always like to see a report on drive space. It’s interesting to see what servers consume the most drive space, and where. If you split your servers into a C: for the OS and D: for the data, shouldn’t most of the C: drives use the same amount of space? Outside of your application install, why should the C: drives vary in space? Are admins leaving giant ISOs in the download folder or recycle bin? Or are multiple admins logging on with roaming profiles?
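
If your monitoring tool doesn’t produce the report you want, a short PowerShell pass can build the CSV for Excel itself. This sketch assumes a hypothetical servers.txt file listing your server names and CIM/WinRM access to each:

# Gather local disk usage (DriveType=3) from each server and export for Excel.
$servers = Get-Content -Path .\servers.txt
$report = foreach ($server in $servers) {
    Get-CimInstance -ClassName Win32_LogicalDisk -ComputerName $server -Filter "DriveType=3" |
        Select-Object PSComputerName, DeviceID,
            @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB, 1) } },
            @{ n = 'FreeGB'; e = { [math]::Round($_.FreeSpace / 1GB, 1) } }
}
$report | Export-Csv -Path .\DriveSpace.csv -NoTypeInformation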

Whatever the reason, runaway C: drives can chew up your primary storage quickly. If it is something simple such as ISO files that should have been removed, keep in mind that this affects your backups as well. You can just buy additional storage in a pinch and, because many of us in IT are on autopilot mode, it’s easy not to give drive space issues a second thought.

Overallocation is not as easy to correct

VM sprawl is one thing, but when was the last time you looked at the resources you allocated to those VMs to see what they are actually using? The allocation process is still a bit of a guess until things get up and running fully. Underallocation is often noticed promptly and corrected quickly, and everything moves forward.

Do you ever check for overallocation? Do you ever go back and remove extra CPU cores or RAM? In my experience, no one ever does. If everything runs well, there’s little incentive to make changes.
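
Hyper-V’s own cmdlets make a first pass at this easy. A minimal sketch, run on a Hyper-V host; note that MemoryDemand is only populated for VMs using dynamic memory:

# Compare what each running VM was given against what it is actually demanding.
Get-VM | Where-Object State -eq 'Running' |
    Select-Object Name, ProcessorCount,
        @{ n = 'AssignedGB'; e = { [math]::Round($_.MemoryAssigned / 1GB, 1) } },
        @{ n = 'DemandGB'; e = { [math]::Round($_.MemoryDemand / 1GB, 1) } } |
    Sort-Object Name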

Some in IT like to gamble and assume everything will run properly most of the time, but it’s less stressful to prepare for some of these unlikely events. Is it possible that a host or two will fail, or that a network issue strikes your data center? You have to be prepared for failure and at a scale that is more than what you might think. We all know things will rarely fail in a way that is favorable to you. A review process could reveal places that could use an adjustment to drain resources from overallocated VMs to avoid trouble in the future.

Look closer at all aspects of VM sprawl to trim costs

Besides the resource aspect, what about the licensing cost? With more and more products now licensed by core, overallocation of resources has an instant impact on the initial application cost, but it gets worse: it’s the annual maintenance costs that pick at your budget and drain your resources for no gain if you cannot tighten your resource allocation.

One other maintenance item that gets overlooked is reboots. When a majority of Windows Server deployments moved from hardware to virtualization, the runtime typically increased. This increase in stability brought with it an inadvertent problem. Too often, busy IT shops without structured patching and reboot cycles only performed these tasks when a server went offline, which — for better or worse — created a maintenance window.

With virtualization, the servers tend to run for longer stretches and show more unique issues. Memory leaks that might have gone unnoticed before — because they were reset during a reboot — can affect servers in unpredictable ways. Virtualization admins need to be on alert to recognize behaviors that might be out of the norm. If you right-size your VMs, you should have enough resources for them to run normally and still handle the occasional spikes in demand. If you see your VMs requiring more resources than normal, this could point to resource leaks that need to be reset.

Often, the process of getting systems online is rushed, which leads to VM sprawl and overlooks any attempt at optimization. This can be anything from overallocation to simple cleanup. If this isn’t done, you lose out on ways to make the environment more efficient, losing both performance and capacity. While this all makes sense, it’s important to follow through and actually do it.

Watch Communications and Microsoft announce partnership to bring broadband internet to Indiana, Ohio and Illinois – Stories

Deployment of technologies, including TV white spaces, is expected to cover more than four million people in the region, including 815,000 people in rural areas currently without access to broadband

REDMOND, Wash. — July 9, 2019 — On Tuesday, Watch Communications and Microsoft Corp. announced an agreement aimed at closing the broadband gap, and the rural digital divide in particular, in the states of Indiana, Ohio and Illinois. The partnership is part of the Microsoft Airband Initiative, which is focused on extending broadband access to three million people in rural America by July 2022.

The FCC reports that more than 21 million Americans lack broadband access. According to Microsoft data, 162 million people across the United States are not using the internet at broadband speeds, including approximately 17 million people in Indiana, Ohio and Illinois. Watch Communications will deploy a variety of broadband connectivity technologies to bring these areas under coverage, with an emphasis on wireless technologies leveraging TV white spaces (e.g., unused TV frequencies) in lower population density or terrain-challenged areas to achieve improved coverage. The areas expected to benefit include 50 counties in Indiana, 22 counties in Illinois, and most counties in Ohio.

“Every person deserves the same opportunity. But too often and in too many places, these opportunities are limited by where people live and their access to reliable and affordable broadband access,” said Shelley McKinley, general manager, Technology and Corporate Responsibility, Microsoft. “Microsoft is working across the country to close this gap. We’re partnering with Watch Communications to improve broadband access in Indiana, Illinois and Ohio and build on the incredible work being done by state and local leaders on this issue on behalf of their citizens.”

“Public-private partnerships, collaboration and understanding local initiatives are key to enabling connectivity success. Providing rural broadband can be difficult, so working as a team to solve the digital divide requires partners. We are excited to partner with Microsoft on this initiative,” said Greg Jarman, chief operating officer, Watch Communications.

Improved connectivity will bolster economic, educational and telehealth opportunities for everyone in the region, and could be particularly impactful for this region’s farmers. Together, Indiana, Illinois and Ohio account for more than $38.5 billion in agricultural value, with all three ranking in the top 16 states by agricultural output, according to the USDA. With broadband access, farmers can take advantage of advanced technologies such as precision agriculture, which can help better monitor crops and increase yields.

In addition, Watch Communications and Microsoft will work together to ensure that once connectivity is available, people know how to use it and can get the training needed to fully participate in the digital economy, access educational opportunities and access telemedicine.

***

State by State View

Indiana

This is Microsoft’s first Airband Initiative deployment in Indiana. The need for improved connectivity is acute — the FCC broadband mapping report shows that more than 673,000 people in Indiana do not have access to broadband, and Microsoft data suggests that more than 4.3 million people are not using the internet at broadband speeds in the state. The partnership between Watch Communications and Microsoft is expected to cover more than 1 million Hoosiers, more than 440,000 of whom are people in rural areas that are currently unserved.

Watch Communications was a recent award winner of funds from the FCC to extend broadband services in Indiana. As a result, Watch Communications has been working with Indiana counties to develop the deployment approach that best meets the needs of the local communities. In addition to broadband, Watch Communications has been working to use its network to design an IoT network to serve Indiana businesses.

This also builds on Microsoft’s presence in Indiana. Last October, Microsoft and the Markle Foundation announced the launch of Skillful Indiana, focused on bringing investment, training, tools, and innovative methods to support workforce development in the state. In addition, the Hope FFA chapter in Indiana was recently awarded Microsoft FarmBeats Student Kits, which will help FFA students develop essential digital skills for precision agriculture and IoT technologies.

Ohio

Watch Communications was a recent award winner of funds from the FCC to extend broadband services in Ohio. As a result, Watch Communications has been working with Ohio counties to develop the deployment approach that best meets the needs of the local communities.

“You can’t be a part of the modern economy or education system without access to high-speed internet, and we are taking steps in Ohio to extend broadband to those who are underserved across the state,” said Lt. Governor Jon Husted. “Thank you to Microsoft for being among the leaders on this and for being willing to consider innovative solutions to help extend opportunity to people in Ohio who need it.”

This is Microsoft’s second Airband Initiative deployment in Ohio, following an August 2018 agreement between Microsoft and Agile Networks. The need for improved connectivity is acute — the FCC broadband mapping report shows that more than 621,000 people in Ohio do not have access to broadband, while Microsoft data suggests that more than 6.9 million people are not using the internet at broadband speeds in the state. The partnership between Watch Communications and Microsoft is expected to cover approximately 2.5 million people, more than 288,000 of whom are people in rural areas that are currently unserved.

This also builds on Microsoft’s presence in Ohio. Microsoft’s TEALS program is helping to deliver computer science education to Ohio students. In addition, the Arcadia FFA chapter and Triad-OHP FFA chapter in Ohio were recently awarded Microsoft FarmBeats Student Kits, which will help FFA students develop essential digital skills for precision agriculture and IoT technologies.

Illinois

This is Microsoft’s second Airband Initiative deployment in Illinois, the first being a September 2018 agreement between Microsoft and Network Business Systems to bring broadband internet to people in Illinois, Iowa and South Dakota. The need for improved connectivity is acute — the FCC broadband mapping report shows that more than 680,000 people in Illinois do not have access to broadband, while Microsoft data suggests that more than 6.6 million people are not using the internet at broadband speeds in the state. The partnership between Watch Communications and Microsoft is expected to cover more than 275,000 people, more than 80,000 of whom are people in rural areas that are currently unserved.

About Watch Communications

Founded in 1992, Watch Communications is an Internet Service Provider (ISP) using a combination of fixed wireless and fiber technologies to serve residential and business customers throughout Ohio, Indiana and Illinois. Watch Communications began as a wireless cable TV provider and expanded service offerings in 1998 to include Internet. Since its creation, Watch Communications has focused on unserved and underserved small and rural markets.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [email protected]

Lindsey Gardner, Watch Communications Media Requests, (419) 999-2824, [email protected]

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at http://news.microsoft.com. Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft’s Rapid Response Team or other appropriate contacts listed at http://news.microsoft.com/microsoft-public-relations-contacts.

Attala Systems tackles RoCE shortcomings for NVMe flash

Attala Systems is taking steps to ease the deployment of its composable storage infrastructure.

The San Jose, Calif., startup claims it has hardened Remote Direct Memory Access over Converged Ethernet (RoCE) networking to tolerate dropped packets on lossy leaf-spine networks. Attala said the technique enables RoCE-based nonvolatile memory express (NVMe) flash storage to span clusters of bare-metal racks.

Attala customers also are testing the company’s multi-tenant hot data lake software that lets disparate workload clusters access shared directories and immutable files, Attala said. That software is scheduled for general availability later in 2018.

Attala and Super Micro Computer have teamed to launch the 1U Intelligent Storage Node on the Enterprise & Data Center SSD Form Factor, an emerging standard for Intel “ruler” SSDs. The flash systems are available directly from Attala now, with a Super Micro SKU to follow.

Attala’s high-performance composable infrastructure is serverless hardware. Compute, networking and storage reside on Attala custom field-programmable gate arrays (FPGAs), designed on a chipset by Intel-owned Altera. Hyperscale providers and companies with large private clouds are the primary target for the novel architecture, in which NVMe SSDs can be mapped to an individual application within a cluster. Attala started hardware shipments in August.

Attala’s NVMe JBOF (just a bunch of flash)

Data loss and retransmission

RoCE-based networking typically gets configured as one switch within a rack. This is largely due to the technological challenge of configuring a multiswitch lossy environment so it behaves as a lossless network. Ordinarily, leaf-spine networks lack a method of flow control to recover data packets lost in transit.

RDMA originated as the transport layer for InfiniBand. Later versions of RDMA technology were adapted for Ethernet networking.

CEO Taufik Ma said Attala Systems added error recovery to RoCE that enables data centers to use standard layer 2 NICs on a lossy network. Attala engineers contributed code to harden the open source Soft-RoCE driver in an effort to “unshackle” NVMe over RoCE from a single server.

“We went in and patched some of the outstanding issues in Soft-RoCE to ease deployment for customers. All they need to do is plug in our FPGA-based storage node and download some upstream Soft-RoCE driver software on the host,” Ma said.

Attala’s patch is part of the upstream kernel and is expected to work its way into Linux distributions, Ma said.

“There is always going to be a very low rate of packet loss when you’re going across multiple racks. What Attala has done is plug in to Soft-RoCE and found a way to detect packet loss and retransmit it,” said Howard Marks, the founder and chief scientist at analyst firm DeepStorage.

Composable infrastructure is a method to pool racks of IT hardware components. Compute, networking and storage are treated as separate resources that applications consume as needed. The objective is to avoid overprovisioning and unused capacity.

Composability has similarities to converged infrastructure and hyper-convergence, but also key differences. Major storage vendors that have released composable infrastructure include Dell EMC, Hewlett Packard Enterprise and Western Digital Corp.

The challenge for Attala Systems is twofold: convincing the market that FPGAs represent the next wave in disaggregated architecture and gaining a foothold within major cloud data centers.

Other challengers are coming out with products that integrate NVMe over Fabrics as a layer of silicon. Kazan Networks has an ASIC-based bridge for insertion on an I/O board, while Solarflare Communications and stealth vendor Lightbits Labs are pushing NVMe over TCP as a transport mechanism. Network specialist Mellanox Technologies is getting in on the action as well, developing a smart network interface controller with ARM cores to handle RAID and deduplication for just a bunch of flash (JBOF) arrays.

Marks said the market hasn’t matured for composable storage products to determine which design will prevail.

“NVMe over Fabrics isn’t deployed yet at large scale. When it starts to threaten SCSI, then people are going to switch whole hog to NVMe. By that time, enterprises will go with NVMe over lossless Fibre Channel, and hyperscalers will go with NVMe over TCP,” Marks said.

Multi-tenant data lake use cases include AI

Ma said the Attala Data Lake software can be used as a stand-alone repository or an add-on module to extend an existing data lake. Intended use cases include write-once, read-many files shared by different application clusters, such as AI training data and network and transaction logs.

The hot data is stored in Attala Systems’ scale-out storage nodes. Requested data is integrated directly in an application’s native file system.

The Attala-Super Micro EDSFF JBOF uses a cut-through chassis with four 50 Gigabit Ethernet ports. The box accepts 32 Intel ruler SSDs. Users can slice up the capacity in increments and assign storage to different namespaces. Some of the capacity needs to be reserved for third-party RAID and data services. Attala said a sample four-node configuration provides up to 4 petabytes of raw storage, 16,384 volumes and up to 22 million IOPS.

Simplifying IT with the latest updates from Windows Autopilot – Microsoft 365 Blog

With Windows Autopilot, our goal is to simplify deployment of new Windows 10 devices by eliminating the cost and complexity associated with creating, maintaining, and loading custom images. Windows Autopilot will revolutionize how new devices get deployed in your organization—now you can deliver new off-the-shelf Windows 10 devices directly to your users. With a few simple clicks, the device transforms itself into a fully business-ready state, dramatically reducing the time it takes for your users to get up and running with new devices.

Not only does Windows Autopilot significantly reduce the cost of deploying Windows 10 devices, but it also delivers an experience that’s magical for users and zero-touch for IT.

I’m excited to share that we are extending that zero-touch experience even further with several new capabilities available in preview with the Windows Insider Program today.

  • Self-Deploying mode—Currently, the Windows Autopilot experience requires the user to select basic settings like Region, Language, and Keyboard, and also enter their credentials, in the Windows 10 out-of-the-box experience. With a new Windows Autopilot capability called “Self-Deploying mode,” we’re extending the zero-touch experience from IT to the user deploying the device. Power on* is all it takes to deploy a new Windows 10 device into a fully business-ready state—managed, secured, and ready for usage—no need for any user interaction. You can configure the device to self-deploy into a locked-down kiosk, digital signage, or a shared productivity device—all it takes is power on.*
  • Windows Autopilot reset—This feature extends the zero-touch experience from deployment of new Windows 10 devices to reset scenarios where a device is being repurposed for a new user. We’re making it possible to completely reset and redeploy an Intune-managed Windows 10 device into a fully business-ready state without having to physically access the device. All you need to do is click a button in Intune!

Windows Insiders can test these features with the latest Windows 10 build and Microsoft Intune now.

I cannot wait to see the feedback from the Insider community! To see how this works, and several exciting updates to Windows Autopilot, check out this quick video:

 Source video.

You can head over to the Windows IT Pro blog right now for further details.

One final note: A big part of what we build is based on feedback from our customers. With this in mind, we also added several new Windows Autopilot capabilities into the Windows 10 April 2018 Update (version 1803) based on feedback, and these capabilities are also available today:

  • Enrollment Status page—We received tons of feedback from Windows Autopilot customers who want the ability to hold the device in the out-of-box setup experience until the configured policies and apps have been provisioned to the device. This enables IT admins to be assured the device is configured into a fully business-ready state prior to users getting to the desktop. This is made possible with a capability called “Enrollment Status” and is available today with Windows 10 April 2018 Update (version 1803) and Microsoft Intune.
  • Device vendor supply chain integration—We enabled Windows 10 OEMs and hardware vendors to integrate Windows Autopilot into their supply chain and fulfillment systems so that devices are registered in Windows Autopilot to your organization the moment your purchase is fulfilled. This makes the registration of Windows Autopilot devices completely hands-free and zero-touch for you as well as your device vendor/OEM. Contact your device reseller to find out if they are supporting Windows Autopilot; if yours doesn’t yet, a manual registration option is sketched after this list.
  • Automatic Windows Autopilot profile assignment—We integrated Azure Active Directory (AD) dynamic groups with Windows Autopilot and Microsoft Intune to deliver a zero-touch experience for Windows Autopilot profile assignments on all Windows Autopilot devices.
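
For resellers that don’t yet register devices for you, you can harvest a device’s hardware hash yourself and upload it for registration. A minimal sketch using the Get-WindowsAutoPilotInfo script from the PowerShell Gallery, run in an elevated PowerShell session on the device (the output file name is illustrative):

# Install the published harvesting script, then export the hardware hash.
Install-Script -Name Get-WindowsAutoPilotInfo -Force
Get-WindowsAutoPilotInfo.ps1 -OutputFile .\AutopilotHWID.csv
# Upload AutopilotHWID.csv in Intune to register the device with Autopilot.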

I said this in my prior post and I’ll say it again—Windows Autopilot is an absolute game changer. I urge you to spend some time learning more about it.

To learn more about how to use Windows Autopilot and Co-Management together, check out this quick video.

*Requires network connection and TPM2.0.

Check configuration settings for SCVMM service templates

An incorrect deployment of SCVMM service templates could result in errors, but if you validate the configuration before you deploy service templates in a production environment, you can avoid having to reconfigure them. Fortunately, there are multiple ways to validate service templates. For example, you could use SCVMM Manager or PowerShell cmdlets to ensure that your service templates are properly configured. Service templates exist to define the configuration of a service and can deploy a group of VMs together; the VMs in a service template might have the same configuration, or different configurations.

You can create and save service templates in the SCVMM Manager. The SCVMM Manager validates all settings and will issue a warning if it encounters any configuration errors. However, the SCVMM Manager only validates service templates that you are in the process of saving. PowerShell validates service templates you’ve already saved and displays errors or warnings in a PowerShell window. Use the Test-SCServiceTemplate cmdlet to test the service template configuration in PowerShell with this script:

$ThisTemplate = Get-SCServiceTemplate -Name "Tier3VMsTemplate"
$ErrorsOrWarnings = Test-SCServiceTemplate -ServiceTemplate $ThisTemplate
$ErrorsOrWarnings.ValidationErrors[0]

Note that Test-SCServiceTemplate returns an object with a ValidationErrors property, which stores any errors or warnings that the second PowerShell command reports. The [0] index retrieves the first error or warning in that array.

This PowerShell command only tests one service template at a time. If you want to validate configuration settings across all service templates, use the following script:

$AllVMTemplates = Get-SCServiceTemplate -Name *
ForEach ($Template in $AllVMTemplates)
{
    $ErrorsOrWarnings = Test-SCServiceTemplate -ServiceTemplate $Template
    IF ($ErrorsOrWarnings.ValidationErrors.Count -gt 0)
    {
        Write-Host "VMM service template errors occurred in:" $Template.Name
        $ErrorsOrWarnings.ValidationErrors
    }
}

The first PowerShell command collects and stores all service templates in the $AllVMTemplates variable, and the ForEach statement then loops through the templates and validates them one by one. The IF condition checks whether ValidationErrors.Count is greater than 0; if it is, PowerShell prints the service template name on-screen along with any errors or warnings encountered during the validation.

SCVMM is modular in design, so make sure that your IT staff uses the same SCVMM Manager to validate service templates before they start a deployment.

Next Steps

Back up and export SCVMM service templates

Update VM hosts with SCVMM Maintenance Mode

Learn about changes to Service Center 2016
