Q&A: SwiftStack object storage zones in on AI, ML, analytics

SwiftStack founder Joe Arnold said the company’s recent layoffs reflected a change in its sales focus but not in its core object storage technology.

San Francisco-based SwiftStack attributed the layoffs to a switch in use cases from classic backup and archiving to newer artificial intelligence, machine learning and analytics. Arnold said the staffing changes had no impact on the engineering and support team, and the core product will continue to focus on modern applications and complex workflows that need to store lots of data.

“I’ve always thought of object storage as a data as a service platform more than anything else,” said Arnold, SwiftStack’s original CEO and current president and chief product officer.

TechTarget caught up with Arnold to talk about customer trends and the ways SwiftStack is responding in an increasingly cloud-minded IT world. Arnold unveiled product news about SwiftStack adding Microsoft Azure as a target for its 1space technology, which facilitates a single namespace between object storage locations for cloud platform compatibility. The company already supported Amazon S3 and Google.

SwiftStack’s storage software, which is based on open source OpenStack Swift, runs on commodity hardware on premises, but the 1space technology can run in the public cloud to facilitate access to public and private cloud data. Nearly all of SwiftStack’s estimated 125 customers have some public cloud footprint, according to Arnold.

Arnold also revealed a new distributed, multi-region erasure code option that can enable customers to reduce their storage footprint.

What caused SwiftStack to change its sales approach?

Joe Arnold: At SwiftStack, we’ve always been focused on applications that are in the data path and mission critical to our customers. Applications need to generate more value from the data. People are distributing data across multiple locations, between the public cloud and edge data locations. That’s what we’ve been really good at. So, the change of focus with the go-to-market path has been to double down on those efforts rather than what we had been doing.

How would you compare your vision of object storage with what you see as the conventional view of object storage?

Arnold: The conventional view of object storage is that it’s something to put in the corner. It’s only for cold data that I’m not going to access. But, that’s not the reality of how I was brought up through object storage. My first exposure to object storage was building platforms [on] Amazon Web Services when they introduced S3. We immediately began using that as the place to store data for applications that were directly in the data path.

Didn’t object storage tend to address backup and archive use cases because it wasn’t fast enough for primary workloads?

Arnold: I wouldn’t say that. Our customers are using their data for their applications. That’s usually a large data set that can’t be stored in traditional ways. Yes, we do have customers that use [SwiftStack] for purely cold archive and purely backup. In fact, we have features and capabilities to enhance some of the cold storage capabilities of the product. What we’ve changed is our go-to-market approach, not the core product.

So, for example, we’re adding a distributed, multi-region erasure code storage policy that customers can use across three data centers for colder data. It allows entire segments of data — data bits and parity bits — to be distributed across multiple sites and, to retrieve data, only two of the data centers need to be online.

How does the new erasure code option differ from what you’ve offered in the past?

Arnold: Before, we offered the ability to use erasure code where each site could fully reconstruct the data. A data center could be offline, and you could still reconstruct fully. Now, with this new approach, you can store data more economically, but it requires two of three data centers to be online. It’s just another level of efficiency in our storage tier. Customers can distribute data across more data centers without using as much raw storage footprint and still have high levels of durability and availability. Since we’re building out storage workflows that tier up and down across different storage tiers, they can utilize this one for their most cold data storage policies.
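To make the footprint difference concrete, here is a back-of-the-envelope comparison in PowerShell. The policy parameters (a 9-data/3-parity code per site versus an 8-data/4-parity code spread across three sites) are illustrative assumptions, not SwiftStack’s published numbers:

```powershell
# Back-of-the-envelope raw-capacity comparison for 100 TB of usable data.
# All policy parameters below are illustrative assumptions, not SwiftStack's
# published figures.
$usableTB = 100

# Old approach: each of 3 sites stores a full 9-data/3-parity EC copy,
# so any single site can rebuild everything on its own.
$perSiteEC = $usableTB * (12 / 9) * 3        # 400 TB raw

# New approach: one 8-data/4-parity code spread evenly across 3 sites
# (4 fragments per site). Losing any one site leaves 8 fragments, still
# enough to reconstruct, so only 2 of 3 sites must be online.
$distributedEC = $usableTB * (12 / 8)        # 150 TB raw

"Per-site EC:    {0} TB raw" -f $perSiteEC
"Distributed EC: {0} TB raw" -f $distributedEC
```

Under these assumed parameters, the distributed policy cuts raw capacity from roughly 4x to 1.5x the usable data, at the cost of requiring two sites online for reads.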

Does the new erasure coding target users who strictly do archiving, or will it also benefit those doing AI and analytics?

Arnold: They absolutely need it. Data goes back and forth between their core data center, the edge and the public cloud in workflows such as autonomous vehicles, personalized medicine, telco and connected city. People need to manage data between different tiers as they’re evolving from more traditional-based applications into more modern, cloud-native type applications. And they need this ultra-cold tier.

How similar is this cold tier to Amazon Glacier?

Arnold: From a cost point of view, it will be similar. From a performance point of view, it’s much better. From a data availability point of view, it’s much better. It costs a lot of money to egress data out of something like AWS Glacier.

How important is flash technology in getting performance out of object storage?

Arnold: If the applications care about concurrency and throughput, particularly when it comes to a large data set, then a disk-based solution is going to satisfy their needs. Because the SwiftStack product’s able to distribute requests across lots of disks at the same time, they’re able to sustain the concurrency and throughput. Sure, they could go deploy a flash solution, but that’s going to be extremely expensive to get the same amount of storage footprint. We’re able to get single storage systems that can deliver a hundred gigabytes a second aggregate read-write throughput rates. That’s nearly a terabit of throughput across the cluster. That’s all with disk-based storage.

What do you think of vendors such as Pure Storage offering flash-based options with cheaper quad-level cell (QLC) flash that compares more favorably price-wise to disk?

Arnold: QLC flash is great, too. We support that as well in our product. We’re not dogmatic about using or not using flash. We’re trying to solve large-footprint problems of our customers. We do have customers using flash with a SwiftStack environment today. But they’re using it because they want reduced latencies across a smaller storage footprint.

How do you see demand for AWS, Microsoft and Google based on customer feedback?

Arnold: People want options and flexibility. I think that’s the reason why Kubernetes has become popular, because that enables flexibility and choice between on premises and the public cloud, and then between public clouds. Our customers were asking for the same. We have a number of customers focused on Microsoft Azure for their public cloud usage. And they want to be able to manage SwiftStack data between their on-premises environments with SwiftStack and the public cloud. So, we added the 1space functionality to include Azure.

What tends to motivate your customers to use the public cloud?  

Arnold: Some use it because they want to have disaster recovery ready to go up in the public cloud. We will mirror a set of data and use that as a second data center if they don’t already have one. We have customers that collect data from partners or devices out in the field. The data lands in the public cloud, and they want to move it to their on-premises environment. The other example would be customers that want to use the public cloud for compute resources where they need access to their data, but they don’t want to necessarily have long-term data storage in the public clouds. They want the flexibility of which public cloud they’re going to use for their computation and application runtime, and we can provide them connections to the storage environment for those use cases.

Do you have customers who have second thoughts about their cloud decisions due to egress and other costs?

Arnold: Of course. That happens in all directions. Sometimes you’re helping people move more stuff into the public cloud. In some situations, you’re pulling down data, or maybe it’s going in between clouds. They may have had a storage footprint in the public cloud that was feeding to some end users or some computation process. The egress charges were getting too high. The footprint was getting too high. And that costs them a tremendous amount month over month. That’s where we have the conversation. But it still doesn’t mean that they need to evacuate entirely from the public cloud. In fact, many customers will keep the storage on premises and use the public cloud for what it’s good at — more burstable computation points.

What’s your take on public cloud providers coming out with various on-premises options, such as Amazon Outposts and Azure Stack?

Arnold: It’s the trend of ‘everything as a service.’ I think what customers want is a managed experience. The number of operators who are able to manage these big environments is becoming harder and harder to come across. So, it’s a natural for those companies to offer a managed on-premises product. We feel the same way. We think that managing large sets of infrastructure needs to be highly automated, and we’ve built our product to make that as simple as possible. And we offer a product to do storage as a service on premises for customers who want us to do remote operations of their SwiftStack environments.

How has Kubernetes- and container-based development affected the way you design your product?

Arnold: Hugely. It impacts how applications are being developed. Kubernetes gives an organization the flexibility to deploy an application in different environments, whether that’s core data centers, bursting out into the public cloud or crafting applications out to the edge. At SwiftStack, we need to make the data just as portable as the containerized application is. That’s why we developed 1space. A huge number of our customers are using Kubernetes. That just naturally lends itself to the use of something like 1space to give them the portability they need for access to their data.

What gaps do you need to fill to more fully address what customers want to do?

Arnold: One is further [fleshing] out ‘everything as a service.’ We just launched a service around that. As more customers adopt that, we’re going to have more work to do, as the deployments become more diverse across not just core data centers, but also edge data centers.

I see the convergence of file and object workflows and furthering 1space with our edge-to-core-to-cloud workflows. Particularly in the world of high-performance data analytics, we’re seeing the need for object — but it’s a world that is dominated by file-based applications. Data gets pumped into the system by robots, and object storage is awesome for that because it’s easy and you get lots of concurrency and lots of parallelism. However, you see humans building out algorithms and doing research and development work. They’re using file systems to do much of their programming, particularly in this high performance data analytics world. So, managing the convergence between file and object is an important thing to do to solve those use cases.

Go to Original Article

AIOps meaning to expand throughout DevOps chain

It seems that every year there’s a new record for the pace of change in IT, from the move from mainframe to client/server computing, to embracing the web and interorganizational data movements. The current moves that affect organizations are fundamental, and IT operations had better pay attention.

Cloud providers are taking over ownership of the IT platform from organizations. Organizations are moving to a multi-cloud hybrid platform to gain flexibility and the ability to quickly respond to market needs. Applications have started to transition from monolithic entities to composite architectures built on the fly in real time from collections of functional services. DevOps has affected how IT organizations write, test and deliver code, with continuous development and delivery relatively mainstream approaches.

These fundamental changes mean that IT operations managers have to approach the application environment in a new way. Infrastructure health dashboards don’t meet their needs. Without deep contextual knowledge of how the platform looks at an instant, and what that means for performance, administrators will struggle to address issues raised.

Enter AIOps platforms

AIOps means IT teams use artificial intelligence to monitor the operational environment and rapidly and automatically remediate any problems that arise — and, more to the point, prevent any issues in the first place.

True AIOps-based management is not easy to accomplish. It’s nearly impossible to model an environment that continuously changes and then also plot all the dependencies between hardware, virtual systems, functional services and composite apps.

AIOps use cases

However, AIOps does meet a need. It is, as yet, a nascent approach. Many AIOps systems do not really use much artificial intelligence; they instead rely on advanced rules and policy engines to automatically remediate commonly known and expected issues. AIOps vendors collect information on operations issues from across their respective customer bases to make the tools more useful.

Today’s prospective AIOps buyers must beware of portfolio repackaging: an ‘AIOps’ label on the branding doesn’t mean the product uses true artificial intelligence. Question the vendor carefully about how its system learns on the go, deals with unexpected changes and manages idempotency. 2020 might be the year of AIOps’ rise, but it might also be littered with the corpses of AIOps vendors that get things wrong.

AIOps’ path for the future

As we move through 2020 and beyond, AIOps’ meaning will evolve. Tools will better adopt learning systems to model the whole environment and will start to use advanced methods to bring idempotency — the capability to define an end result and then ensure that it is achieved — to the fore. AIOps tools must be able to either take input from the operations team or from the platform itself and create the scripts, VMs, containers, provisioning templates and other details to meet the applications’ requirements. The system must monitor the end result from these hosting decisions and ensure that not only is it as-expected, but that it remains so, no matter how the underlying platform changes. Over time, AIOps tools should extend so that business stakeholders also have insights into the operations environment.

Such capabilities will mean that AIOps platforms move from just operations environment tool kits to part and parcel of the overall BizDevOps workflows. AIOps will mean an overarching orchestration system for the application hosting environment, a platform that manages all updates and patches, and provides feedback loops through the upstream environment.

The new generation of AIOps tools and platforms will focus on how to avoid manual intervention in the operations environment. Indeed, manual interventions are likely to be where AIOps could fail. For example, an administrator who puts wrong information into the flow or works outside of the AIOps system to make any configuration changes could start a firestorm of problems. When the AIOps system tries to fix them, it will find that it does not have the required data available to effectively model the change the administrator has made.

2020 will see AIOps’ first baby steps to becoming a major tool for the systems administrator. Those who embrace the idea of AIOps must ensure that they have the right mindset: AIOps has to be the center of everything. Only in extreme circumstances should any action be taken outside of the AIOps environment.

The operations team must reach out to the development teams to see how their feeds can integrate into an AIOps platform. If DevOps tools vendors realize AIOps’ benefits, they might provide direct integrations for downstream workflows or include AIOps capabilities into their own platform. This trend could expand the meaning of AIOps to include business capabilities and security as well.

As organizations move to highly complex, highly dynamic platforms, any dependency on a person’s manual oversight dooms the deployment to failure. Simple automation will not be a workable way forward — artificial intelligence is a must.

For Sale – 2015 iMac 5K – 24gb Ram, i5, 1tb, Boxed.

Hi all,

Due to a job change, resulting in a work PC, my iMac 5K is surplus to requirements. It’s in great condition, no issues with the screen, and will come wiped ready for the new user to set up (as new from Apple). I have the Magic Mouse and keyboard to go with it (the ones with a Lightning charger), along with the original box.
Specs are as follows:
3.2GHz Core i5,
24GB DDR3,
1TB hard drive,
Radeon R9 M380.

Pictures below show the rest – please note I was using an external Samsung T5 SSD to boot from, which I have subsequently removed. I could include this in the sale if requested.

Any questions please ask.

Collection only from Clapham Junction, London.

How to Resize Virtual Hard Disks in Hyper-V

We get lots of cool tricks with virtualization. Among them is the ability to change our minds about almost any provisioning decision. In this article, we’re going to examine Hyper-V’s ability to resize virtual hard disks. Both Hyper-V Server (2016+) and Client Hyper-V (Windows 10) have this capability.

An Overview of Hyper-V Disk Resizing

Hyper-V uses two different formats for virtual hard disk files: the original VHD and the newer VHDX. 2016 added a brokered form of VHDX called a “VHD Set”, which follows the same resize rules as VHDX. We can grow both the VHD and VHDX types easily. We can shrink VHDX files with only a bit of work. No supported way exists to shrink a VHD. Once upon a time, a tool was floating around the Internet that would do it. As far as I know, all links to it have gone stale.

You can resize any of Hyper-V’s three layout types (fixed, dynamically expanding, and differencing). However, you cannot resize an AVHDX file (a differencing disk automatically created by the checkpoint function).

Resizing a virtual disk file only changes the file. It does not impact its contents. The files, partitions, formatting — all of that remains the same. A VHD/X resize operation does not stand alone. You will need to perform additional steps for the contents.

Requirements for VHD/VHDX Disk Resizing

The resize operation must occur on a system with Hyper-V installed. The tools rely on a service that only exists with Hyper-V.

If no virtual machine owns the virtual disk, then you can operate on it directly without any additional steps.

If a virtual hard disk belongs to a virtual machine, the rules change a bit:

  • If the virtual machine is Off, any of its disks can be resized as though no one owned them
  • If the virtual machine is Saved or has checkpoints, none of its disks can be resized
  • If the virtual machine is Running, then there are additional restrictions for resizing its virtual hard disks

Special Requirements for Shrinking VHDX

Growing a VHDX doesn’t require any changes inside the VHDX. Shrinking needs a bit more. Sometimes, quite a bit more. The resize directions that I show in this article will grow or shrink a virtual disk file, but you have to prepare the contents before a shrink operation. We have another article that goes into detail on this subject.

Can I Resize a Hyper-V Virtual Machine’s Virtual Hard Disks Online?

A very important question: do you need to turn off a Hyper-V virtual machine to resize its virtual hard disks? The answer: sometimes.

  • If the virtual disk in question is the VHD type, then no, it cannot be resized online.
  • If the VM attached the disk in question to its virtual IDE chain, then no, you cannot resize the virtual disk while the virtual machine is online.
  • If the VM attached the disk in question to its virtual SCSI chain, then yes, you can resize the virtual disk while the virtual machine is online.

Does Online VHDX Resize Work with Generation 1 Hyper-V VMs?

The generation of the virtual machine does not matter for virtual hard disk resizing. If the virtual disk is on the virtual SCSI chain, then you can resize it online.

Does Hyper-V Virtual Disk Resize Work with Linux Virtual Machines?

The guest operating system and file system do not matter. Different guest operating systems might react differently to a resize event, and the steps that you take for the guest’s file system will vary. However, the act of resizing the virtual disk does not change.

Do I Need to Connect the Virtual Disk to a Virtual Machine to Resize It?

Most guides show you how to use a virtual machine’s property sheet to resize a virtual hard disk. That might lead to the impression that you can only resize a virtual hard disk while a virtual machine owns it. Fortunately, you can easily resize a disconnected virtual disk. Both PowerShell and the GUI provide suitable methods.

How to Resize a Virtual Hard Disk with PowerShell

PowerShell is the preferred method for all virtual hard disk resize operations. It’s universal, flexible, scriptable, and, once you get the hang of it, much faster than the GUI.

The cmdlet to use is Resize-VHD:
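For example, to grow a VHDX to 40GB (the file path here is a placeholder for illustration; substitute your own):

```powershell
# Grow (or, for VHDX, shrink) a virtual hard disk to the requested size.
# The path and the 40gb target are placeholder values.
Resize-VHD -Path 'C:\LocalVMs\testvm\testdisk.vhdx' -SizeBytes 40gb
```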

The VHDX that I used in the sample began life at 20GB. Therefore, the above cmdlet will work as long as I did at least one of the following:

  • Left it unconnected
  • Connected it to a VM’s virtual SCSI controller
  • Turned the connected VM off

Notice the gb suffix on the SizeBytes parameter. PowerShell natively provides that feature; the cmdlet itself has nothing to do with it. PowerShell will automatically translate suffixes as necessary. Be aware that 1kb equals 1,024, not 1,000 (and both b and B mean “byte”).

Had I used a number for SizeBytes smaller than the current size of the virtual hard disk file, I might have had some trouble. Each VHDX has a specific minimum size dictated by the contents of the file. See the discussion on shrinking at the end of this article for more information. Quickly speaking, the output of Get-VHD includes a MinimumSize field that shows how far you can shrink the disk without taking additional actions.
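To check that floor before attempting a shrink, query the file directly (the path is a placeholder for illustration):

```powershell
# Report the current virtual size and the minimum shrinkable size of a
# VHDX. Both values are in bytes. The path is a placeholder.
Get-VHD -Path 'C:\LocalVMs\testvm\testdisk.vhdx' |
    Select-Object Path, Size, MinimumSize
```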

This cmdlet only affects the virtual hard disk’s size. It does not affect the contained file system(s). We will cover that part in an upcoming section.

How to Resize a Disconnected Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager allows you to resize a virtual hard disk whether or not a virtual machine owns it.

  1. From the main screen of Hyper-V Manager, first, select a host in the left pane. All VHD/X actions are carried out by the hypervisor’s subsystems, even if the target virtual hard disk does not belong to a specific virtual machine. Ensure that you pick a host that can reach the VHD/X. If the file resides on SMB storage, delegation may be necessary.
  2. In the far right Actions pane, click Edit Disk.
  3. The first page is information. Click Next.
  4. Browse to (or type) the location of the disk to edit.
  5. The directions from this point are the same as for a connected disk, so go to the next section and pick up at step 6.

Note: Even though these directions specify disconnected virtual hard disks, they can be used on connected virtual disks. All of the rules mentioned earlier apply.

How to Resize a Virtual Machine’s Virtual Hard Disk with Hyper-V Manager

Hyper-V Manager can also resize virtual hard disks that are attached to virtual machines.

  1. If the virtual hard disk is attached to the VM’s virtual IDE controller, turn off the virtual machine. If the VM is saved, start it. If the VM has checkpoints, remove them.
  2. Open the virtual machine’s Settings dialog.
  3. In the left pane, choose the virtual disk to resize.
  4. In the right pane, click the Edit button in the Media block.
  5. The wizard will start by displaying the location of the virtual hard disk file, but the page will be grayed out. Otherwise, it will look just like the screenshot from step 4 of the preceding section. Click Next.
  6. Choose to Expand or Shrink the virtual hard disk. Shrink only appears for VHDXs or VHDSs, and only if they have unallocated space at the end of the file. If the VM is off, you will see additional options. Choose the desired operation and click Next.
  7. If you chose Expand, it will show you the current size and give you a New Size field to fill in. It will display the maximum possible size for this VHD/X’s file type. All values are in GB, so you can only change in GB increments (use PowerShell if that’s not acceptable).
    If you chose Shrink (VHDX only), it will show you the current size and give you a New Size field to fill in. It will display the minimum possible size for this file, based on its contents. Again, all values are in GB, so you can only change in GB increments.
  8. Enter the desired size and click Next.
  9. The wizard will show a summary screen. Review it to ensure accuracy. Click Finish when ready.

The wizard will show a progress bar. That might happen so briefly that you don’t see it, or it may take some time. The variance will depend on what you selected and the speed of your hardware. Growing fixed disks will take some time; shrinking disks usually happens almost instantaneously. Assuming that all is well, you’ll be quietly returned to the screen that you started on.

This change only affects the virtual hard disk’s size. It does not affect the contained file system(s). We will cover that in the next sections.

Following Up After a Virtual Hard Disk Resize Operation

When you grow a virtual hard disk, only the disk’s parameters change. Nothing happens to the file system(s) inside the VHD/X. For a growth operation, you’ll need to perform some additional action. For a Windows guest, that typically means using Disk Management to extend a partition:

Note: You might need to use the Rescan Disks operation on the Action menu to see the added space.

Of course, you could also create a new partition (or partitions) if you prefer.
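The same extension can be scripted inside a Windows guest with the in-box Storage cmdlets (the drive letter is a placeholder for illustration):

```powershell
# Extend the C: partition to consume the newly added space inside a
# Windows guest. The drive letter is a placeholder.
$max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $max
```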

Linux distributions have a wide variety of file systems with their own requirements for partitions and sizing. They also have a plenitude of tools to perform the necessary tasks. Perform an Internet search for your distribution and file system.

VHDX Shrink Operations

As previously mentioned, you can’t shrink a VHDX without making changes to the contained file system first. Review our separate article for steps.

What About VHD/VHDX Compact Operations?

I often see confusion between shrinking a VHD/X and compacting a VHD/X. These operations are unrelated. When we talk about resizing, then the proper term for reducing the size of a virtual hard disk is “shrink”. That changes the total allocated space of the contained partitions. “Compact” refers to removing the zeroed blocks of a dynamically expanding VHD/VHDX so that it consumes less space on physical storage. Compact makes no changes to the contained data or partitions. We have an article on compacting VHD/Xs that contain Microsoft file systems and another for compacting VHD/Xs with Linux file systems.

Note: this page was originally published in January 2018 and has been updated to be relevant as of December 2019.

Author: Eric Siron

Put infrastructure automation at the heart of modern IT ops

LAS VEGAS — Successful IT leaders know how to navigate change. In 2020 and beyond, that skill will almost certainly be put to the test.

To remain relevant and competitive in the years to come, enterprises need to embrace top IT trends around infrastructure automation, hybrid management tools and DevOps — despite the growing pains they’ll inevitably face along the way. How to do just that was the subject of discussions at Gartner’s IT Infrastructure, Operations & Cloud Strategies Conference here this month.

Automate, automate, automate

In search of greater agility, and faced with demands to do more with less, enterprises should increasingly lean on automation, from a business process standpoint and in IT infrastructure. 

“Automation is to modern infrastructure what blood is to the body,” said Dennis Smith, research vice president at Gartner, at the event. You simply can’t run one without the other, particularly as IT teams manage complex and heterogenous infrastructure setups.

At this point, the benefits of IT automation are well-established: a reduced risk of human error and an IT staff that has the time to work on more strategic, higher-level, involved tasks. But making the shift from manual to automated IT management practices in configuration management, capacity planning and other critical tasks can be tricky. And it won’t be easy to staff up with the requisite skill sets.

That’s a challenge for Keith Hoffman, manager of the data network at Sharp HealthCare, based in San Diego, Calif. “My team is made up of legacy network engineers, not programmers,” Hoffman said. “But we’ve talked about [the need for automation], and we see the writing on the wall.”

Hoffman’s team currently has some ad hoc instances of IT infrastructure automation via scripts, but the goal is to scale those efforts, particularly for automated configuration management. They’ve taken training courses on select automation and orchestration tools and turned to Sharp HealthCare’s internal DevOps team to learn and apply automation best practices.

To help close skill gaps, and to encourage broader IT automation efforts, enterprises should appoint a dedicated automation architect, said Ross Winser, senior director analyst at Gartner. The automation architect should help IT navigate the ways it can achieve infrastructure automation — infrastructure as code, site reliability engineering, a move away from traditional scripts in favor of AIOps practices — as well as the vast ecosystem of tools and vendors that support those efforts.

Hybrid IT management

Hybrid environments are a perennial IT trend. As these setups become even more complex, IT ops leaders need to further refine their infrastructure management practices.

Increasingly, one enterprise IT workload can span multiple infrastructure environments: dedicated data centers, managed hosting and colocation facilities, the public cloud and edge computing locations. While these hybrid environments offer deployment flexibility, they also complicate troubleshooting, incident response and other core IT operations tasks. When a failure occurs, the complex string of system connections and dependencies makes it difficult to pinpoint a cause.

“The ability to actually get to an answer is getting harder and harder,” Winser said.

Many operations teams still rely on disparate tool sets to monitor and manage resources in hybrid IT, adding to this complexity.  

“I think the challenge is having a common tool set,” said Kaushal Shah, director of infrastructure and information security at Socan, a performance rights organization for the music industry, based in Toronto. “What we do internally at the infrastructure layer is a lot of scripting and CLI-driven configurations, whereas in the cloud, every cloud provider has their own CLI.”

Shah — whose team runs IT infrastructure on premises and in AWS and Microsoft Azure — said he’s evaluating infrastructure-as-code tools like HashiCorp Terraform and Red Hat Ansible to “level the playing field” and provide a centralized way to manage resource configurations. Adoption poses a learning curve.

“[My team members] have a certain background and I think, for these tools, it’s a different mindset: You have to describe the state of the asset as opposed to how to configure it,” Shah said.
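That mindset shift can be sketched in a few lines. The toy reconciler below (not a real Terraform or Ansible API; the resource names are invented) shows the declarative model Shah describes: you state the desired end state, and the tool computes which actions close the gap, instead of you scripting each step.

```python
# Illustrative sketch of desired-state reconciliation, the model behind
# infrastructure-as-code tools. Resource names and specs are hypothetical.

def reconcile(desired: dict, actual: dict) -> list:
    """Compute the actions needed to move 'actual' to match 'desired'."""
    actions = []
    for resource, spec in desired.items():
        if resource not in actual:
            actions.append(f"create {resource} -> {spec}")
        elif actual[resource] != spec:
            actions.append(f"update {resource} -> {spec}")
    for resource in actual:
        if resource not in desired:
            actions.append(f"delete {resource}")
    return actions

desired = {"web-vm": {"size": "medium"}, "db-vm": {"size": "large"}}
actual = {"web-vm": {"size": "small"}, "old-vm": {"size": "small"}}
for action in reconcile(desired, actual):
    print(action)
```

The same desired-state file works whether the resources live on premises or in AWS or Azure, which is the "level playing field" such tools aim to provide.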

Gartner notes that many of these integrated management tool sets are still in the early days of truly centralized, end-to-end hybrid management capabilities. Infrastructure and operations teams should also use workflow visualization and dependency mapping, and create internal centers of excellence, to combat hybrid IT challenges.

Scalable DevOps

For many IT shops, DevOps implementation is a significant priority. For those that have a DevOps foundation in place, the next big challenge is to ensure it can scale.

It’s common practice for enterprises to get started with DevOps via pilot or incubation programs. And while there’s nothing wrong with this approach, it can impede IT leaders’ ability to establish DevOps as a broad, cross-functional practice throughout the organization. A DevOps team, or even multiple DevOps teams, can sprout up and work as silos within the organization — perpetuating the very barriers DevOps sought to break down in the first place.

Shared self-service platforms are one way to ensure a DevOps practice can scale, remain responsive and span different functional groups, Winser said. Think of them as a digital toolbox from which DevOps teams, including infrastructure and operations practitioners, can access and explore tools relevant to their role. A shared tool set promotes consistency and common best practices throughout the organization.

In addition, site reliability engineers — who work with application developers to ensure highly reliable systems — can promote scalability within DevOps shops. Automation also plays a role.  

Socan’s Shah sees DevOps as a way to transition his team out of “fire-fighting mode” and into a position where they work hand in hand with developers and business stakeholders. Combined with streamlined hybrid IT management and infrastructure automation, DevOps could move the team from a technology-focused mindset to a platform-ops approach, where developers interact with infrastructure through API-driven interfaces rather than filing IT tickets to build things.

Go to Original Article

For Sale – 2015 iMac 5K – 24GB RAM, i5, 1TB, Boxed.

Hi all,

Due to a job change, resulting in a work PC, my iMac 5K is surplus to requirements. It’s in great condition, no issues with the screen, and will come wiped ready for the new user to set up (as new from Apple). I have the Magic Mouse and Keyboard to go with it (the ones with a Lightning charger), along with the original box.
Specs are as follows:
3.2GHz Core i5,
24GB DDR3,
1TB hard drive,
Radeon R9 M380.

Pictures below show the rest – please note I was using an external Samsung T5 SSD to boot from, which I have subsequently removed. I could include this in the sale if requested.

Any questions please ask.

Collection only from Clapham Junction, London.


For Sale – Zotac Geforce RTX2060, 2.5 months old, boxed as new

Change of plans, so I have removed the PC listing (see archive) and am now just selling the GPU.


Bought 6th September 2019. Utterly perfect condition, literally removed from wrapper and fitted into PC case.

Balance of 5 year warranty, registered in my name so I’m happy to help if required. Absolutely no issues, works perfectly.

Pics show it still fitted to my PC along with the box.


Customize Excel and track notes in Outlook—here’s what’s new to Microsoft 365 in November

In today’s workplace, change is the new normal. To keep up, we all need to evolve and improve. Last month at the Ignite conference in Orlando, Florida, we announced a ton of Microsoft 365 innovations designed to put artificial intelligence (AI) and automation technologies to work for you. And we’ll continue to innovate across the Microsoft 365 experience, so our customers always have the best tools to navigate an increasingly distributed and fast-paced world. But to succeed at work today, organizations need more than great tools. They need to foster a culture of learning where their people can continue to develop essential skills. We want to help, so this month we introduced The Art of Teamwork toolkit, an interactive curriculum that uses the five attributes of the world’s most successful teams to help your team create and foster healthy team dynamics. We hope you’ll use it—and future educational support coming your way in 2020—to help your organization continue to succeed.

Let’s take a look at what else is new in November.

New features for personal productivity and collaboration

App updates to give you more choice and help you stay in the flow of work across devices and apps.

Keep track of Sticky Notes in Outlook on the web—Sticky Notes allows you to capture ideas, notes, and important info across the apps you already use. Now you can conveniently view, edit, and create notes directly in Outlook on the web, making it easier than ever to keep track of your notes as you go through email. Sticky Notes in Outlook on the web will begin rolling out next month to all users.

Animated image of Sticky Notes being used in Outlook on the web.

Switch to a darker OneNote canvas with Dark Mode—From complex travel schedules to killer meal plans, OneNote is like a second brain to help you track it all. So it should look the way you want. We’re excited to announce that a Dark Mode option is now rolling out for OneNote 2016. Dark Mode makes both the product and your notes more legible, improves contrast, and can reduce eye strain in low-light environments. Dark Mode is available for all Office 365 subscribers and non-volume licensing Office 2019 customers.

Also, in response to feedback over the past year, we’re pleased to announce that we’re continuing mainstream support for OneNote 2016 beyond October 2020—so you can continue using the version of OneNote that works best for you.

Animated image of Dark Mode being used in OneNote.

Collaborate without disrupting a shared workbook with Sheet View—Earlier this month, we announced Sheet View in Excel, a new way of letting users create customized views without disrupting others, so collaboration is seamless. Sheet View allows users to sort and filter the data they need, and then select an option to make those changes visible just to themselves or to everyone working in the document. If you choose to make changes just for yourself, your filtering and sorting will not affect other collaborators’ view of the workbook. All your cell-level edits propagate through the file regardless of your view, so you can make all your edits right in your personal Sheet View. Sheet View is rolling out to all users of Excel on the web over the next few weeks.

Animated image of Sheet View being selected by an Excel user.

Upload files to Forms questions for added context—Sometimes you’d like respondents to a form to upload or attach files to provide important information or context when answering questions. Now Microsoft Forms lets you add a file upload question so respondents can do just that. With this new feature, you can easily create a resume collection form, a claim form, or a photography competition form. To get started, click the drop-down menu to add advanced question types and select File upload. Once you add a file upload question, a folder will be automatically created in your OneDrive or SharePoint.

Animated image of a file being uploaded in Microsoft Forms.

The new Productivity Score, simplified licensing, and the latest Windows 10 release

New capabilities to help you transform workplace productivity, tap into the power of the cloud, and simplify licensing.

Transform how work gets done with insights from Microsoft Productivity Score—At Ignite, we announced Productivity Score to help deliver visibility into how your organization works. Productivity Score identifies where you can enable improved employee and technology experiences, and recommends actions to update skills and systems, so people can reach their goals and everyone can do their best work.

For example, Productivity Score can recommend user training around how to better collaborate as well as provide IT with documentation to configure external sharing and fine-tune policies, remove problem agents, or upgrade hardware to reduce friction. Join the private preview by filling out the form and see your score in the first week of December 2019.

Screenshot of Productivity Score in the Microsoft 365 admin center.

Leverage advanced security offerings with the U.S. Government Community Clouds—Earlier this month, we announced the general availability of Microsoft Cloud App Security and Azure Advanced Threat Protection (ATP) for U.S. Government GCC High customers. The release of these services delivers advanced security functionality for customers while enabling them to meet increased compliance and security standards. Eligible customers will need a GCC High account or an Azure Government account to purchase Microsoft Cloud App Security and/or Azure ATP licenses. To start a trial for either service within EMS E5, please work with your account team.

Simplified licensing for Windows 10 co-management—We’re bringing System Center Configuration Manager (ConfigMgr) and Microsoft Intune together in a new, unified product called Microsoft Endpoint Manager that delivers a seamless, end-to-end management solution without the complexity of a migration or disruption. We’re also excited to announce that the simplified licensing makes Microsoft Intune user licenses available to ConfigMgr customers to co-manage their existing Windows 10 PCs. The change in licensing terms is expected to go into effect in early December 2019.


Get the latest version of Windows 10—Windows 10 version 1909 is now available—offering new capabilities and enhancements, intelligent security, simplified updates, flexible management, and enhanced productivity. Highlights include the new Windows Search experience in File Explorer, the new cloud clipboard with history viewing, support for third-party digital assistants, processor enhancements, additional customization for kiosk mode, and more. Version 1909 is rolling out now for consumers and IT admins.

As always, everything we create for Microsoft 365 is designed to help you and your organization achieve more by being more productive. Over the last 12 months, we worked hard to build an increasingly seamless experience that uses AI and automation to help you collaborate across platforms, streamline your workflow, harness organizational knowledge, and stay ahead of ever-evolving security threats.

We look forward to bringing you so much more innovation and educational tools in the year to come. Equipped with incredible tech and the right educational support, there’s no end to what you can achieve.

Author: Microsoft News Center