Tag Archives: host

Benefits of virtualization highlighted in top 10 stories of 2019

When an organization decides to pursue a virtual desktop solution, a host of questions awaits it.

Our most popular virtual desktop articles this year highlight that fact and show how companies are still trying to get a handle on the virtual desktop infrastructure terrain. The stories explain the benefits of virtualization and provide comparisons between different enterprise options.

A countdown of our most-read articles, determined by page views, follows.

  10. Five burning questions about remote desktop USB redirection

Virtual desktops strive to mimic the traditional PC experience, but using local USB devices can create a sticking point. Remote desktop USB redirection enables users to attach devices to their local desktop and have them function normally. In 2016, we explored options for redirection, explained how the technology worked and touched on problem areas, such as scanners, which are infamously problematic with redirection.

  9. Tips for VDI user profile management

Another key factor for virtualizing the local desktop experience includes managing things like a user’s browser bookmarks, desktop background and settings. That was the subject of this FAQ from 2013 and our ninth most popular story for 2019. The article outlines options for managing virtual desktop user profiles, from implementing identical profiles for everyone to ensuring that settings once saved locally carry over to the virtual workspace.

  8. VDI hardware comparison: Thin vs. thick vs. zero clients

The push toward centralizing computing services has created a market for thin and zero clients, simple and low-cost computing devices reliant on servers. In implementing VDI, IT professionals should consider the right option for their organization. Thick clients, the traditional PC, provide proven functionality, but they also sidestep some of the biggest benefits of virtualization such as lower cost, energy efficiency and increased security. Thin clients provide a mix of features, and their simplicity brings VDI’s assets, such as centralized management and ease of local deployment, to bear. Zero clients require even less configuration, as they have nothing stored locally, but they tend to be proprietary.

  7. How to troubleshoot remote and virtual desktop connection issues

Connection issues can disrupt employee workflow, so avoiding and resolving them is paramount for desktop administrators. Once the local hardware has been ruled out, there are a set of common issues — exceeded capacity, firewalls, SSL certificates and network-level authentication — that IT professionals can consider when solving the puzzle.

  6. Comparing converged vs. hyper-converged infrastructure

What’s the difference between converged infrastructure (CI) and hyper-converged infrastructure (HCI)? This 2015 missive took on that question in our sixth most popular story for 2019. In short, while CI houses four data center functions — computing, storage, networking and server virtualization — into a single chassis, HCI looks to add even more features through software. HCI’s flexibility and scalability were touted as advantages over the more hardware-focused CI.

  5. Differences between desktop and server virtualization

To help those seeking VDI deployment, this informational piece from 2014 focused on how desktop virtualization differs from server virtualization. Server virtualization partitions one server into many, enabling organizations to accomplish tasks like maintaining databases, sharing files and delivering media. Desktop virtualization, on the other hand, delivers a virtual computer environment to a user. While server virtualization is easier to predict, given its uniform daily functions, a virtual desktop user might call for any number of potential applications or tasks, making the distinction between the two key.

  4. Application virtualization comparison: XenApp vs. ThinApp vs. App-V

This 2013 comparison pitted Citrix, VMware and Microsoft’s virtualization services against each other to determine the best solution for streaming applications. Citrix’s XenApp drew plaudits for the breadth of the applications it supported, but its update schedule provided only a short window to migrate to newer versions. VMware ThinApp’s portability was an asset, as it did not need installed software or device drivers, but some administrators said the service was difficult to deploy and the lack of a centralized management platform made handling applications trickier. Microsoft’s App-V provided access to popular apps like Office, but its agent-based approach limited portability when compared to ThinApp.

  3. VDI shops mull XenDesktop vs. Horizon as competition continues

In summer 2018, we took a snapshot of the desktop virtualization market as power players Citrix and VMware vied for a greater share of users. At the time, Citrix’s product, XenDesktop, was used in 57.7% of on-premises VDI deployments, while VMware’s Horizon accounted for 26.9% of the market. Customers praised VMware’s forward-facing emphasis on cloud, while a focus on security drew others to Citrix. Industry watchers wondered if Citrix would maintain its dominance through XenDesktop 7.0’s end of life that year and if challenger VMware’s vision for the future would pay off.

  2. Compare the top vendors of thin client systems

Vendors vary in the types of thin client devices they offer and the scale they can accommodate. We compared offerings from Advantech, Asus, Centerm Information, Google, Dell, Fujitsu, HP, Igel Technology, LG Electronics, Lenovo, NComputing, Raspberry Pi, Samsung, Siemens and 10ZiG Technology to elucidate the differences between them, and the uses for which they might be best suited.

  1. Understanding nonpersistent vs. persistent VDI

This article from 2013 proved some questions have staying power. Our most popular story this year explained the difference between two types of desktops that can be deployed on VDI. Persistent VDI provides each user his or her own desktop, allowing more flexibility for users to control their workspaces but requiring more storage and heightening complexity. Nonpersistent VDI does not save settings once a user logs out, a boon for security and consistent updates, but less than ideal for providing easy access to needed apps.

Go to Original Article
Author:

SageMaker Studio makes model building, monitoring easier

LAS VEGAS — AWS launched a host of new tools and capabilities for Amazon SageMaker, AWS’ cloud platform for creating and deploying machine learning models; drawing the most notice was Amazon SageMaker Studio, a web-based integrated development environment.

In addition to SageMaker Studio, the IDE for building, using and monitoring machine learning models, the other new AWS products aim to make it easier for non-expert developers to create models and to make those models more explainable.

During a keynote presentation at the AWS re:Invent 2019 conference here Tuesday, AWS CEO Andy Jassy described five other new SageMaker tools: Experiments, Model Monitor, Autopilot, Notebooks and Debugger.

“SageMaker Studio along with SageMaker Experiments, SageMaker Model Monitor, SageMaker Autopilot and SageMaker Debugger collectively add lots more lifecycle capabilities for the full ML [machine learning] lifecycle and to support teams,” said Mike Gualtieri, an analyst at Forrester.

New tools

SageMaker Studio, Jassy claimed, is a “fully-integrated development environment for machine learning.” The new platform pulls together all of SageMaker’s capabilities, along with code, notebooks and datasets, into one environment. AWS intends the platform to simplify SageMaker, enabling users to create, deploy, monitor, debug and manage models in one environment.

Google and Microsoft have similar machine learning IDEs, Gualtieri noted, adding that Google plans for its IDE to be based on DataFusion, its cloud-native data integration service, and to be connected to other Google services.

SageMaker Notebooks aims to make it easier to create and manage open source Jupyter notebooks. With elastic compute, users can create one-click notebooks, Jassy said. The new tool also enables users to more easily adjust compute power for their notebooks and transfer the content of a notebook.

Meanwhile, SageMaker Experiments automatically captures input parameters, configuration and results of developers’ machine learning models to make it simpler for developers to track different iterations of models, according to AWS. Experiments keeps all that information in one place and introduces a search function to comb through current and past model iterations.
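
To make that concrete, here is a minimal, hedged sketch of the kind of workflow Experiments supports, using the boto3 SageMaker client; the experiment name, trial name and region below are hypothetical placeholders rather than details from the article.

```python
import boto3

# Hypothetical names and region, for illustration only.
sm = boto3.client("sagemaker", region_name="us-east-1")

# An experiment groups related model iterations under one searchable name.
sm.create_experiment(
    ExperimentName="churn-prediction",
    Description="Iterations of the churn model with different hyperparameters",
)

# Each iteration (for example, a training run with new parameters) is a trial.
sm.create_trial(
    TrialName="churn-xgboost-depth-5",
    ExperimentName="churn-prediction",
)

# Current and past iterations can then be listed and compared in one place.
for trial in sm.list_trials(ExperimentName="churn-prediction")["TrialSummaries"]:
    print(trial["TrialName"])
```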

AWS CEO Andy Jassy talks about new Amazon SageMaker capabilities at re:Invent 2019

“It is a much, much easier way to find, search for and collect your experiments when building a model,” Jassy said.

As the name suggests, SageMaker Debugger enables users to debug and profile their models more effectively. The tool collects and monitors key metrics from popular frameworks, and provides real-time metrics about accuracy and performance, potentially giving developers deeper insights into their own models. It is designed to make models more explainable for non-data scientists.

SageMaker Model Monitor also tries to make models more explainable by helping developers detect and fix concept drift, which refers to the evolution of data and data relationships over time. Unless models are updated in near real time, concept drift can drastically skew the accuracy of their outputs. Model Monitor constantly scans the data and model outputs to detect concept drift, alerting developers when it detects it and helping them identify the cause.
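
As a rough illustration of that workflow, the hedged sketch below uses the model-monitor helpers in the SageMaker Python SDK to baseline training data and schedule hourly drift checks; the role, endpoint name and S3 URIs are placeholders, and it assumes data capture is already enabled on the endpoint.

```python
from sagemaker import get_execution_role
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = get_execution_role()  # assumes this runs inside a SageMaker environment

# Processing resources used for the baseline and scheduled monitoring jobs.
monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Profile the training data to produce the statistics and constraints
# that captured endpoint traffic will later be compared against.
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train/train.csv",   # placeholder
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitoring/baseline",  # placeholder
)

# Scan captured traffic every hour and flag deviations from the baseline.
monitor.create_monitoring_schedule(
    monitor_schedule_name="churn-endpoint-drift",
    endpoint_input="churn-endpoint",                     # placeholder endpoint
    output_s3_uri="s3://my-bucket/monitoring/reports",   # placeholder
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```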

Automating model building

With Amazon SageMaker Autopilot, developers can automatically build models without, according to Jassy, sacrificing explainability.

Autopilot is “AutoML with full control and visibility,” he asserted. AutoML essentially is the process of automating machine learning modeling and development tools.

The new Autopilot module automatically selects the correct algorithm based on the available data and use case and then trains 50 unique models. Those models are then ranked by accuracy.
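
For a sense of what that looks like in practice, here is a hedged boto3 sketch that starts an Autopilot job and then lists the candidate models it produced, sorted by their final objective metric; the job name, S3 paths, target column and IAM role ARN are placeholders, not values from the article.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Point Autopilot at a labeled CSV dataset and tell it which column to predict.
sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot",
    InputDataConfig=[
        {
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://my-bucket/autopilot/train/",  # placeholder
                }
            },
            "TargetAttributeName": "churned",                    # placeholder column
        }
    ],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/autopilot/output/"},
    RoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",    # placeholder ARN
)

# Once the job finishes, the candidate models it trained can be ranked
# by their final objective metric.
candidates = sm.list_candidates_for_auto_ml_job(
    AutoMLJobName="churn-autopilot",
    SortBy="FinalObjectiveMetricValue",
    SortOrder="Descending",
)
for candidate in candidates["Candidates"]:
    metric = candidate["FinalAutoMLJobObjectiveMetric"]
    print(candidate["CandidateName"], metric["MetricName"], metric["Value"])
```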

“AutoML is the future of ML development. I predict that within two years, 90 percent of all ML models will be created using AutoML by data scientists, developers and business analysts,” Gualtieri said.

SageMaker Autopilot is a must-have for AWS.
Mike Gualtieri, analyst, Forrester

“SageMaker Autopilot is a must-have for AWS, but it probably will help” other vendors also, including such AWS competitors as DataRobot because the AWS move further legitimizes the automated machine learning approach, he continued.

Other AWS rivals, including Google Cloud Platform, Microsoft Azure, IBM, SAS, RapidMiner, Aible and H2O.ai, also have automated machine learning capabilities, Gualtieri noted.

However, according to Nick McQuire, vice president at advisory firm CCS Insight, some of the new AWS capabilities are innovative.

“Studio is a great complement to the other products as the single pane of glass developers and data scientists need and its incorporation of the new features, especially Model Monitor and Debugger, are among the first in the market,” he said.

“Although AWS may appear late to the game with Studio, what they are showing is pretty unique, especially the positioning of the IDE as similar to traditional software development with … Experiments, Debugger and Model Monitor being integrated into Studio,” McQuire said. “These are big jumps in the SageMaker capability on what’s out there in the market.”

Google also recently released several new tools aimed at delivering explainable AI, plus a new product suite, Google Cloud Explainable AI.

Go to Original Article
Author:

Tableau analytics platform upgrades driven by user needs

LAS VEGAS — Tableau revealed a host of additions and upgrades to the Tableau analytics platform in the days both before and during Tableau Conference 2019.

Less than a week before its annual user conference, the vendor released Tableau 2019.4, a scheduled update of the Tableau analytics platform. And during the conference, Tableau unveiled not only new products and updates to existing ones, but also an enhanced partnership with Amazon Web Services to help users move to the cloud and a new partner network.

Many of the additions to the Tableau analytics platform have to do with data management, an area Tableau only recently began to explore. Among them are Tableau Catalog and Prep Conductor.

Others, meanwhile, are centered on augmented analytics, including Ask Data and Explain Data.

All of these enhancements to the Tableau analytics platform come in the wake of the news last June that Tableau was being acquired by Salesforce. The deal closed on Aug. 1 but was held up until just last week by a regulatory review in the United Kingdom that examined what effect the combination of the two companies would have on competition.

In a two-part Q&A, Andrew Beers, Tableau’s chief technology officer, discussed the new and enhanced products in the Tableau analytics platform as well as how Tableau and Salesforce will work together.

Part I focuses on data management and AI products in the Tableau analytics platform, while Part II centers on the relationship between Salesforce and Tableau.

Data management has been a theme of new products and upgrades to the Tableau analytics platform — what led Tableau in that direction?

Andrew Beers

Andrew Beers: We’ve been about self-service analysis for a long time. Early themes out of the Tableau product line were putting the right tools in the hands of the people that were in the business that had the data and had the questions, and didn’t need someone standing between them and getting the answers to those questions. As that started to become really successful, then you had what happens in every self-service culture — dealing with all of this content that’s out there, all of this data that’s out there. We helped by introducing a prep product. But then you had people that were generating dashboards, generating data sets, and then we said, ‘To stick to our belief in self-service we’ve got to do something in the data management space, so what would a user-facing prep solution look like, an operationalization solution look like, a catalog solution look like?’ And that’s what started our thinking about all these various capabilities.

Along those lines, what’s the roadmap for the next few years?

Beers: We always have things that are in the works. We are at the beginning of several efforts — Tableau Prep is a baby product that’s a year and a half old. Conductor is just a couple of releases old. You’re going to see a lot of upgrades to those products and along those themes — how do you make prep easier and more approachable, how do you give your business the insight into the data and how it is being used, and how do you manage it? That’s tooling we haven’t built out that far yet. Once you have all of this knowledge and you’ve given people insights, which is a key ingredient in governance along with how to manage it in a self-service way, you’ll start to see the Catalog product grow into ideas like that.

Are these products something customers asked for, or are they products Tableau decided to develop on its own?

Beers: It’s always a combination. From the beginning we’ve listened to what our customers are saying. Sometimes they’re saying, ‘I want something that looks like this,’ but often they’re telling us, ‘Here is the kind of problem we’re facing, and here are the challenges we’re facing in our organization,’ and when you start to hear similar stories enough you generalize that the customers really need something in this space. And this is really how all of our product invention happens. It’s by listening to the intent behind what the customer is saying and then inventing the products or the new capabilities that will take the customer in a direction we think they need to go.

Shifting from data management to augmented intelligence, that’s been a theme of another set of products. Where did the motivation come from to infuse more natural language processing and machine learning into the Tableau analytics platform?

Beers: It’s a similar story here, just listening to customers and hearing them wanting to take the insights that their more analyst-style users got from Tableau to a larger part of the organization, which always leads you down the path of trying to figure out how to add more intelligence into the product. That’s not new for Tableau. In the beginning we said, ‘We want to build this tool for everyone,’ but if I’m building it for everyone I can’t assume that you know SQL, that you know color design, that you know how to tell a good story, so we had to build all those in there and then let users depart from that. With these smart things, it’s how can I extend that to letting people get different kinds of value from their question. We have a researcher in the NLP space who was seeing these signals a while ago and she started prototyping some of these ideas about how to bring natural language questioning into an analytical workspace, and that really inspired us to look deeply at the space and led us to think about acquisitions.

What’s the roadmap for Tableau’s AI capabilities?

With the way tech has been developing around things like AI and machine learning, there are just all kinds of new techniques that are available to us that weren’t mainstream enough 10 years ago to be pulling into the product.
Andrew Beers, chief technology officer, Tableau

Beers: You’re going to see these AI and machine learning-style capabilities really in every layer of the product stack we have. We showed two [at the conference] — Ask Data and Explain Data — that are very much targeted at the analyst, but you’ll see it integrated into the data prep products. We’ve got some smarts in there already. We’ve added Recommendations, which is how to take the wisdom of the crowd, of the people that are at your business, to help you find things that you wouldn’t normally find or help you do operations that you yourself haven’t done yet but that the community around you has done. You’re going to see that all over the product in little ways to make it easier to use and to expand the kinds of people that can do those operations.

As a technology officer, how fun is this kind of stuff for you?

Beers: It’s really exciting. It’s all kinds of fun things that we can do. I’ve always loved the mission of the company, how people see and understand data, because we can do this for decades. There’s so much interesting work ahead of us. As someone who’s focused on the technology, the problems are just super interesting, and I think with the way tech has been developing around things like AI and machine learning, there are just all kinds of new techniques that are available to us that weren’t mainstream enough 10 years ago to be pulling into the product.

Go to Original Article
Author:

Vancouver Canucks defend data with Veeam backup

As host of the ice hockey events at the 2010 Winter Olympics, Aquilini Investment Group, owner of Rogers Arena and the Vancouver Canucks, had to rethink its entire IT game plan.

Rogers Arena has a capacity of around 18,000 people, and its IT infrastructure had to ensure all ticket scanners, Wi-Fi and point-of-sale systems would never go down during the heavy influx of attendees. In 2010, Aquilini revamped its legacy systems, moving away from physical servers and tape to virtualization and VM backup. It deployed VMware and Veeam backup.

“We were starting to see the serious benefits of virtualization compared to traditional physical servers,” said Olly Prince, manager of infrastructure at Canucks Sports & Entertainment and Aquilini Group.

The switch dramatically changed how the Canucks handled backup. Prince described the old system as “hit-or-miss.” Backup copies of data were stored on tapes that were then sent to an off-site facility. When a user needed something restored, the correct tape had to be found and then delivered back to the data center for restoration. The whole process took four or five business days, and there was no guarantee that the restoration would succeed.

With Veeam backup, Prince said, he’s now able to restore data in 10 minutes.

Cloud considerations hinge on cost

As part of the IT revamp, Aquilini has been looking at the cloud more closely, but has only dipped a toe in. So far, there is a single test/dev workload deployed on AWS that isn’t being backed up because of how inconsequential it is. Prince had conducted a cost analysis and found that it’s still cheaper to run most workloads in VMs on premises.

Olly Prince

However, Aquilini wants to dive deeper into the cloud. Some of the ways the company wants to take advantage of the cloud are disaster recovery (DR), Office 365 backup and giving coaches a way to upload videos or access useful player metadata while they are on the road. Right now, the last option is handled by having the team carry a “travel server” with them wherever they go.

“We’re looking at everything as a whole and strategizing what makes sense for our organization to do on cloud or on prem,” said Margaret Pawlak, IT business strategy and project manager at Aquilini Group, Canucks Sports & Entertainment.

Margaret Pawlak

Aquilini recently finished a proof of concept with Microsoft Azure for DR. Prince said he was able to replicate on-premises applications and run them on the cloud, but the next step is factoring in costs. The company’s current DR plan involves replicating and failing over to an off-site facility about 60 kilometers away from the main data center. That site also houses its own separate production environment, so while it has enough storage to bring enough VMs back online to keep the business running, it won’t include absolutely everything.

Although Pawlak and Prince said they’re actively working on pushing some of these cloud strategies, they’re having difficulty convincing the rest of the organization that changes are necessary.

Horror stories don’t get you a [cloud backup] budget.
Olly Prince, manager of infrastructure, Canucks Sports & Entertainment

In the case of Office 365 backup, there is a pervasive myth that its native long-term retention policy is a suitable replacement for true, point-in-time backup. Prince pointed out that retention doesn’t help when trying to restore a corrupted or deleted file.

In the case of DR, Pawlak said it is hard to put a business case forward for what is essentially insurance. The benefit is not something tangible until a real disaster hits, and there’s a belief that such an event will never actually happen. Prince said it’s a difficult attitude to overcome until it’s too late — no matter how many times he’s shared IT horror stories from his peers in the industry.

“Horror stories don’t get you a budget,” Prince said.

Backup strategies beyond the rink

Prince’s team of four IT personnel, himself included, is responsible not just for the Canucks franchise and Rogers Arena, but for hotels, wineries and other properties owned by Aquilini Group. A total of 180 TB from 60 VMware VMs are being protected by Veeam backup. Aside from the daily business data generated by Rogers Arena, some of the VMs also house audio and visual data, as well as player performance metadata that the Canucks franchise uses for scouting, training and coaching.

Aquilini uses Darktrace for cyberdefense, but Prince focuses much of his attention on user training as well. He said ransomware is more likely to get through unaware staff than through vulnerabilities in devices or workstations they use, so he trains them on how to spot phishing and avoid executing programs they’re unsure of. A good backup system is also an important part of the overall security package.

Aquilini would not comment on other data protection vendors that were considered besides Veeam, but Prince said ease of deployment and use were huge factors in the decision, given how small his IT staff is.

Prince said he wants Veeam to work natively with Azure cold storage, which it currently doesn’t. On top of certain files that need to be retained for compliance reasons, the Canucks franchise has a large amount of audio and visual files that need to be archived for potential future use. Not all the footage is mission-critical, but some clips might be useful for pulling together a promotional video.

“It would be nice to take a backup of that and shove it somewhere cheap,” Prince said.

Go to Original Article
Author:

For Sale – 4U Silenced Custom Server

Here I have for sale my old lab ESXi host, which after a house move is now surplus to requirements.

The host itself comprises the following components:

– X-Case 4U Case
– Supermicro X9SRL-F Motherboard
– E5-2670 Xeon Processor (8 cores @ 2.60GHz)
– 16GB (2 x 8GB DIMM) 1333MHz Registered ECC Memory
– Noctua NH-U9DX i4 Xeon Cooler
– Corsair G650M Power Supply

Not latest tech by any stretch of the imagination, but certainly more than enough for the majority of home use cases.

Price and currency: £200
Delivery: Delivery cost is not included
Payment method: BACS / PPG
Location: Newbury
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Go to Original Article
Author:

NVMe arrays power Plex MRP cloud

Cloud ERP provider Plex Systems requires a storage setup that can host hundreds of petabytes, while meeting high thresholds for performance and availability. The software-as-a-service provider is in its final year of a storage transition in which it added NVMe arrays for performance and two additional data centers for high availability.

Plex has been running a cloud for 19 years, since its 2001 inception. It started as a multi-tenant application run through a browser for customers.

“We’ve always been a cloud to manufacturers,” said Todd Weeks, group vice president of cloud operations and chief security officer for Plex. “We’ve been 100% cloud-based to our customers.”

“It looks like a public cloud to our customers, but we see it as a private cloud,” he continued. “It’s not running in Azure, AWS or Google. It’s our own managed cloud.”

The Plex private cloud runs mainly on Microsoft software, including SQL Server, and Dell EMC storage, including PowerMax all-NVMe arrays.

Scaling out with two new data centers

Weeks said Plex’s capacity from customer data grows from 15% to 25% per year. He said it has more than 200 PB of data for about 700 customers and 2,300 worldwide manufacturing sites, and it processes more than 7 billion transactions a day with 99.998% availability.

Todd Weeks, group vice president of cloud operations and chief security officer at Plex Systems

“With the growth of our company, we wanted a much better scale-out model, which we have with our two additional data centers,” he said. “Then, we said, ‘Besides just scaling out, is there more we can get out of them around availability, reliability and performance?'”

The company, based in Troy, Mich., has storage spread among data centers in Auburn Hills, Mich.; Grand Rapids, Mich.; Denver; and Dallas. The data centers are split into redundant pairs for failover, with primary storage and backup running at all four.

Weeks said Plex has used a variety of storage arrays, including ones from Dell EMC, Hitachi Vantara and NetApp. Plex is in the final year of a three-year process of migrating all its storage to Dell EMC PowerMax 8000 NVMe arrays and VxBlock converged infrastructure that includes VMAX and XtremIO all-flash arrays.

Two data centers have Dell EMC PowerMax, and the other two use Dell EMC VxBlock storage as mirrored pairs. Backup consists of Dell EMC Avamar software and Data Domain disk appliances.

“If we lose one, we fail over to the other,” Weeks said of the redundant data centers.

The performance advantage

Weeks said switching to the new storage infrastructure provided a “dramatic increase in performance,” both for primary and backup data. Backup restores have gone from hours to less than 30 minutes, and read latency has been at least three times faster, he said. Data reduction has also significantly increased, which is crucial with hundreds of petabytes of data under management.

“The big win we noticed was with PowerMax. We were expecting a 3-to-1 compression advantage from Hitachi storage, and we’ve actually seen a 9-to-1 difference,” he said. “That allows us to scale out more efficiently. We’ve bought ourselves a couple of years of extra growth capacity. We always want to stay ahead of our customers’ needs, and our customers are database-heavy. We’re also making sure we’re a couple of years ahead of where we need to be performance-wise.”

Early all-flash arrays

Plex’s introduction to EMC storage came through XtremIO all-flash arrays. While performance was the main benefit of those early all-flash systems, Weeks said, the XtremIO REST API impressed his team.

“Being able to call into [the API] made it much more configurable,” he said. “Our storage engineers said, ‘This makes my job easier.’ It’s much easier than having to script and do everything yourself. It makes it much easier to implement and deploy.”

Weeks said Plex is reluctant to move data into public clouds because of the fees incurred for data transfers. But it does store machine information gathered from the Plex industrial IoT (IIoT) SaaS product on Microsoft Azure.

“We gather plant floor machine information and tie it into our ERP,” he said. “But we don’t use public clouds for archiving or storage.”

Plex’s IT roadmap includes moving to containerized applications, mainly to support the Plex IIoT service.

“We’re looking now at how we can repackage our application,” he said. “We’re just beginning to go in the direction of microservices and containers.”

Go to Original Article
Author:

What is Azure Bastion?

In this post, you’ll get a short introduction to Azure Bastion Host. To be honest, I still don’t know if I should pronounce it as [basˈti̯oːn] (German), /bæstʃən/ (US English) or [basˈt̪jõn] (French), but that shouldn’t stop us from learning more about Azure Bastion Host: what it is and when it’s useful.

So let’s start.

What is Azure Bastion Host?

Azure Bastion Host is a Jump-server as a Service within an Azure vNet (note that this service is currently in preview). What does that mean exactly? Well, a jump server is a fixed point on a network that is the sole place for you to remote in, get to other servers and services, and manage the environment. Now some will say, “But I can build my own jump server VM myself!” While you’re certainly free to do that, there are some key differences between the self-built VM option and a Bastion Host.

A regular Jump-server VM must either be reachable via VPN or needs to have a public IP with RDP and/or SSH open to the Internet. Option one, in some environments, is rather complex. Option two is a security nightmare. With Azure Bastion Host, you can solve this access issue. Azure Bastion enables you to use RDP and SSH via the Internet or (if available) via a VPN using the Azure Portal. The VM does not need a public IP, which GREATLY increases security for the target machine.

NOTE: Looking for more great content on security? Watch our webinar on Azure Security Center On-Demand.

After the deployment (which we’ll talk about in a second), Bastion becomes the 3rd option when connecting to a VM through the Azure Portal, as shown below.

Bastion

Virtual Machine Bastion

After you hit connect, an HTTPS browser window opens and your session runs within an SSL-encrypted window.

Bastion in browser

Azure Bastion Use Cases

Now let’s list some possible use-cases. Azure Bastion can be very useful (but not limited) to these scenarios:

  1. Your Azure-based VMs are running in a subscription where you’re unable to connect via VPN, and for security reasons, you cannot set up a dedicated Jump-host within that vNet.
  2. The usage of a Jump-host or Terminal Server in Azure would be more cost-intensive than using a Bastion Host within the VNet (e.g. when you have more than one admin or user working on the host at the same time.)
  3. You want to give developers access to a single VM without giving them access to additional services like a VPN or other things running within the VNet.
  4. You want to implement Just in Time (JIT) Administration in Azure. You can deploy and enable Bastion Host on the fly and as you need it. This allows you to implement it as part of your Operating System Runbook when you need to maintain the OS of an Azure-based VM. Azure Bastion allows you to do this without setting up permanent access to the VM.

The way you deploy Azure Bastion Host within a VNet is pretty straightforward. Let’s go through the steps together.

  1. Open the Azure Preview Portal through the following link.
  2. Search for the feature in the Azure Marketplace and walk through the deployment wizard by filling out the fields shown below.

create a bastion

Again, the deployment is quite simple and most options are fairly well explained within the UI. However, if you want further details, you can find them in the official feature documentation here.
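
If you would rather script the rollout than click through the wizard, the hedged sketch below shows one possible approach with the Azure SDK for Python (azure-identity plus a track-2 azure-mgmt-network). It assumes the dedicated subnet named AzureBastionSubnet and a Standard-SKU public IP already exist in the target vNet; the subscription ID, resource group and resource names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

resource_group = "rg-lab"  # placeholder names throughout
subnet = client.subnets.get(resource_group, "vnet-lab", "AzureBastionSubnet")
public_ip = client.public_ip_addresses.get(resource_group, "pip-bastion")

# Create (or update) the Bastion Host inside the dedicated subnet.
poller = client.bastion_hosts.begin_create_or_update(
    resource_group,
    "bastion-lab",
    {
        "location": "westeurope",
        "ip_configurations": [
            {
                "name": "bastion-ipconfig",
                "subnet": {"id": subnet.id},
                "public_ip_address": {"id": public_ip.id},
            }
        ],
    },
)
bastion = poller.result()  # blocks until the deployment completes
print(bastion.provisioning_state)
```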

Also, be aware that a Bastion Host must be implemented in every vNet where you want to connect to a VM. Currently, Bastion does not support vNet Peering.

How Much Does Azure Bastion Cost?

Pricing for Bastion is pretty easy to understand. As with all Microsoft VM services, you pay for the time each Bastion Host is deployed. You can easily calculate the costs for the Bastion Hosts you need via the Azure Pricing Calculator.

I based my example on one Bastion Host in West Europe, with the assumption that it would be needed all month long.

Azure Bastion Price Calculator

Bastion Roadmap Items

Since Bastion is still in preview, there are a number of things that Microsoft is adding to its feature set, including:

  1. Single-Sign-On with Azure AD
  2. Multi-Factor Auth
  3. vNet Peering (Not confirmed, but being HEAVILY requested by the community right now)

vNet Peering support would make it so that only a single Bastion Host in a Hub or Security vNet is needed.

You can see additional feature requests or submit your own via the Microsoft Feedback Forum.

If you like a feature request or want to push your own, keep an eye on the votes. The more votes a piece of feedback has, the more likely it is that Microsoft will work on the feature.

Additional Documentation and Wrap-Up

Additional documentation can be found on the Azure Bastion Sales Page.

Finally, I’d like to wrap up by finding out what you think of Azure Bastion. Do you think this is a worthy feature? Is this something that you’ll be putting into production once the feature is out of preview? Any issues you currently see with it today? Let us know in the comments section below!

Finally, if you’re interested in learning more about Azure security issues why not watch our webinar session on Azure Security Center? Presented by Thomas Maurer from the Azure Engineering Team, you will learn all about this important security package and how you should be using it to ensure your Azure ecosystem is fully protected!

Azure Security Center Webinar

Watch the Webinar

Thanks for reading!

Go to Original Article
Author: Florian Klaffenbach

Introducing Evernote for Microsoft Teams

Over the years, Evernote has made teamwork easier by building integrations with a host of powerful apps, including Microsoft Outlook, Salesforce, Google Drive, Slack, and many others. Today we’re pleased to add another big name to that list.

Introducing Evernote for Microsoft Teams

Microsoft Teams is the communication hub for productive companies, where teams can chat, share messages, and move projects forward. As part of the Office 365 suite, it enables colleagues to share emails, documents, spreadsheets, and presentations, and manage the flow of information.

Our latest integration brings Evernote into the context of your conversations in Teams so you can easily reference specific notes within a conversation, and access notes without having to leave the Teams experience.

With Evernote for Microsoft Teams, you can seamlessly share, pin, edit, and search your Evernote content—right from the Microsoft Teams app. This helps you work without interruption and keeps everyone on the same page.

We sat down with Mansoor Malik, Principal Product Manager for Microsoft Teams, and Leo Gong, Senior Product Manager at Evernote, to get their thoughts on this new integration. We asked them why the partnership between Evernote and Microsoft is so exciting, and what it means for customers and the future of teamwork.

Q: What does integrating with Evernote bring to the Microsoft Teams product, and how will users benefit?

Mansoor Malik (Microsoft): Microsoft Teams democratizes information. It makes it available, brings transparency to it, and ensures everyone has access to it.

With this integration, users can now access their Evernote content and share it with the whole team—in one place, and in the same channel. You don’t have to remember a URL or switch back and forth between Teams and Evernote. It’s all right there.

Leo Gong (Evernote): For a lot of our customers, Evernote is their second brain. It’s where they collect all their information and the ideas they’re working on. Combining these two places allows them to easily tap into that knowledge hub and share it with everyone.

Let’s say you’re trying to plan logistics around a product launch in Microsoft Teams. Being able to access Evernote allows you to keep a record of what people are agreeing upon, and what the current plan is—in parallel to the conversation.

Q: What is the problem that this solves for users?

MM: You may have to-dos that you want to add in Evernote, and you may want to start talking about them. You can either share a snippet of it in Teams and start a conversation that way, or you can pin it as a tab and have the conversation around that tab.

What’s cool is that the conversation you have, in context with the note that’s pinned, happens right there. It can also be persistent so it stays within the chat. So anyone from the team can either jump into that conversation in real time or, if they come in later, reply to it in the same thread, with the same context.

LG: Many people use Evernote as a repository for their business’s information. This integration helps them very easily share that information whenever they’re asked.

Also, the same questions often get asked again and again. The Pinned tab allows you to pin a note in the channel with answers to all those frequently asked questions, so it’s easily accessible for others.

Finally, there can often be 10 to 20 different messages that you need to consider when you’re making a decision. It gets unmanageable very quickly. So it’s good to have a tab, one place to keep a list of “What’s the decision we just made, and what are the next steps?”

Q. What do you think people struggle with the most when it comes to sharing information within a team setting?

MM: Before, if you wanted to share something, you’d have to open up your email and attach a Word document or a file, and send it to somebody—even your colleague who’s sitting in the next office. Then you’d have to wait for their reply, then revise it, and so on. This integration means that those conversations, those decisions, can be documented, edited, and captured in real time, so you don’t have to wait for the back and forth.

LG: I think it’s the friction around sharing information. Even beyond this initial launch, we’re interested in making that easier. How can we automate the sharing of information? That’s something we think about.

Q: In your experience, how have workflows evolved over time? Do you find that people are asking for integrations with their favorite tools often?

MM: Employees today are on twice as many teams as they were five years ago. The amount of time that employees spend engaging in collaborative work—in meetings, on phone calls, or answering emails—has increased about 50 percent. It takes up to 80 percent more of employees’ time. Notwithstanding that, productivity experiences are getting fragmented over time, leading to reduced productivity, change fatigue, and reduced employee sentiment and morale. This integration tries to reunify the experience to address these issues.

LG: Workplaces are evolving to include more specialized tools, so more than ever we see a lot of different teams, and a lot of individuals, wanting and expecting choice at work.

Even with note editing, which is a relatively simple use case, there are so many tools out there and each of them has different strengths. Integrations allow customers to use the tools that will make them effective, because they’re able to bring their own tools into their collaborative work.

Evernote integrates with all types of documents and helps people share notes very easily, so that they can choose the tools they need to make them effective. With Microsoft Teams, you don’t have to use a specific database or a specific task management tool. Teams becomes the glue that helps you and your team work together—even if they’re on different systems.

Q: When integrating with another product, is there a typical checklist you go through? What makes this partnership a good fit?

MM: We look at how we can add value to our mutual customers. Specifically, we look at common teamwork productivity scenarios and ways to make it easier for people to get their job done, to make their experience more valuable, and enhance it so that they feel like it’s easy.

Evernote is a great fit for Teams because people are already working together in teams. Having Evernote integrated there just makes sense, to help them get their job done faster.

The other thing we look at is shared vision with our partners around the digital and cultural transformation that’s happening in the modern workplace. We certainly have to snap to that.

LG: It’s the same for us. The top bar that we need to clear is: Is there a natural fit in the users’ workflows? Does this measurably make their lives better? And second, what do we have to offer Microsoft? How does this make Evernote users more successful as well? And lastly, it’s a feasibility consideration, which is: Can we build it and how quickly?

Q: From a strategic product perspective, how do you keep up with the needs of an increasingly demanding customer?

MM: We’re always listening to our customer feedback, whether it’s on Twitter, UserVoice, or within our end product feedback tool. We also look at the way people are working and features they’re asking for, whether it’s apps for mobile, or even desktop.

We’re also trying to envision what the future of work will look like on a longer-term horizon. As the workforce changes, as Millennials get on board, they definitely have new demands. We look into that, we prioritize it, and we put it in the backlog. Whatever is most asked for gets done first, and we go down the stack from there.

LG: One, it’s having an ear to the ground. We spend a lot of time talking to our customers, and often we’ll see opportunities for improvement.

Two, is doing pretty extensive testing with features that we want to launch, and making sure that we’re doing it in a way that’s actually helpful to our users. You don’t want to necessarily implement exactly what the customer is requesting because often it’s a symptom of a greater or undiscovered need. So we think about what they’re really trying to say, and what they’re really struggling with.

Q: I imagine that can be hard at times, like doing a bit of detective work.

LG: Exactly.

MM: Yep, totally agree.

Q: There has been a shift from having competitors to the idea of “playing well with others.” What is your view on adopting this approach from the technology standpoint?

MM: We’re building a product for collaboration, so we have to be collaborative. By working and playing with others, we help our customers and users get the most value. And in this particular case, it really helps increase their productivity, and users love it. So if we can increase productivity, if we can keep the user engaged, even if it’s working with a competitor or a partner, so be it. That’s why we are open and willing to let people use the tools they want to use. Because we believe that tools and technology facilitate productivity and enable customers to get more done faster.

LG: Playing well with others has always been a core value for Evernote. We help you capture your thoughts and information—wherever it comes from.

As to how we adopt it from a technology standpoint, it means building our product in a modular way so we’re not just supporting a single document type. We’re architecting the app in a way where it can accept any document type as a module, so you can plug-and-play additional ones in the future.

It’s a win-win because building a product in a way that supports integrations speeds up your own development. Your developers will thank you because when they’re trying to extend functionality into the product in the future, they will also benefit.

Q: Advancements in technology have made it possible for people to work anywhere, from any device. How can we keep up with the demands of such a highly connected workforce?

MM: Every team is different. Every individual is different, and they have their unique preferences and needs. As a platform, Microsoft Teams enables people to bring anything they want in terms of the apps and services they use the most. By doing so they can customize Microsoft Teams to fit their needs better for their increased productivity.

By allowing these types of integrations, by working well with other partners and competitors, we’re meeting the demands of a highly connected workforce. At the same time, we’re making sure, as Evernote is, that we’re cross-platform, cross-device, multi-screen. We want to make sure that wherever you are, however you’re connected, you can get your work done.

LG: In a way, the causation is a little fuzzy because having integrations enables you to work from anywhere and from any device. At the same time, integrations help you live better in a world like that.

I think where Microsoft Teams is really helpful is that it provides a hub for you to manage a lot of complexity. Because if everybody’s using 20 different apps, it becomes very difficult to manage. But if there’s some way for you to start centralizing your communications, with all of your sharing in one place, it helps people manage the overload of information.

Q: What do you see changing in the next five years with regard to the way people are working? And how are you looking to solve that with new product features and/or updates?

MM: Everybody is looking to get stuff done faster. What we are thinking, with these integrations, is how we can use machine learning or AI to help them do that.

For example, imagine you’re making a note that you need to send marketing materials for review and approval. It’d be cool if, as you’re typing or talking about it, an AI bot senses that this is actually a task that needs to be created and assigned to somebody, and then followed up on. Those are ways that we can improve productivity by doing things for people on their behalf.

Call recording, transcription, and translation is also something that we are looking into. All this stuff can get done automatically.

LG: I see there being two related trends. One is that there’s a rapid acceleration of the amount of information that people are consuming. Number two is that technology has gotten to a point where it’s actually possible to help users manage that overflow of information, so we’re at a really interesting time.

The first thing that will really help people is better aggregation and integrations. I see Evernote being the place that helps you manage your information by integrating with the tools you use to create information, and collecting all of that in one central hub.

The second piece of technology is, as Mansoor mentioned, AI and machine learning. The interesting thing that we’ll be able to do in the next five years is apply machine learning to help users make sense of information that they’re getting. Because it’s really important to be able to sift through it all and figure out what’s important.

The analogy I love to give is: If I walk into your kitchen, it might be really tidy, but I don’t know where anything is kept. Machine learning allows us to surface your items in your kitchen, in a context that makes sense with regard to how I organize and how I think.

Get started

To find out for yourself how much more your team can achieve, simply head over to the Microsoft App Store and install Evernote for Microsoft Teams today. For more information, check out this Quick Start Guide.