Tag Archives: systems

One Virtual System Worldwide: Intra-Epic interoperability

One Virtual System Worldwide, Epic Systems Corp.’s new intra-Epic interoperability framework, is getting a warm reception from electronic health record users and others in health IT.

The new features are contained in a simple and apparently easy-to-use clinician-facing interface in the vendor’s Epic 2018 EHR system upgrade, expected to be released in February.

Health data sharing for Epic users

The functions enable different Epic healthcare providers around the world to share medical images, book appointments for each other’s patients, and search one another’s health data and free text. Another One Virtual System Worldwide function, intra-Epic system messaging, was already available.

“I’m strongly encouraged. It’s really important for the electronic medical record vendors to lower barriers to interoperability,” Brian Clay, M.D., an Epic user and chief medical informatics officer at University of California San Diego Health, told SearchHealthIT. “This move by Epic should make sharing information easier, both for providers and patients.”

Despite wide and long-standing criticism of the giant vendor for allegedly making it hard to share data from its EHR, Epic has long maintained that it has always provided full interoperability within its own user base and with outside entities, as well.

New openness for Epic

In what looks like part of a concerted new effort to combat those perceptions, the privately held company revealed the One Virtual System Worldwide concept with an upbeat news release. It may have been the first time Epic made a major announcement publicly.

Nancy Fabozzi, principal analyst of connected health at Frost & Sullivan, said she was impressed after looking over publicly available materials about One Virtual System Worldwide.

“Anything they can do to move the needle forward on interoperability is going to be appreciated in the marketplace. What’s not to like?” Fabozzi said. “The interface, from what I see, with its clean buttons, is really nice. This is exactly the kind of thing that clinicians want to see and how they want to interact with electronic health records.”

Fabozzi added that she sees the latest Epic interoperability move as a simultaneous way to open up to the outside world, answer questions about its commitment to interoperability, and stay abreast of the fast-changing healthcare and health IT markets.

Healthcare markets changing quickly

Epic understands that the world is changing very, very dramatically, and the cloistered world they had is gone.
Nancy Fabozzi, principal analyst of connected health at Frost & Sullivan

In addition to Apple’s move into health records, the healthcare industry was roiled in recent days by the blockbuster news that Amazon, Berkshire Hathaway and JPMorgan Chase are forming an independent healthcare company for their employees.

Meanwhile, huge deals — such as CVS’ $69 billion acquisition of Aetna last year — are also reshaping healthcare, and many expect Amazon and Google, among other tech giants, to make major healthcare moves.

Amid that upheaval, Fabozzi said she thinks Epic understands it is no longer an unrivaled leader in health IT, a position it occupied — along with its chief competitor, Cerner Corp., to some extent — during the meaningful use era when Epic grew explosively, as dozens of big healthcare systems standardized on its EHR platform.

“I think Epic understands that the world is changing very, very dramatically, and the cloistered world they had is gone,” Fabozzi said. “Now, it’s about optimizing these EHR systems and responding to this changing ecosystem that demands more openness and more interoperability.”

On the patient side, Epic said its MyChart patient portal already gives patients of Epic-based healthcare systems the ability to combine health data from different providers as a personal health record that is portable among different providers.

Perhaps coincidentally, Epic recently collaborated with Cerner to help develop Apple’s new personal health record system for the Apple Health app, another new product focused on interoperability.

Epic’s ‘Working Together’

With One Virtual System Worldwide, Epic is expanding data sharing and other options on the provider-facing side for clinicians and other hospital staff.

These fall under the “Working Together” concept, the newest level of the three-tier system that makes up One Virtual System Worldwide.

The first tier, Come Together, gathers data in one place, and the second tier, Happy Together, presents combined health data in an easy-to-read format. Neither is new; both have been included in versions of the Epic EHR for several years.

Epic describes Working Together as new software capabilities that enable healthcare providers to take actions across organizations.

“We’re taking interoperability from being able to ‘view more’ to being able to ‘do more,'” Dave Fuhrmann, vice president of interoperability at Epic, based in Verona, Wis., said in the release. “Over the last decade, we expanded the amount of data that customers can exchange. Now, our new functionality ‘Working Together’ will allow clinicians to work across Epic organizations to improve the care for their patients.” 

New Epic interoperability functions

These One Virtual System Worldwide features, according to the vendor, include the following:

  • Images Everywhere enables Epic users to see medical image thumbnails from other Epic providers, click on an image from the original source and retrieve an image for review.
  • Book Anywhere allows schedulers who refer a patient to another Epic provider to directly book the appointment in the other system.
  • Search Everywhere allows users to search data from other healthcare organizations on Epic and also examine free text, such as in notes and documents.

Clay, the San Diego healthcare system CMIO, noted that physicians at UC San Diego Health routinely coordinate care with nearby providers, such as Rady Children’s Hospital, another Epic user, and now clinicians from both systems can share health data faster and better.

“This will enable us to share information more easily,” he said.

Curb stress from Exchange Server updates with these pointers

Keeping up with patches is essential to the health of messaging systems. In my experience as a consultant, I find that few organizations have a reliable method to execute Exchange Server updates.

This tip outlines the proper procedures for patching Exchange that can prevent some of the upheaval associated with a disruption on the messaging platform.

How often should I patch Exchange?

In a perfect world, administrators would apply patches as soon as Microsoft releases them. This doesn’t happen for a number of reasons.

Microsoft has released patches and updates for both Exchange and Windows Server that have caused trouble on those systems. Many IT departments have long memories, and they let those bad feelings keep them from staying current with Exchange Server updates. This is detrimental to the health of Exchange and should be avoided. With proper planning, updates can and should be run on Exchange Server on a regular schedule.

Another wrinkle in the update process is Microsoft releases Cumulative Updates (CUs) for Exchange Server on a quarterly schedule. CUs are updates that feature functionality enhancements for the application.

With proper planning, updates can and should be run on Exchange Server on a regular schedule.

Microsoft plans to release one CU for Exchange 2013 and 2016 each quarter, but it does not provide a set release date. A CU may arrive on the first day of one quarter and on the last day of the next.

Rollup Updates (RUs) for Exchange 2010 are also released quarterly. An RU is a package that contains multiple security fixes, while a CU is a complete server build.

For Exchange 2013 and 2016, Microsoft supports the current and previous CU. When admins call Microsoft for a support case, the company will ask them to update Exchange Server to at least the N-1 CU — where N is the latest CU, N-1 refers to the previous CU — before they begin work on the issue. An organization that prefers to stay on older CUs limits its support options.
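
To check where a deployment stands relative to the N-1 rule, the build number of each server can be read from the Exchange Management Shell. This is a minimal sketch using the standard Get-ExchangeServer cmdlet; the build shown maps to a specific CU in Microsoft's published build tables.

```powershell
# Minimal sketch: list Exchange servers and their build numbers.
# AdminDisplayVersion corresponds to a specific CU in Microsoft's build tables.
Get-ExchangeServer | Format-Table Name, Edition, AdminDisplayVersion -AutoSize
```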

Because CUs are the full build of Exchange 2013/2016, administrators can deploy a new Exchange Server from the most recent CU. For existing Exchange servers, updating with the latest CU for that version should work without issue.
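
As a rough sketch of what that in-place upgrade looks like, a CU is typically applied from the extracted CU media with the unattended setup switches. Treat the exact switches as an assumption to verify against the documentation for the specific CU being applied.

```powershell
# Sketch only: run from an elevated prompt in the folder where the CU media is extracted.
# Confirm the switches against the documentation for your specific CU.
.\Setup.exe /IAcceptExchangeServerLicenseTerms /Mode:Upgrade
```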

Microsoft only tests a new CU deployment with the last two CUs, but I have never had an issue with an upgrade with multiple missed CUs. The only problems I have seen when a large number of CUs were skipped had to do with the prerequisites for Exchange, not Exchange itself.

Microsoft releases Windows Server patches on the second Tuesday of every month. As many administrators know, some of these updates can affect how Exchange operates. There is no set schedule for other updates, such as .NET. I recommend a quarterly update schedule for Exchange.

How can I curb issues from Exchange Server updates?

Just as every IT department is different, so is every Exchange deployment. There is no single update process that works for every organization, but these guidelines can reduce problems with Exchange Server patching. Even if the company has an established patching process, it is worth reviewing that method if it is missing some of the advice outlined below.

  • Back up Exchange servers before applying patches. This might be common sense for most administrators, but I have found it is often overlooked. If a patch causes a critical failure, a recent backup is the key to the recovery effort. Some might argue that there are Exchange configurations — such as Exchange Preferred Architecture — that do not require this, but a backup provides some reassurance if a patch breaks the system.
  • Measure the performance baseline before an update. How would you know if the CPU cycles on the Exchange Server are too high after an update if this metric hasn’t been tracked? The Managed Availability feature records performance data by default on Exchange 2013 and 2016 servers, but Exchange administrators should review server performance regularly to establish an understanding of normal server behavior.
  • Test patches in a lab that resembles production. When a new Exchange CU arrives, it has been through extensive testing. Microsoft deploys updates to Office 365 long before they are publicly available. After that, Microsoft gives the CUs to its MVP community and select organizations in its testing programs. This vetting process helps catch the vast majority of bugs before CUs go to the public, but some will slip through. To be safe, test patches in a lab that closely mirrors the production environment, with the same servers, firmware and network configuration.
  • Put Exchange Server into maintenance mode before patching. If the Exchange deployment consists of redundant servers, put them in maintenance mode before the update process. Maintenance mode is a feature of Managed Availability that turns off monitoring on those servers during the patching window. A number of PowerShell scripts in the TechNet Gallery help put servers into maintenance mode, which streamlines the application of Exchange Server updates; a sketch of the typical commands appears after this list.
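
On that last point, the sketch below shows the kind of commands the maintenance mode scripts wrap for an Exchange 2013/2016 database availability group member. The server names EX01 and EX02.contoso.com are placeholders, and production scripts add verification steps around each command.

```powershell
# Sketch: place DAG member EX01 into maintenance mode before patching.
# Placeholder names; production scripts verify each step before continuing.
Set-ServerComponentState EX01 -Component HubTransport -State Draining -Requester Maintenance
Redirect-Message -Server EX01 -Target EX02.contoso.com          # move remaining queued messages
Suspend-ClusterNode EX01                                        # pause DAG cluster membership
Set-MailboxServer EX01 -DatabaseCopyActivationDisabledAndMoveNow $true
Set-MailboxServer EX01 -DatabaseCopyAutoActivationPolicy Blocked
Set-ServerComponentState EX01 -Component ServerWideOffline -State Inactive -Requester Maintenance
```

After patching and a reboot, the same settings are reversed to bring the server back into service.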

Rehearsals over, Violin Systems raises curtain on comeback

Violin Systems this week took the next step on its comeback trail with the launch of a monthly cloud subscription that allows customers to use Violin Flash Storage Platform hardware arrays as a billable service.

Violin CEO Ebrahim Abbasi also said the all-flash array vendor is recruiting more than 50 engineers to help complete projects. Those include upgrading arrays with 3D NAND solid-state drives (SSDs) and PCIe Gen 4 networking to support NVMe over Fabrics custom flash modules.

Violin subscription comes with guaranteed fixed price

Violin will continue to sell and support FSP arrays as an outright purchase, but the new on-demand option is intended for enterprises looking to restrain capital expenditures. Violin also offers users the option to consume FSP flash as fractional capacity under a monthly leasing program set up with financial partners.

Abbasi said the subscription underscores an effort to de-emphasize hardware in favor of more robust storage software. It also allows customers to shift storage costs from capital expenditures to an annual Opex model.


“We’ve added a subscription model that allows customers to pay X amount of dollars per month for three years. After the three years are up, you can do a technology refresh [and upgrade] or keep your existing system and continue paying the same amount,” Abbasi said.

The subscription includes Violin’s installation and ongoing health checks. Violin guarantees storage at 1 cent per gigabyte per month, based on 140 TB of flash with FSP 7450 systems and a presumed 4-to-1 data deduplication ratio. That works out to about $250,000 for a three-year subscription. A subscription for an FSP 7650 system, without dedupe, runs about 5 cents per GB.

Lifetime controller upgrades are a new battleground for all-flash array vendors. Violin’s flash upgrade mirrors programs such as Pure Storage’s Evergreen and Kaminario Flex, which enable data centers to receive updated controllers as the vendors make them available.

CEO Abbasi: Violin flash makeover just getting started

Violin Systems is the new corporate name for the all-flash pioneer. Formerly Violin Memory, the vendor had a meteoric rise to the public market in 2013, fueled by sustained demand for high-performance flash storage. But Violin was unable to parlay its engineering work — it owns nearly 60 U.S. patents — into a profitable business, mainly because it was slow to develop a software stack.

The company declared bankruptcy in December 2017 after a fruitless search for a buyer. Violin reemerged in April 2018 after receiving private funding from hedge fund Quantum Partners to reorganize.

“The single biggest reason that Violin started to falter is that it stayed focused only on the high-performance aspect of the market, while other all-flash vendors emerged to ship arrays that featured enterprise-class data services. Violin missed out on the data services part,” said Eric Burgener, a research director of storage at IT analyst firm IDC, based in Framingham, Mass.

Abbasi said those deficiencies have been remedied and he expects Violin Systems to turn a profit by 2019.

We want to move out of the intensive care unit into our own room.
Ebrahim Abbasi, CEO, Violin Systems

“The company has been like a patient in the intensive care unit. That put us behind on technology development. We’re fast-tracking that now by hiring engineers and partnering to borrow engineers from other companies,” Abbasi said. “We want to move out of the intensive care unit into our own room.”

The Violin Systems 2018 product roadmap includes a new array based on 3D NAND SSDs, a flash array with support for block, file and object storage, and cloud tiering. Abbasi said Violin will deliver a proprietary NVMe over Fabrics-based flash array in 2019.

Burgener said the immediate challenge for Violin Systems is to sell new storage gear to its existing installed base before it can woo new enterprise customers. While Violin had few all-flash competitors when it first started, now all major vendors sell flash systems and plan NVMe support.

“There is a lot of synergy among customers that have stayed with Violin and know their technology,” Burgener said. “The installed base of customers needed the highest performance storage they could get. Now, Violin is producing another high-performance system that will leverage NVMe.

“The challenge facing Violin is whether its hardware architecture will produce sufficiently differentiating performance at a cost that people are willing to bear. The opportunity is there with an NVMe-based system, but it’s tough to evaluate how successful they’ll be until we have real-world data points.”

Container infrastructure a silver lining amid Intel CPU flaw fixes

Container infrastructure can help IT pros deploy updates as they fortify their systems against Meltdown and Spectre CPU vulnerabilities.

Sys admins everywhere must patch operating systems to reduce the effects of the recently discovered Intel CPU flaws, which hackers could exploit to access speculative execution data in virtual memory and, potentially, to reach other VMs that share the same host or to gain root access.

However, those who run container infrastructures expect this additional work to have a milder impact than it will on those who must patch VM-based infrastructures, especially manually, to combat Meltdown and Spectre.

“Most of the fixes out so far are kernel patches, and since containers share the kernel, there are fewer kernels to patch,” said Nuno Pereira, CTO of IJet International, a risk management company in Annapolis, Md.

VMware has pledged to issue fixes at the hypervisor level, and cloud providers such as Google and Amazon say they’ve patched their VMs, but it’s wise to patch the kernels, as well, Pereira said.

Security best practices dictate containers run with least-privilege access to the underlying operating system and host. That could limit the blast radius should a hacker use the Meltdown and Spectre vulnerabilities to gain access to a container. But experts emphasize that container infrastructure isn’t guaranteed immunity to the vulnerabilities, as container-level segmentation alone doesn’t fully defend against attacks.

“No one should expect that just a container layer will mitigate the issue,” said Fernando Montenegro, an analyst with 451 Research. “This issue highlights that security assumptions we’ve made in the past have to be revisited.”

Ultimately, Intel and other chipmakers, such as AMD, will have to issue hardware- or firmware-level fixes to eliminate the Meltdown and Spectre vulnerabilities. It’s not clear what those will be yet, but enterprises with container orchestration in place will have a leg up, as they accommodate those widespread changes.

“Most folks running containers have something like [Apache] Mesos or Kubernetes, and that makes it easy to do rolling upgrades on the infrastructure underneath,” said Andy Domeier, director of technology operations at SPS Commerce, a communications network for supply chain and logistics businesses based in Minneapolis. SPS uses Mesos for container orchestration, but it is evaluating Kubernetes, as well.

Containers are often used with immutable infrastructure, which can be stood up and torn down at will, and that makes it an ideal way to handle the infrastructure changes on the way, whether from these specific Intel CPU flaws or from unforeseen future events.

“It really hammers home the case for immutability,” said Carmen DeArdo, technology director responsible for the software delivery pipeline at Nationwide Mutual Insurance Co. in Columbus, Ohio.

Meltdown and Spectre loom over containers
Container infrastructure can help ease the pain of Meltdown and Spectre vulnerabilities.

DevOps performance concerns

No one should expect that just a container layer will mitigate the issue … security assumptions we’ve made in the past have to be revisited.
Fernando Montenegro, analyst, 451 Research

Infrastructure automation will help, but these vulnerabilities arose from CPU technology that drastically improved performance, with more efficient memory caching and pre-fetching. This means patches and infrastructure updates to mitigate security risks can slow down system performance.

PostgreSQL benchmark tests in worst-case-scenario situations show OS patches alone may degrade performance by 17% to 23%. Red Hat put out an advisory to customers stating its patches to the Red Hat Enterprise Linux kernel may reduce performance by 8% to 19% on highly cached random memory.

“For Spectre, my understanding is that you need code changes and/or recompilation of userspace programs themselves to [fully] resolve it, so it is likely to be a long slog,” said Michael Bishop, CTO at Alpha Vertex, a New York-based fintech startup.

No one knows how future hardware fixes will affect CPU performance, which raises concerns for large enterprises that have grown accustomed to quick system builds in a DevOps continuous integration and delivery process. Reports have started to emerge that the performance change will affect the time it takes to compile programs, which is of particular concern to developers who want to make quick, frequent updates to apps.

“I remember when build jobs would run for hours, and we could go back to a developer mindset of, ‘Get things perfect,’ if feedback loops start to take too long,” Nationwide’s DeArdo said. “Eventually, that would impact lead time and productivity.”

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

Reduxio Systems’ storage wows human resources specialist

Reduxio Systems’ storage has gone from curiosity to mainstay at human resources software firm CPP Inc.

The maker of personality-assessment software initially installed Reduxio HX550 hybrid arrays to support standard systems for development, quality assurance and testing. Impressed by the performance, CPP has promoted the Reduxio SAN to handle mission-critical applications and a select number of primary workloads.

The plan is to eventually move most tier-one storage from existing SAN environments to Reduxio to take advantage of its capacity, native data protection and performance scaling, said Mike Johnson, director of global infrastructure and desktop support at CPP, based in Sunnyvale, Calif.

“I’ve always figured there isn’t one storage device that gives you all three of those things, but it’s looking like Reduxio Systems has the potential,” Johnson said.

CPP has two Reduxio HX550 hybrid arrays at its main data center in Sunnyvale and two others at a newly opened facility in the U.K.

Reduxio hybrid flash augments all-flash IBM V9000 primary SAN

The Reduxio HX550 Enterprise Flash Storage hybrid flagship is a dual-controller system housed in a 2U Seagate server chassis. The system accommodates 24 disk drives or SAS-connected SSDs, with enterprise multi-level cell NAND flash SSDs for 40 TB of raw block storage. Effective capacity scales to 150 TB of usable storage with Reduxio NoDup global inline data deduplication.

Reduxio Systems deduplicates data in 8K blocks in a pre-memory buffer. A unique timestamp is applied to each block in the databases. A separate database for metadata includes log data on which blocks received writes and when.

Until 2002, CPP was known as Corporate Psychologists Press Inc. The firm sells human resources software to corporations and career-minded individuals, and it’s best known for its flagship Myers-Briggs Type Indicator-certified assessment.

Over the years, CPP has used storage appliances from Dell EMC, NetApp, Hitachi Vantara and other vendors. CPP still uses an all-flash IBM V9000 SAN to support a Microsoft Dynamics AX enterprise resource planning system and related production systems, as well as a scale-out Coho Data DataStream SAN to increase capacity or performance on the fly.

Although the IBM V9000 is “one of the highest-performing SANs I’ve ever seen,” Johnson said it has limited capacity for all of CPP’s primary storage. The Coho Data storage is “plug-and-play,” but requires the upfront expense of customized Arista network switches.

Compounding the challenge is the demise of Coho Data, which went out of business in September.

Johnson credited a reseller with introducing him to Reduxio Systems. CPP had already purchased the IBM and Coho Data gear by that time, but Johnson was intrigued enough by Reduxio to give it a test run.

“I was willing to put it in as our tier-three storage device, but I didn’t know how it would perform,” he said. “Once we saw the performance was pretty good, we promoted it to our mission-critical workloads.”

Reduxio BackDating aids faster disaster recovery

Johnson’s IT team did further testing and research designed to answer a key question: Could Reduxio storage reliably support CPP’s moneymaking activities? Johnson said he was pleased at Reduxio’s ability to deliver primary storage performance without relying exclusively on flash.

Johnson said he also likes the native data protection in Reduxio’s TimeOS operating system, especially the BackDating that allows recovery to any-point-in-time snapshot. Reduxio Systems recently added NoMigrate replication and NoRestore copy data management.

“We decided our revenue-generating systems could reside on the Reduxio storage device,” Johnson said. “Our plan going forward is to put all our revenue-generating systems on Reduxio and reduce our recovery point objectives and recovery time objectives from hours and days to seconds and minutes.”

Progress Health Cloud tackles healthcare IT integration

The result of hospitals going digital has been the creation of disparate systems and applications, often at the expense of healthcare IT integration and a unified ecosystem in which data can flow seamlessly from one app to another.

Progress Software Corp., a software platform company based in Bedford, Mass., is hoping to fix this problem with its Progress Health Cloud, which was released today. The new enterprise cloud fully integrates front-end, back-end and data connectivity technologies into a serverless and HIPAA-compliant platform for quickly creating apps to drive patient engagement and better healthcare outcomes, Progress said in a press release.

Although Progress Health Cloud can’t solve large-scale patient data interoperability issues, it can ease associated healthcare IT integration problems, Dmitri Tcherevik, CTO at Progress, said in an interview with SearchHealthIT. “I don’t think an IT solution can be used to address a political [problem] that may exist in any particular organization … but we can help resolve it by providing an interoperable system and an integration platform,” he said. “No one has technology obstacles and difficulties as an excuse to integration, given the platform that we offer.”

The goal: A unified healthcare IT integration system

Progress’ Health Cloud comes equipped with HIPAA-compliant, prepackaged healthcare application templates, as well as prepackaged integrations with EHRs, according to the company. Some of the EHR vendors already on board include Epic Systems, Athenahealth, Cerner and GE Healthcare.

No one has technology obstacles and difficulties as an excuse to integration, given the platform that we offer.
Dmitri Tcherevik, CTO at Progress

Progress’ goal is to create a unified, connected health IT ecosystem. “EHR systems are responsible for creation, collection, management and retrieval of health records in a standard format that ensures interoperability with other elements of a healthcare IT solution,” Tcherevik said. “Progress Health Cloud provides a platform for assembling a complete IT solution that includes EHR as one of its elements.”

In addition to integrating with a variety of EHR vendors, Progress’ Health Cloud also enables healthcare IT integration with applications such as patient care management, remote patient monitoring, patient self-service, preventive healthcare and community engagement, Tcherevik said.

“Each such solution requires cross-platform mobile applications, a range of cloud services and connectors to EHR systems,” he said. “Progress Health Cloud provides application templates, cloud services, connectors to EHR systems and other sources of data.”

Health Cloud uses Kinvey serverless technology

All of this runs on the Kinvey serverless cloud platform, which Progress acquired over the summer, and it includes NativeScript components, which make it easy to build cross-platform mobile experiences, the press release said.

“Typically, when a company needs to deploy an application, it also needs to provision cloud services … like authentication, data management and event management. And provisioning those services also requires provisioning servers, machines,” he said. “It can be a virtual machine or a physical machine somewhere in the data center, and that requires a lot of effort … and that may also be expensive.”


With a serverless platform, Tcherevik explained, if health IT developers build an application, they do not have to manage virtual or physical machines or servers. Rather, they only need to write code to form certain functions or services and then deploy that code on Progress’ Health Cloud platform, which will take care of the computing resources and the storage required to run the code, he said.

“There are no servers that developers have to deal with, and because of that … [Progress Health Cloud] simplifies and accelerates the whole process of application development,” Tcherevik said.

Progress has piloted its Health Cloud with organizations such as Boston Scientific, Johnson & Johnson, Athenahealth and BayHealth Development, a University of California San Francisco Health and John Muir Health joint-venture company that is focused on infrastructure development to support the needs of Canopy Health accountable care network.

For Sale – A few office / student PCs

I’ve got a few systems that I got from my brother-in-law after he closed down a business.

I’ve gone over all of them: tested, cleaned and reinstalled Windows fresh, so all are working as they should.

2 x HP Pro 3300 SFF – £90 Each

  • Intel i3 2120 @ 3.30GHz (4 threads)
  • 4GB DDR3 Ram
  • 500GB Hard Drive
  • DVD / CD Burner
  • Genuine W10 Pro (With Serial Sticker)
  • Wireless N PCI-E Network Card

Packard Bell – £70

  • Intel i3 540 @ 3.06GHz (4 threads)
  • 3GB DDR3 Ram
  • 500GB Hard Drive
  • DVD / CD Burner
  • Genuine W10 Home (With Serial Sticker)

HP Compaq DC5800 – £35

  • Intel Core 2 Duo E4600 @ 2.40GHz
  • 2GB DDR2 Ram
  • 250GB Hard Drive
  • DVD / CD Burner
  • Windows 10 Home (Trial – You will require your own activation key)

Compaq CQ2200UK – £25

  • Intel Atom 230 @ 1.60GHz (2 threads)
  • 1GB DDR2 Ram
  • 160GB Hard Drive
  • DVD / CD Burner
  • Windows 10 Home (Trial – You will require your own activation key)
  • Wireless N Network Card

Price and currency: Varies
Delivery: Delivery cost is included within my country
Payment method: BT / PPG
Location: Bradford
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I have no preference


Will PowerShell Core 6 fill in missing features?

Administrators who have embraced PowerShell to automate tasks and manage systems will need to prepare themselves as Microsoft plans to focus its energies on the open source version, called PowerShell Core.

All signs from Microsoft indicate it is heading away from the Windows-only version of PowerShell, which the company said it will continue to support with critical fixes — but no further upgrades. The company plans to release PowerShell Core 6 shortly. Here’s what admins need to know about the transition.

What’s different with PowerShell Core?

PowerShell Core 6 is an open source configuration management and automation tool from Microsoft. As of this article’s publication, the most recent build was a release candidate that Microsoft made available in November. PowerShell Core 6 represents a significant change for administrators because it shifts from a Windows-only platform to one that accommodates heterogeneous IT shops and hybrid cloud networks. Microsoft’s intention is to give administrative teams a single tool to manage Linux, macOS and Windows systems.
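
For admins who want to confirm which edition a given session is running, the built-in $PSVersionTable automatic variable reports it. A minimal check looks like this; the output values shown in the comment are illustrative.

```powershell
# 'Core' indicates PowerShell Core; 'Desktop' indicates Windows PowerShell 5.x.
$PSVersionTable.PSEdition
$PSVersionTable.PSVersion
```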

What features are not in PowerShell Core?

PowerShell Core runs on .NET Core and uses .NET Standard 2.0, a common library specification that helps some current Windows PowerShell modules work in PowerShell Core.

Because .NET Core is a subset of the .NET Framework, PowerShell Core misses out on some useful features in Windows PowerShell. For example, workflow enables admins to execute tasks or retrieve data through a sequence of automated steps; this feature is not in PowerShell Core 6. Similarly, workflow capabilities such as sequencing, checkpointing, resumability and persistence are not available in PowerShell Core.
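
To make that concrete, here is a minimal sketch of a Windows PowerShell 5.1 workflow. The workflow name and the cmdlets inside it are arbitrary examples; the point is that the workflow keyword and its checkpointing are not recognized by PowerShell Core 6.

```powershell
# Windows PowerShell 5.1 only -- the 'workflow' keyword is absent in PowerShell Core 6.
workflow Get-ServiceReport {
    parallel {
        Get-Service -Name BITS        # activities inside 'parallel' can run concurrently
        Get-Process -Name lsass
    }
    Checkpoint-Workflow               # persistence/checkpointing, also unavailable in Core
}
Get-ServiceReport
```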

A few other features missing from PowerShell Core 6 are:

  • Windows Presentation Foundation: This is the group of .NET libraries that enable coders to build UIs for scripts. It offers a common platform for developers and designers to work together with standard tools to create Windows and web interfaces.
  • Windows Forms: In PowerShell 5.0 for Windows, the Windows Forms feature provides a robust platform to build rich client apps with the GUI class library on the .NET Framework. To create a form, the admin loads the System.Windows.Forms assembly, creates a new object of type System.Windows.Forms.Form and calls the ShowDialog method (see the sketch after this list). With PowerShell Core 6, administrators lose this capability.
  • Cmdlets: As of publication, most cmdlets in Windows PowerShell have not been ported to PowerShell Core 6. However, the compatibility with .NET assemblies enables admins to use the existing modules. Users on Linux are limited to modules mostly related to security, management and utility. Admins on that platform can use the PowerShellGet in-box module to install, update and discover PowerShell modules. PowerShell Web Access is not available for non-Windows systems because it requires Internet Information Services, the Windows-based web server functionality.
  • PowerShell remoting: Microsoft has ported Secure Shell (SSH) to Windows, and SSH is already popular in other environments, so SSH-based remoting is likely the best option for remoting tasks in PowerShell Core. Modules such as Hyper-V, Storage, NetTCPIP and DnsClient have not been ported to PowerShell Core 6, but Microsoft plans to add them.
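
To make the Windows Forms point above concrete, here is roughly what that pattern looks like in Windows PowerShell 5.x; it fails on PowerShell Core 6 because the assembly is not part of .NET Core. The form title is a placeholder.

```powershell
# Works in Windows PowerShell 5.x; not supported on PowerShell Core 6.
Add-Type -AssemblyName System.Windows.Forms    # load the Windows Forms assembly
$form = New-Object System.Windows.Forms.Form   # create the form object
$form.Text = 'Hello from Windows PowerShell'   # placeholder title
[void]$form.ShowDialog()                       # display the form modally
```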

Is there a new scripting environment?

For Windows administrators, the PowerShell Integrated Scripting Environment (ISE) is a handy editor that admins use to write, test and debug commands to manage networks. But PowerShell ISE is not included in PowerShell Core 6, so administrators must move to a different integrated development environment.

Microsoft recommends admins use Visual Studio Code (VS Code). VS Code is a cross-platform tool and uses web technologies to provide a rich editing experience across many languages. However, VS Code lacks some of PowerShell ISE’s features, such as PSEdit and remote tabs. PSEdit enables admins to edit files on remote systems without leaving the development environment. Despite VS Code’s limitations, Windows admins should plan to migrate from PowerShell ISE and familiarize themselves with VS Code.

What about Desired State Configuration?

Microsoft offers two versions of Desired State Configuration: Windows PowerShell DSC and DSC for Linux. DSC helps administrators maintain control over software deployments and servers to avoid configuration drift.

Microsoft plans to combine these two options into a single cross-platform version called DSC Core, which will require PowerShell Core and .NET Core. DSC Core is not dependent on Windows Management Framework (WMF) and Windows Management Instrumentation (WMI) and is compatible with Windows PowerShell DSC. It supports resources written in Python, C and C++.
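
As a reminder of what DSC looks like in practice, here is a minimal Windows PowerShell DSC configuration. The configuration name and the feature it installs are arbitrary examples, and the syntax DSC Core ultimately uses may differ once it ships.

```powershell
# Minimal Windows PowerShell DSC sketch: ensure IIS is installed on the local node.
Configuration WebServerBaseline {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

WebServerBaseline -OutputPath C:\Dsc\WebServerBaseline             # compile the configuration to a MOF file
Start-DscConfiguration -Path C:\Dsc\WebServerBaseline -Wait -Verbose
```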

Debugging in DSC has always been troublesome, and ISE eased that process. But with Microsoft phasing out ISE, what should admins do now? A Microsoft blog says the company uses VS Code internally for DSC resource development and plans to release instructional videos that explain how to use the PowerShell extension for DSC resource development.

PowerShell Core 6 is still in its infancy, but Microsoft’s moves show the company will forge ahead with its plan to replace Windows PowerShell. This change brings a significant overhaul to the PowerShell landscape, and IT admins who depend on this automation tool should pay close attention to news related to its development.
