Red Hat OpenShift Container Storage seeks to simplify Ceph

The first Red Hat OpenShift Container Storage release to use multiprotocol Ceph rather than the Gluster file system to store application data became generally available this week. The upgrade comes months after the original late-summer target date set by open source specialist Red Hat.

Red Hat — now owned by IBM — took extra time to incorporate feedback from OpenShift Container Storage (OCS) beta customers, according to Sudhir Prasad, director of product management in the company’s storage and hyper-converged business unit.

The new OCS 4.2 release includes Rook Operator-driven installation, configuration and management so developers won’t need special skills to use and manage storage services for Kubernetes-based containerized applications. They indicate the capacity they need, and OCS will provision the available storage for them, Prasad said.

Multi-cloud support

OCS 4.2 also includes multi-cloud support, through the integration of NooBaa gateway technology that Red Hat acquired in late 2018. NooBaa facilitates dynamic provisioning of object storage and gives developers consistent S3 API access regardless of the underlying infrastructure.

Prasad said applications become portable and can run anywhere, and NooBaa abstracts the storage, whether AWS S3 or any other S3-compatible cloud or on-premises object store. OCS 4.2 users can move data between cloud and on-premises systems without having to manually change configuration files, a Red Hat spokesman added.

Customers buy OCS to use with the Red Hat OpenShift Container Platform (OCP), and they can now manage and monitor the storage through the OCP console. Kubernetes-based OCP has more than 1,300 customers, and historically, about 40% to 50% attached to OpenShift Container Storage, a Red Hat spokesman said. OCS had about 400 customers in May 2019, at the time of the Red Hat Summit, according to Prasad.

One critical change for Red Hat OpenShift Container Storage customers is the switch from file-based Gluster to multiprotocol Ceph to better target data-intensive workloads such as artificial intelligence, machine learning and analytics. Prasad said Red Hat wanted to give customers a more complete platform with block, file and object storage that can scale higher than the product’s prior OpenStack S3 option. OCS 4.2 can support 5,000 persistent volumes and will support 10,000 in the upcoming 4.3 release, according to Prasad.

Migration is not simple

Although OCS 4 may offer important advantages, the migration will not be a trivial one for current customers. Red Hat provides a Cluster Application Migration tool to help them move applications and data from OCP 3/OCS 3 to OCP 4/OCS 4 at the same time. Users may need to buy new hardware, unless they can first reduce the number of nodes in their OpenShift cluster and use the nodes they free up, Prasad confirmed.

“It’s not that simple. I’ll be upfront,” Prasad said, commenting on the data migration and shift from Gluster-based OCS to Ceph-backed OCS. “You are moving from OCP 3 to OCP 4 also at the same time. It is work. There is no in-place migration.”

One reason that Red Hat put so much emphasis on usability in OCS 4.2 was to abstract away the complexity of Ceph. Prasad said Red Hat got feedback about Ceph being “kind of complicated,” so the engineering team focused on simplifying storage through the operator-driven installation, configuration and management.

“We wanted to get into that mode, just like on the cloud, where you can go and double-click on any service,” Prasad said. “That took longer than you would have expected. That was the major challenge for us.”

OpenShift Container Storage roadmap

The original OpenShift Container Storage 4.x roadmap that Red Hat laid out last May at its annual customer conference called for a beta release in June or July, OCS 4.2 general availability in August or September, and a 4.3 update in December 2019 or January 2020. Prasad said February is the new target for the OCS 4.3 release.

The OpenShift Container Platform 4.3 update became available this week, with new security capabilities such as Federal Information Processing Standard (FIPS)-compliant encryption. Red Hat eventually plans to return to its prior practice of synchronizing new OCP and OCS releases, said Irshad Raihan, the company’s director of storage product marketing.

The Red Hat OpenShift Container Storage 4.3 software will focus on giving customers greater flexibility, such as the ability to choose the type of disk they want, and additional hooks to optimize the storage. Prasad said Red Hat might need to push its previously announced bare-metal deployment support from OCS 4.3 to OCS 4.4.

OCS 4.2 supports converged-mode operation, with compute and storage running on the same node or in the same cluster. The future independent mode will let OpenShift use any storage backend that supports the Container Storage Interface. OCS software would facilitate access to the storage, whether it’s bare-metal servers, legacy systems or public cloud options.

Alternatives to Red Hat OpenShift Container Storage include software from startups Portworx, StorageOS, and MayaData, according to Henry Baltazar, storage research director at 451 Research. He said many traditional storage vendors have added container plugins to support Kubernetes. The public cloud could appeal to organizations that don’t want to buy and manage on-premises systems, Baltazar added.

Baltazar advised Red Hat customers moving from Gluster-based OCS to Ceph-based OCS to keep a backup copy of their data to restore in the event of a problem, as they would with any migration. He said any users who are moving a large data set to public cloud storage need to factor in network bandwidth and migration time, and consider egress charges if they need to bring the data back from the cloud.

Employing data science, new research uncovers clues behind unexplainable infant death – Microsoft on the Issues

Imagine losing your child in their first year of life and having no idea what caused it. This is the heartbreaking reality for thousands of families each year who lose a child to Sudden Unexpected Infant Death (SUID). Despite decades-long efforts to prevent SUID, it remains the leading cause of death for children between one month and one year of age in developed nations. In the U.S. alone, 3,600 children die unexpectedly of SUID each year.

For years, researchers hypothesized that infants who died due to SUID in the earliest stages of life differed from those dying of SUID later. Now, thanks to the single largest study ever undertaken on the subject, we know for the first time that this is statistically the case.

Working in collaboration with Tatiana Anderson and Jan-Marino Ramirez at Seattle Children’s Research Institute and Edwin Mitchell at the University of Auckland, we analyzed Centers for Disease Control and Prevention (CDC) data on every child born in the U.S. over a decade, covering more than 41 million births and 37,000 SUID deaths. We compared all possible groups by age at the time of death to understand whether these populations were different.

In our study published today in Pediatrics, a leading pediatric journal, we found that SUID deaths during the first week of life were statistically different from all other SUID deaths that occur between the first week and first year of life. SUID cases in the first week of life have been called SUEND, which stands for Sudden Unexpected Early Neonatal Death. We refer to SUID deaths between 7 and 364 days of age as postperinatal SUID.

The two groups – SUEND and postperinatal SUID – differed by several factors such as birth order, maternal age and marital status. For postperinatal deaths, the risk of SUID increased progressively with birth order, while the opposite was true for SUEND deaths, where firstborn children were more at risk. Postperinatal SUID rates were higher for unmarried, young mothers (15 to 24 years old at the time of birth), while unmarried, young mothers of the same age showed a decreased risk of SUEND death. The two groups also had different distributions of birthweight and pregnancy length.

Our study concluded that SUID deaths in the first week differed from postperinatal SUID deaths and that the two groups should be considered separately in future research. Considering these two as different causes may help uncover independent underlying physiological mechanisms and/or genetic factors.

This research is part of Microsoft’s AI for Good initiative, a $125 million, five-year program in which we use AI to help tackle some of the world’s greatest challenges and help some of the world’s most vulnerable populations. For this research, we leveraged our machine learning and cloud-computing capabilities, along with advanced modeling techniques powered by AI, to analyze the data.

By pairing our capabilities and data scientists with Seattle Children’s medical research expertise, we’re continuing to make progress on identifying the cause of SUID. Earlier this year, we published a study that estimated approximately 22% of SUID deaths in the U.S. were attributable to maternal cigarette-smoking during pregnancy, giving us further evidence that, through our collaboration with experts in varying disciplines, we are getting to the root of this problem and making remarkable advances.

We hope our progress in piecing together the SUID puzzle ultimately saves lives, and gives parents and researchers hope for the future.

Turning the next generation into everyday superheroes thanks to Hour of Code 2019 – Microsoft News Centre Europe

When you think of coding, your first thoughts might be about highly specialized technical know-how. But did you know that effective coding requires skills like creativity, innovation and collaboration too – all of which will be hugely important for the workforce of tomorrow?

According to Microsoft research with McKinsey, the fastest growing occupations, such as technology professionals and healthcare providers, will require a combination of digital and cognitive skills such as digital literacy, problem solving and critical thinking. Young people having access to learning tools to improve both these sets of skills is crucial – a fact non-profit organizations like JA Europe recognize through their work to get young people ready for the future of work. If young people are given the opportunity to develop their digital skills, the European Labor Market will see significant benefits when they move into the workforce. According to a LinkedIn Economic Graph report, AI Talent in the European Labour Market, training and upskilling ‘near-AI’ talent could double the size of the current AI workforce in the EU. It also found that AI skills are concentrated in a small number of countries and that this must be addressed to reduce the digital skills gap in Europe.

In conjunction with Computer Science Education Week which began yesterday and extends to December 15, Microsoft continues its multi-year commitment to Hour of Code, a global movement that introduces students to computer science and demystifies what coding is all about. Activities are running across Europe to fuel imagination and demonstrate how these skills could be used to solve some of the world’s biggest problems. As such, code has the power to turn anyone into an everyday superhero.

To bring this to life, Microsoft is inviting young people to ‘save the day’ through Computer Science. Created in partnership with MakeCode, a new Minecraft tutorial combines code, Artificial Intelligence and problem solving skills. It is inspired by various Microsoft AI for Earth projects and encourages students to use their critical thinking skills to plot where forest fires could happen, put plans in place to stop them with AI and ultimately save the Minecraft village!

Since 2012, Microsoft has helped more than 137,000 young people and educators in Europe through Hour of Code events and programs. And, as the end of the decade draws near, we are keen to support even more people to get into coding and show how it can change the world. If you’re looking to help your children or students become coding superheroes, we have developed two training guides – one for students and one aimed at educators – no cape needed!

Go forth and code!

For Sale – [Birmingham/M6 J7] 2019 Apple MacBook Pro, 13.3″, Space Grey, 128GB SSD/8GB RAM – 1 month old – PRICE DROP

FOR SALE: Apple MacBook Pro 13.3″, Space Grey, 128GB SSD / 8GB RAM

This is my first sale on AV Forums although I have been a member for many years (last actively posting in 2012-ish). I understand that there is a high element of trust involved so I will do what I can (within reason) to provide confidence to the buyer. More than happy to provide as much identification, photographs/videos of the laptop etc as the buyer requires.

Reason for sale is to revert back to a Windows-based laptop (for half the price) due to personal preference. I love Apple and have had many MacBook Pros and Airs in the past, sadly my partner doesn’t feel the same way!

Bought from Currys PC World in Wednesbury on 8 November 2019 (just over a month ago). This is the base specification model 13-inch MacBook Pro which at the time of writing is currently being sold for £1299 on the Apple UK website. It has been in regular use since purchase, mostly for watching Netflix, and the battery has just over 33 cycles. It is in excellent condition (please see photographs), handled with care at all times and has been stored in a smoke-free, pet-free home.

Specification:

  • Apple MacBook Pro (2019), 13.3″ LED-backlit Retina IPS display (2560×1600)
  • Space Grey
  • 1.4GHz quad-core Intel Core i5 (Turbo boost up to 3.9GHz)
  • 8GB 2133MHz LPDDR3 memory
  • 128GB SSD
  • Intel Iris Plus Graphics 645
  • Touch Bar (with Touch ID fingerprint scan)

Comes in the original box with the USB-C mains power cable. Includes the balance of the Apple manufacturer warranty until November 7, 2020, as well as Apple telephone support until February 6, 2020. Battery cycle count at time of writing is 33 – this may increase slightly as the laptop is still in use.

Price:
£1,020
£995

NOW £950

Payment:
Cash on collection preferred; cash is mandatory if collecting in person.
Alternatively, I will accept bank transfer if you would like me to post it to you via courier delivery.
I do NOT accept PayPal or any other payment method. Please no offers of trade or part exchange.

Courier delivery:
Either you or I can organise courier delivery, with the cost of this added to the agreed sale price. Courier will be insured, tracked and will require a signature upon delivery. If I organise courier, I will provide the tracking information as soon as possible to you. For my security, I will record a video of the laptop working, being placed into its box, packaged and sealed. I will include the original proof of purchase, as well as a receipt from myself to the buyer (in the form of a typed letter) which will include my signature.

Collection:
If collecting, cash payment is mandatory and once sale is agreed must be collected within 24 hours unless agreed otherwise. Collection will be from Aldridge (WS9 postcode), which is about 15 minutes north of Birmingham, and close to J7 of the M6. Collection location is covered by CCTV for both buyer/seller safety and is next to Aldridge Police Station. Buyer obviously welcome to inspect and test laptop.

SAP sees S/4HANA migration as its future, but do customers?

The first part of our 20-year SAP retrospective examined the company’s emerging dominance in the ERP market and its transition to the HANA in-memory database. Part two looks at the release of SAP S/4HANA in February 2015. The “next-generation ERP” was touted by the company as the key to SAP’s future, but it ultimately raised questions that in many cases have yet to be answered. The S/4HANA migration remains the most compelling initiative shaping the company’s future.

Questions about SAP’s future have altered in the past year, as the company has undergone an almost complete changeover in its leadership ranks. Most of the SAP executives who drove the strategy around S/4HANA and the intelligent enterprise have left the company, including former CEO Bill McDermott. New co-CEOs Jennifer Morgan and Christian Klein are SAP veterans, and analysts don’t think the change in leadership will make for significant changes in the company’s technology and business strategy.

But they will take over the most daunting task SAP has faced: convincing customers of the business value of the intelligent enterprise, a data-driven transformation of businesses with S/4HANA serving as the digital core. As part of the transition toward intelligence, SAP is pushing customers to move off of tried and true SAP ECC ERP systems (or the even older SAP R/3), and onto the modern “next-generation ERP” S/4HANA. SAP plans to end support for ECC by 2025.

S/4HANA is all about enabling businesses to make decisions in real time as data becomes available, said Dan Lahl, SAP vice president of product marketing and a 24-year SAP veteran.

“That’s really what S/4HANA is about,” Lahl said. “You want to analyze the data that’s in your system today. Not yesterday’s or last week’s information and data that leads you to make decisions that don’t even matter anymore, because the data’s a week out. It’s about giving customers the ability to make better decisions at their fingertips.”

S/4HANA migration a matter of when, not if

Most SAP customers see the value of an S/4HANA migration, but they are concerned about how to get there, with many citing concerns about the cost and complexity of the move. This is a conundrum that SAP acknowledges.

“We see that our customers aren’t grappling with if [they are going to move], but when,” said Lloyd Adams, managing director of the East Region at SAP America. “One of our responsibilities, then, is to provide that clarity and demonstrate the value of S/4HANA, but to do so in the context of the customers’ business and their industry. Just as important as showing them how to move, we need to do it as simply as possible, which can be a challenge.”

S/4HANA is the right platform for the intelligent enterprise because of the way it can handle all the data that the intelligent enterprise requires, said Derek Oats, CEO of Americas at SNP, an SAP partner based in Heidelberg, Germany that provides migration services.

In order to build the intelligent enterprise, customers need to have a platform that can consume data from a variety of systems — including enterprise applications, IoT sensors and other sources — and ready it for analytics, AI and machine learning, according to Oats. S/4HANA uses SAP HANA, a columnar, in-memory database, to do that and then presents the data in an easy-to-navigate Fiori user interface, he said.

“If you don’t have that ability to push out of the way a lot of the work and the crunching that has often occurred down to the base level, you’re kind of at a standstill,” he said. “You can only get so much out of a relational database because you have to rely on the CPU at the application layer to do a lot of the crunching.”

S/4HANA business case difficult to make

Although many SAP customers understand the benefits of S/4HANA, SAP has had a tough sell in getting its migration message across to its large customer base. The majority of customers plan to remain on SAP ECC and have only vague plans for an S/4HANA migration.

“The potential for S/4HANA hasn’t been realized to the degree that SAP would like,” said Joshua Greenbaum, principal at Enterprise Applications Consulting. “More companies are really looking at S/4HANA as the driver of genuine business change, and recognize that this is what it’s supposed to be for. But when you ask them, ‘What’s your business case for upgrading to S/4HANA?’ The answer is ‘2025.’”

The real issue with S/4HANA is that the concepts behind it are relatively big and very specific to company, line of business and geography.
Joshua Greenbaum, Principal, Enterprise Applications Consulting

One of the problems that SAP faces when convincing customers of the value of S/4HANA and the intelligent enterprise is that no simple use case drives the point home, Greenbaum said. Twenty years ago, Y2K provided an easy-to-understand reason why companies needed to overhaul their enterprise business systems, and the fear that computers wouldn’t adapt to the year 2000 led in large measure to SAP’s early growth.

“Digital transformation is a complicated problem and the real issue with S/4HANA is that the concepts behind it are relatively big and very specific to company, line of business and geography,” he said. “So the use cases are much harder to justify, or it’s much more complicated to justify than, ‘Everything is going to blow up on January 1, 2000, so we have to get our software upgraded.'”

Evolving competition faces S/4HANA

Jon Reed, analyst and co-founder of ERP news and analysis firm Diginomica.com, agrees that SAP has successfully embraced the general concept of the intelligent enterprise with S/4HANA, but struggles to present understandable use cases.

“The question of S/4HANA adoption remains central to SAP’s future prospects, but SAP customers are still trying to understand the business case,” Reed said. “That’s because agile, customer-facing projects get the attention these days, not multi-year tech platform modernizations. For those SAP customers that embrace a total transformation — and want to use SAP tech to do it — S/4HANA looks like a viable go-to product.”

SAP’s issues with driving S/4HANA adoption may not come from the traditional enterprise competitors like Oracle, Microsoft and Infor, but from cloud-based business applications like Salesforce and Workday, said Eric Kimberling, president of Third Stage Consulting, a Denver-based firm that provides advice on ERP deployments and implementations.

“They aren’t direct competitors with SAP; they don’t have the breadth of functionality and the scale that SAP does, but they have really good functionality in their best-of-breed world,” Kimberling said. “Companies like Workday and Salesforce make it easier to add a little piece of something without having to worry about a big SAP project, so there’s an indirect competition with S/4HANA.”

SAP customers are going to have to adapt to evolving enterprise business conditions regardless of whether or when they move to S/4HANA, Greenbaum said.

“Companies have to build business processes to drive the new business models. Whatever platform they settle on, they’re going to be unable to stand still,” he said. “There’s going to have to be this movement in the customer base. The question is will they build primarily on top of S/4HANA? Will they use an Amazon or an Azure hyperscaler as the platform for innovation? Will they go to their CRM or workforce automation tool for that? The ‘where’ and ‘what next’ is complicated, but certainly a lot of companies are positioning themselves to use S/4HANA for that.”

The First Wave of Xbox Black Friday Deals Has Arrived: Discounts on Sea of Thieves and Select Xbox Wireless Controllers – Xbox Wire

The holidays will be here before you know it, and to kick off the start of November, we are unveiling the first wave of Xbox Black Friday discounts. This is just a sample of our entire Black Friday deals – tune in via Mixer for a special episode of Inside Xbox live from X019 in London on Thursday, November 14 at 12:00 p.m. PT for the full lineup of Xbox Black Friday discounts and offers. You won’t want to miss out!

First up, we are offering a 50% discount on Sea of Thieves: Anniversary Edition, the fastest-selling first-party new IP of this generation. Join this multiplayer, shared-world adventure game featuring new modes like the story driven Tall Tales or The Arena, a competitive multiplayer experience on the high seas. Xbox Live Gold is required to play Sea of Thieves: Anniversary Edition and is sold separately.

Fans can also save up to $20 on select Xbox Wireless Controllers, including some of the newest controllers in the Xbox collection. Snag the Night Ops Camo Special Edition, Sport Blue Special Edition, Gears 5 Kait Diaz Limited Edition controllers and many more at the lowest prices of the season.

Deals are valid starting on November 24 and run through December 2, 2019. Plus, Black Friday kicks off even earlier for Xbox Game Pass Ultimate and Xbox Live Gold members, with Early Access beginning on November 21.

Visit Xbox.com, Microsoft Store and participating retailers globally for more details on availability and pricing as deals will vary between regions and retailers. See here for more Black Friday deals from Microsoft Store.

Xbox has something for everyone on your gift list this year, and at every price point. Be sure to tune in to Inside Xbox at X019 on Thursday, November 14 at 12:00 p.m. PT for the full lineup of Xbox Black Friday deals.

HPE Cray ClusterStor E1000 arrays tackle converged workloads

Supercomputer maker Cray has pumped out revamped high-density ClusterStor storage, its first significant product advance since being acquired by Hewlett Packard Enterprise.

The new Cray ClusterStor E1000 launched this week, six months after HPE’s $1.3 billion acquisition of Cray in May. Engineering of the E1000 began before the HPE acquisition.

Data centers can mix the dense Cray ClusterStor E1000 all-flash and disk arrays to build ultrafast “exascale” storage clusters that converge processing for AI, modeling and simulation and similar data sets, said Ulrich Plechschmidt, Cray lead manager.

The E1000 arrays run a hardened version of the Lustre open source parallel file system. The all-flash E1000 provides 4.5 TB of raw storage per SSD rack, with expansion shelves that add up to 4.6 TB. The all-flash model potentially delivers up to 1.6 TB of throughput per second and 50 million IOPS per SSD rack, while an HDD rack is rated at 120 Gbps and 10 PB of raw capacity.

When fully built out, Plechschmidt said ClusterStor can scale to 700 PB of usable capacity in a single system, with throughput up to 10 TB per second.

Cray software stack

Cray ClusterStor disk arrays pool flash and disk within the same file system. ClusterStor E1000 includes Cray-designed PCIe 4.0 storage servers that serve data from NVMe SSDs and spinning disk. Cray’s new Slingshot 200 Gbps interconnect top-of-rack switches manage storage traffic.

The most impressive work Cray did is on the software side. You might have to stage data in 20 different containers at the same time, each one outfitted differently. … That’s a very difficult orchestration process.
Steve Conway, COO and senior research vice president, Hyperion Research

Newly introduced ClusterStor Data Services manage orchestration and data tiering, which initially will be available as scripted tiering for manually invoking Lustre software commands. Automated data movement and read-back/write-through caching are on HPE’s Cray roadmap.

While ClusterStor E1000 hardware has massive density and low-latency throughput, Cray invested significantly in upgrading its software stack, said Steve Conway, COO and senior research vice president at Hyperion Research, based in St. Paul, Minn.

“To me, the most impressive work Cray did is on the software side. You might have to stage data in 20 different containers at the same time, each one outfitted differently. And you have to supply the right data at the right time and might have to solve the whole problem in milliseconds. That’s a very difficult orchestration process,” Conway said.

The ClusterStor odyssey

HPE is the latest in a string of vendors to take ownership of ClusterStor. Seagate Technology acquired original ClusterStor developer Xyratex in 2013, then in 2017 sold ClusterStor to Cray, which had been a Seagate OEM partner.

Cray ClusterStor E1000: HPE-owned Cray released new all-flash and disk ClusterStor arrays for AI and containerized workloads.

HPE leads the high-performance computing (HPC) market in overall revenue, but it has not had a strong presence in the high end of the supercomputing market. Buying Cray allows HPE to sell more storage for exascale computing, which represents a thousandfold increase over petascale computing power. These high-powered exascale systems are priced beyond the budgets of most commercial enterprises.

Cray’s Shasta architecture underpins three large supercomputing sites at federal research labs: Argonne National Laboratory in Lemont, Ill.; Lawrence Livermore National Laboratory in Livermore, Calif.; and Oak Ridge National Laboratory in Oak Ridge, Tenn.

Cray last year won a $146 million federal contract to architect a new supercomputer at the National Energy Research Scientific Computing Center, a U.S. Department of Energy facility at Lawrence Berkeley National Laboratory. That system will use Cray ClusterStor storage.

Conway said Cray and other HPC competitors are under pressure to expand to address newer abstraction methods for processing data, including AI, container storage and microservices architecture.

“You used to think of supercomputers as a single-purpose steak knife. Now they have to be a multipurpose Swiss Army knife. The newest generation of supercomputers are all about containerization and orchestration of data on premises,” Conway said. “They have to be much more heterogeneous in what they do, and the storage has to follow suit.”

How to manage Server Core with PowerShell

After you first install Windows Server 2019 and reboot, you might find something unexpected: a command prompt.

While you’re sure you didn’t select the Server Core option, Microsoft now makes it the default Windows Server OS deployment for its smaller attack surface and lower system requirements. While you might remember DOS commands, those are only going to get you so far. To deploy and manage Server Core, you need to build your familiarity with PowerShell to operate this headless flavor of Windows Server.

To help you on your way, you will want to build your knowledge of PowerShell and might start with the PowerShell integrated scripting environment (ISE). PowerShell ISE offers a wealth of features for the novice PowerShell user, from auto-completion of commands to context-colored syntax that steps you through the scripting process. The problem is that PowerShell ISE requires a GUI, or the “full” Windows Server. To manage Server Core, you have the command window and PowerShell in its raw form.

Start with the PowerShell basics

To start, type in powershell to get into the environment, denoted by the PS before the C: prompt. A few basic DOS commands will work, but PowerShell is a different language. Before you can add features and roles, you need to set your IP and domain. It can be done in PowerShell, but this is laborious and requires a fair amount of typing. Instead, we can take a shortcut and use sconfig to complete the setup. After that, we can use PowerShell for additional administrative work.
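
For the curious, the native PowerShell route looks roughly like the following. This is a minimal sketch with assumed values (an adapter named Ethernet, example addresses and a hypothetical computer name and domain), so adjust everything to your environment:

# Assign a static IP address and DNS server to the adapter named Ethernet
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.10

# Rename the server, join it to the domain and reboot
Add-Computer -DomainName "corp.example.com" -NewName "CORE01" -Restart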

PowerShell uses a verb-noun format, called cmdlets, for its commands, such as Install-WindowsFeature or Get-Help. The verbs come from a predefined set and are generally clear about their function. Some examples of PowerShell verbs are:

  • Install: Use this PowerShell verb to install software or some resource to a location or initialize an install process. This would typically be done to install a Windows feature such as Dynamic Host Configuration Protocol (DHCP).
  • Set: This verb modifies existing settings in Windows resources, such as adjusting networking or other existing settings. It also works to create the resource if it did not already exist.
  • Add: Use this verb to add a resource or setting to an existing feature or role. For example, this could be used to add a scope onto the newly installed DHCP service.
  • Get: This is a resource retriever for data or contents of a resource. You could use Get to present the resolution of the display and then use Set to change it.

To install DHCP to a Server Core deployment with PowerShell, use the following commands.

Install the service:

Install-WindowsFeature -Name DHCP -IncludeManagementTools

Add a scope for DHCP:

Add-DhcpServerV4Scope -Name "Office" -StartingRange 192.168.1.100 -EndRange 192.168.1.200 -SubnetMask 255.255.255.0

Set the lease time:

Set-DhcpServerv4Scope -ScopeId 192.168.1.0 -LeaseDuration 1.00:00:00

Check the DHCP IPv4 scope:

Get-DhcpServerv4Scope
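
On a domain-joined server, the DHCP service also needs to be authorized in Active Directory before it will hand out leases. That step is not part of the walkthrough above, but a sketch with placeholder values for the server name and address would be:

Add-DhcpServerInDC -DnsName "core01.corp.example.com" -IPAddress 192.168.1.50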

Additional pointers for PowerShell newcomers

Each command has a specific purpose, which means you have to know the syntax, and that is the hardest part of learning PowerShell. Not knowing what you’re looking for can be very frustrating, but there is help. The Get-Help cmdlet displays the related commands for use with that function or role.

Part of the trouble for new PowerShell users is that it can still be overwhelming to memorize all the commands, but there is a shortcut. As you start to type a command, the tab key auto-completes PowerShell commands. For example, if you type Get-Help R and press the tab key, PowerShell will cycle through matching commands, such as Remove-DhcpServerInDC (see Figure 1). When you find the command you want and hit enter, PowerShell presents additional information for using that command. Get-Help even supports wildcards, so you could type Get-Help *dhcp* to get results for commands that contain that phrase.

Figure 1. Use the Get-Help command to see the syntax used with a particular PowerShell cmdlet.
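
A few discovery commands of this sort, using the DHCP role from the earlier example, might look like the following (the DhcpServer module only shows up once the role and its management tools are installed):

# List help topics that mention DHCP
Get-Help *dhcp*
# List every cmdlet in the DhcpServer module
Get-Command -Module DhcpServer
# Show worked examples for a single cmdlet
Get-Help Add-DhcpServerv4Scope -Examples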

The tab function in PowerShell is a savior. While this approach is a little clumsy, it is a valuable asset in a pinch due to the sheer number of commands to remember. For example, a base install of Windows 10 includes Windows PowerShell 5.1, which features more than 1,500 cmdlets. As you install additional PowerShell modules, you make more cmdlets available.
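
You can get a rough sense of that scale from PowerShell itself; for example:

# Count the cmdlets currently available in this session
(Get-Command -CommandType Cmdlet).Count
# List the installed modules, each of which can add more cmdlets
Get-Module -ListAvailable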

There are many PowerShell books, but do you really need them? There are extensive libraries of PowerShell code that are free to manipulate and use. Even walking through a Microsoft wizard gives the option to create the PowerShell code for the wizard you just ran. As you learn where to find PowerShell code, writing a script becomes less about starting from scratch and more about modifying existing code. You don’t have to be an expert; you just need to know how to manipulate the proper fields and areas.

Outside of typos, the biggest stumbling block for most beginners is not reading the screen. PowerShell does a mixed job with its error messages. The type is red when something doesn’t work, and PowerShell will give the line and character where the error occurred.

In the example in Figure 2, PowerShell threw an error due to the extra letter s at the end of the command Get-WindowsFeature. The system didn’t recognize the command, so it tagged the entire command rather than the individual letter, which can be frustrating for beginners.

Figure 2. When working with PowerShell on the command line, you don’t get precise locations of where an error occurred if you have a typo in a cmdlet name.

The key is to review your code closely, then review it again. If the command doesn’t work, you have to fix it to move forward. It helps to stop and take a deep breath, then slowly reread the code. Copying and pasting a script from the web isn’t foolproof and can introduce an error. With some time and patience, and some fundamental PowerShell knowledge of the commands, you can get moving with it a lot quicker than you might have thought.

AI at the core of next-generation BI

Next-generation BI is upon us, and has been for a few years now.

The first generation of business intelligence, beginning in the 1980s and extending through the turn of the 21st century, relied entirely on information technology experts. It was about business reporting, and was inaccessible to all but a very few with specialized skills.

The second introduced self-service analytics, and lasted until just a few years ago. The technology was accessible to data analysts, and defined by data visualization, data preparation and data discovery.

Next-generation BI — the third generation — is characterized by augmented intelligence, machine learning and natural language processing. It’s open to everyday business users, and trust and transparency are important aspects. It’s also changing the direction in which analytics looks, becoming more predictive.

In September, Constellation Research released “Augmented Analytics: How Smart Features Are Changing Business Intelligence.” The report, authored by analyst Doug Henschen, took a deep look at next-generation BI.

Henschen reflected on some of his findings about the third generation of business analytics for a two-part Q&A.

In Part I, Henschen addressed what marked the beginning of this new era and who stands to benefit most from augmented BI capabilities. In Part II, he looked at which vendors are positioned to succeed and where next-generation business intelligence is headed next.

In your report you peg 2015 as the beginning of next generation BI — what features were you seeing in analytics platforms at that time that signaled a new era?

Doug Henschen: There was a lot percolating at the time, but I don’t think that it’s about a specific technology coming out in 2015. That’s an approximation of when augmented analytics really became something that was ensconced as a buying criteria. That’s I think approximately when we shifted — the previous decade was really when self-service became really important and the majority of deployments were driving toward it, and I pegged 2015 as the approximate time at which augmented started getting on everyone’s radar.

Beyond the technology itself, what were some things that happened in the market around the time of 2015 that showed things were changing?

Henschen: There were lots of technology things that led up to that — Watson playing Jeopardy was in 2011, SAP acquired KXEN in 2013, IBM introduced Watson Analytics in 2014. Some startups like ThoughtSpot and BeyondCore came in during the middle of the decade, Salesforce introduced Einstein in 2016 and ended up acquiring BeyondCore in 2016. A lot of stuff was percolating in the decade, and 2015 is about when it became about, ‘OK, we want augmented analytics on our list. We want to see these features coming up on roadmaps.’

What are you seeing now that has advanced next-generation BI beyond what was available in 2015?

Anything that is proactive, that provides recommendations, that helps automate work that was tedious, that surfaces insights that humans would have a tough time recognizing but that machines can recognize — that’s helpful to everybody.
Doug Henschen, Analyst, Constellation Research

Henschen: In the report I dive into four areas — data preparation, data discovery and analysis, natural language interfaces and interaction, and forecasting and prediction — and in every category you’ve seen certain capabilities become commonplace, while other capabilities have been emerging and are on the bleeding edge. In data prep, everyone can pretty much do auto data profiling, but recommended or suggested data sources and joins are a little bit less common. Guided approaches that walk you through how to cleanse this, how to format this, where and how to join — that’s a little bit more advanced and not everybody does it.

Similarly, in the other categories, recommended data visualization is pretty common in discovery and analysis, but intent-driven recommendations that track what individuals are doing and make recommendations based on patterns among people are more on the bleeding edge. It applies in every category. There’s stuff that is now widely done by most products, and stuff that is more bleeding edge where some companies are innovating and leading.

Who benefits from next-generation BI that didn’t benefit in previous generations — what types of users?

Henschen: I think these features will benefit all. Anything that is proactive, that provides recommendations, that helps automate work that was tedious, that surfaces insights that humans would have a tough time recognizing but that machines can recognize — that’s helpful to everybody. It has long been an ambition in BI and analytics to spread this capability to the many, to the business users, as well as the analysts who have long served the business users, and this extends the trend of self-service to more users, but it also saves time and supports even the more sophisticated users.

Obviously, larger companies have teams of data analysts and data engineers and have more people of that sort — they have data scientists. Midsize companies don’t have as many of those assets, so I think [augmented capabilities] stand to be more beneficial to midsize companies. Things like recommended visualizations and starting points for data exploration, those are very helpful when you don’t have an expert on hand and a team at your disposal to develop a dashboard to address a problem or look at the impact of something on sales. I think [augmented capabilities] are going to benefit all, but midsize companies and those with fewer people and resources stand to benefit more.  

You referred to medium-sized businesses, but what about small businesses?

Henschen: In the BI and analytics world there are products that are geared to reporting and helping companies at scale. The desktop products are more popular with small companies — Tableau, Microsoft Power BI, Tibco Spotfire are some that have desktop options, and small companies are turning also to SaaS options. We focus on enterprise analytics — midsize companies and up — and I think enterprise software vendors are focused that way, but there are definitely cloud services, SaaS vendors and desktop options. Salesforce has some good small business options. Augmented capabilities are coming into those tools as well.

Editor’s note: This interview has been edited for clarity and conciseness.
