DevOps pros rethink cloud cost with continuous delivery tool

Enterprise DevOps pros can slash cloud resource overallocations with a new tool that shows them how specific app resources are allocated and used in the continuous delivery process.

The tool, Continuous Efficiency (CE), became generally available this week from Harness.io, a continuous delivery (CD) SaaS vendor. It can be used by itself or integrated with the company’s CD software, which enterprises use to automatically deploy and roll back application changes to Kubernetes infrastructure.

In either case, CE correlates cloud cost information with specific applications and underlying microservices without requiring manual tagging, which made it easy for software engineers at beta tester companies to identify idle cloud resources.

“The teams running applications on our platform are distributed, and there are many different teams at our company,” said Jeff Green, CTO at Tyler Technologies, a government information systems software maker headquartered in Plano, Texas. “We have a team that manages the [Kubernetes] cluster and provides guidelines for teams on how to appropriately size workloads, but we did find out using CE that we were overallocating resources.”

In beta tests of CE, Tyler Technologies found that about one-third of its cloud resources were not efficiently utilized — capacity had been allocated and never used, or it was provisioned as part of Kubernetes clusters but never allocated. Developers reduced the number of Kubernetes replicas and CPU and memory allocations after this discovery. Green estimated those adjustments could yield the company some $100,000 in cloud cost savings this year.
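For readers unfamiliar with where such overallocation lives, it typically sits in the Kubernetes manifests themselves, as replica counts and resource requests set higher than observed usage. The following is a minimal, hypothetical sketch of the kind of adjustment described — the names and values are illustrative, not Tyler Technologies’ actual configuration:

```yaml
# Hypothetical deployment fragment: rightsizing after discovering idle capacity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 2                # reduced from an overallocated count, e.g. 4
  template:
    spec:
      containers:
      - name: app
        resources:
          requests:
            cpu: "250m"      # lowered to match observed usage
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
```

Lowering requests lets the Kubernetes scheduler pack more pods onto each node, which is where the cluster-level savings come from.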

Harness Continuous Efficiency dashboard
Harness Continuous Efficiency tool correlates cloud costs to applications, services and Kubernetes clusters.

DevOps puts cloud cost on dev to-do list

Tyler Technologies has used Harness pipelines since 2017 to continuously deploy and automatically roll back greenfield applications that run on Kubernetes clusters in the AWS cloud. The full lifecycle of these applications is managed by developers, who previously didn’t have direct visibility into how their apps used cloud resources, or experience with cloud cost management. CE bridged that gap without requiring developers to manage a separate tool or manually tag resources for tracking.

This has already prompted developers at Tyler Technologies to focus more on cost efficiencies as they plan applications, Green said.

“That wasn’t something they really thought about before,” he said. “Until very recently, we followed a more traditional model where we had dedicated operations people that ran our data centers, and they were the ones that were responsible for optimizing and tuning.”

While developer visibility into apps can be helpful, a tool such as CE doesn’t replace other cloud cost management platforms used by company executives and corporate finance departments.

“It’s good for developers to be cognizant of costs and not feel like they’re being blindsided by impossible mandates from a perspective they don’t understand,” said Charles Betz, analyst at Forrester Research. “But in large enterprises, there will still be dedicated folks managing cloud costs at scale.”

The Harness CD tool deploys delegates, or software agents, to each Kubernetes cluster to carry out and monitor app deployments. CE can use those agents to identify the resources that specific apps and microservices use and compare this information to resource allocations in developers’ Kubernetes manifests, identifying idle and unallocated resources.

If users don’t have the Harness CD tool, CE draws on information from Kubernetes autoscaling data and associates it with specific microservices and applications. In either case, developers don’t have to manually tag resources, which many other cloud cost tools require.

This was a plus for Tyler Technologies, but Betz also expressed concern about the reliability of auto-discovery. 

“There’s no way to map objective tech resources to subjective business concepts without some false negatives or positives that could result in the wrong executive being charged for the wrong workload,” Betz said. “Tagging is a discipline that organizations ultimately can’t really get away from.”

Harness roadmap includes cloud cost guidance

Tyler Technologies plans to add the CE product to Harness when it renews its license this year but hasn’t yet received a specific pricing quote for the tool. Harness officials declined to disclose specific pricing numbers but said that CE will have a tiered model that charges between 1% and 5% of customers’ overall cloud spending, depending on whether the cloud infrastructure is clustered or non-clustered.

“It’s not quite free money — there is a charge for this service,” Green said. “But it will allow us to save costs we wouldn’t even be aware of otherwise.”

Harness plans to add recommendation features to CE in a late July release, which will give developer teams hints about how to improve cloud cost efficiency. In its initial release, developers must correct inefficiencies themselves, which Tyler’s Green said would be easier with recommendations. 

“We use an AWS tool that recommends savings plans and how to revise instances for cost savings,” Green said. “We’d like to see that as part of the Harness tool as well.”

Other Harness users that previewed CE, such as Choice Hotels, have said they’d also like to see the tool add proactive cloud cost analysis, but Green said his team uses CE in staging environments to generate such estimates ahead of production deployments.

Harness plans to add predictive cost estimates based on what resources are provisioned for deployments, a company spokesperson said. The Continuous Efficiency platform already forecasts cloud costs for apps and clusters, and later releases will predict usage based on seasonality and trends.

Go to Original Article

How to Install or Disable Hyper-V in Windows 10

In this article, I will write about a familiar-sounding tool I regularly use to prepare custom images for Azure, among other tasks. Windows 10 comes with the client version of Hyper-V built in, so there is no need to download anything extra! It is the same Hyper-V you use on the server, but without the cluster features. Here’s how to configure Hyper-V for Windows 10.

Operating System Prerequisites

First, let us check the prerequisites.

Windows 10 Licensing

Not every edition of Windows 10 includes Hyper-V. Only the following editions are eligible:

  • Windows 10 Professional
  • Windows 10 Enterprise
  • Windows 10 Education

You can find your installed Windows edition by using PowerShell and the following command.
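The original post references a command without reproducing it; one standard way to check the installed edition (a suggestion on my part, not necessarily the exact command the author used) is:

```powershell
# Shows the product name and edition, e.g. "Windows 10 Pro"
Get-ComputerInfo -Property WindowsProductName, WindowsEditionId
```

If the output shows Professional, Enterprise or Education, you meet the licensing requirement.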

The screenshot below shows the PowerShell output.

Windows 10 licencing, PowerShell

Let us follow up with the hardware prerequisites.

Hardware prerequisites

There are two parts to consider: first the hardware configuration, and second the BIOS/UEFI setup.

Hardware configuration

  • 64-bit Processor with Second Level Address Translation (SLAT).
  • CPU support for VM Monitor Mode Extension (VT-x on Intel CPUs).
  • Minimum of 4 GB memory. As virtual machines share memory with the Hyper-V host, you will need to provide enough memory to handle the expected virtual workload.

The screenshot shows my system as an example.

Windows 10 Basic System Information

BIOS / UEFI Configuration

You need to enable two options in your system BIOS/UEFI.

  • Virtualization Technology – may have a different label depending on the motherboard manufacturer.
  • Hardware Enforced Data Execution Prevention.

You can find these options in the CPU Settings of your system. See the screenshot below as an example.

BIOS UEFI CPU settings Windows 10

How to check the hardware compatibility

To verify hardware compatibility in Windows, open PowerShell and run systeminfo.

Windows 10 PowerShell System Info

If all of the listed requirements in the output show “Yes”, your system is compatible with Hyper-V.
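If you prefer a shorter check than reading the full systeminfo output, recent PowerShell versions surface the same Hyper-V requirement flags as properties — an alternative I find handy, though it is not mentioned in the original post:

```powershell
# Lists HyperVisorPresent and the four Hyper-V requirement checks
Get-ComputerInfo -Property "HyperV*"
```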

How to Install or Disable Hyper-V in Windows 10

How to install Hyper-V in Windows 10

When all hardware and license requirements are met, you can start the installation of Hyper-V in your Windows.

The easiest way is to search for Hyper-V in the Start Menu. It will then point you to the “Turn Windows features on or off” window in the Control Panel.

Hyper-V Installation Windows 10

Within the features dialog, enable the Hyper-V feature together with the Hyper-V Platform and Management Tools.

Hyper-V Platform and Management Tools

Afterward, your system will require a reboot.

Windows 10 reboot

After the reboot, you should be able to open the Hyper-V Manager on your system and start to configure Hyper-V.

Windows 10 start menu, Hyper-V

Hyper-V Manager, Windows 10

That’s all you need to do in order to install Hyper-V on your Windows 10 system.
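For completeness, the same feature can also be enabled from an elevated PowerShell prompt using the built-in DISM cmdlet — an alternative route the post does not cover:

```powershell
# Run as Administrator; enables Hyper-V plus its management tools, then prompts for a reboot
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```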

How to disable Hyper-V in Windows

Disabling Hyper-V is again pretty simple. Go back to the “Turn Windows features on or off” section in the Control Panel.

Turn Windows Features on or off control panel

Remove the checkmark from the Hyper-V checkbox.

Disable Hyper-V Windows 10

Reboot your Windows System and you are done.

Rebooting Windows 10 after removing Hyper-V
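The same result can be achieved from an elevated PowerShell prompt, if you prefer the command line to the Control Panel:

```powershell
# Run as Administrator; removes the Hyper-V feature, then prompts for a reboot
Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
```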

Closing notes

Hyper-V in Windows 10 can be a pretty good tool for certain administrative or daily tasks, e.g.:

  • Spinning up a VM to test certain software
  • Using VMs to open suspicious files
  • Creating an encapsulated work environment on your PC or notebook when you work from home
  • Creating custom images for VDI environments like Citrix or Windows Virtual Desktop
  • Opening backups from VMs and extracting certain files

I hope this blog post will help some of you become familiar with Hyper-V and the management tools. If there is anything you wish to ask, let me know in the comments below and I’ll get back to you!

Author: Florian Klaffenbach

New ‘Thanos’ ransomware weaponizes RIPlace evasion technique

Threat researchers at Recorded Future discovered a new ransomware-as-a-service tool, dubbed “Thanos,” that is the first to utilize the evasion technique known as RIPlace.

Thanos was put on sale as a RaaS tool “with the ability to generate new Thanos ransomware clients based on 43 different configuration options,” according to the report published Wednesday by Recorded Future’s Insikt Group.

Notably, Thanos is the first ransomware family to advertise its optional utilization of RIPlace, a technique introduced through a proof-of-concept (PoC) exploit in November 2019 by security company Nyotron. At its release, RIPlace bypassed most existing ransomware defense mechanisms, including antivirus and EDR products. But despite this, the evasion wasn’t considered a vulnerability because it “had not actually been observed in ransomware at the time of writing,” Recorded Future’s report said.

As reported by BleepingComputer last November, only Kaspersky Lab and Carbon Black modified their software to defend against the technique. But since January, Recorded Future said, “Insikt Group has observed members of dark web and underground forums implementing the RIPlace technique.”

According to its report on RIPlace, Nyotron discovered that file replacement actions using the Rename function in Windows could be abused by calling DefineDosDevice, which is a legacy function that creates a symbolic link or “symlink.”

Thanos RIPlace
Recorded Future shows how the RIPlace proof-of-concept exploit was adopted by a new ransomware-as-a-service tool known as Thanos.

Lindsay Kaye, director of operational outcomes for Recorded Future’s Insikt Group, told SearchSecurity that threat actors can use the MS-DOS device name to replace an original file with an encrypted version of that file without alerting most antivirus programs.

“As part of the file rename, it called a function that is part of the Windows API that creates a symlink from the file to an arbitrary device. When the rename call then happens, the callback using this passed-in device path returns an error; however, the rename of the file succeeds,” Kaye said. “But if the AV detection doesn’t handle the callback correctly, it would miss ransomware using this technique.”

Insikt Group researchers first discovered the new Thanos ransomware family in January on an exploit forum. According to the Recorded Future report, Thanos was developed by a threat actor known as “Nosophoros” and has code and functions that are similar to another ransomware variant known as Hakbit.

While Nyotron’s PoC was eventually weaponized by the Thanos threat actors, Kaye was in favor of the vendor’s decision to publicly release RIPlace last year.

“I think at the time, publicizing it was great in that now antivirus companies can say great, now let’s make sure it’s something we’re detecting because if someone’s saying here’s a new technique, threat actors are going to take advantage of it so now it’s something that’s not going to be found out after people are victimized. It’s out in the open and companies can be aware of it,” Kaye said.

Recorded Future’s report noted that Thanos appears to have gained traction within the threat actor community and will continue to be deployed and weaponized by both individual cybercriminals and collectives through its RaaS affiliate program.

Hiring, firing and cancel culture

Social media has become such a blunt tool that it may force employers to reveal information they would normally keep confidential. That’s what happened in response to the widely publicized confrontation between a dog walker and a birder in New York’s Central Park.

Once the video of the incident between birder Chris Cooper and Amy Cooper (no relation) was published, cancel culture spun into action. The name of Amy Cooper’s employer was broadcast, and Twitter users urged Franklin Templeton, an investment firm, to fire her.

The next day, Franklin Templeton disclosed in a tweet that it had fired Amy Cooper, an unusual action for a firm in the private sector, where terminations are normally kept confidential. Legal experts say, however, that employee privacy with respect to terminations is not protected.

There is another side to online activity for employees and HR managers alike. Social media users leave behind digital records that can be collected and scrutinized by prospective and current employers.

Tech companies see an opportunity and are building tools to help HR departments do just that. The tools can assess public behavior on social media for toxic statements, such as sexist or racist comments. By automating this process, the tools can speed up what recruiters often do manually: help screen job candidates and keep tabs on how existing employees behave online.

With job candidates, “it’s best practice today to do a search on social media,” said Nannina Angioni, a labor and employment attorney and partner at the law firm Kaedian LLP in Los Angeles. “There are so many people who do post things online for public viewing that would go to suitability,” she said. 

Employee privacy and digital footprints

There are technologies emerging that might scrape a person’s digital footprint and “translate that into a quantitative estimate of whether you fit with a certain role or not,” said Tomas Chamorro-Premuzic, a professor of business psychology at Columbia University and chief talent scientist at ManpowerGroup, a staffing and professional services firm in Milwaukee.

Chamorro-Premuzic believes technologies that can assess social media posts of job candidates and employees will rise in use. “There’s a lot of scientific research that suggests it can definitely be done. Whether it should be done is a different issue,” he said.

Some of the AI-enabled tools that Chamorro-Premuzic discussed claim to analyze personality based on a person’s digital persona. For instance, Crystal Project, Inc., in Nashville, Tenn., has a tool called Crystal Knows, which claims to determine personality from LinkedIn profiles. The company offers a Chrome extension and a basic free version, and it says it can be used by recruiters as well as job applicants who are researching a manager they might be reporting to.

Other vendors, such as Fama Technologies Inc., in Los Angeles, analyze social media posts from job candidates and employees for problematic behavior. In a recent blog post, it provided guidance to firms on how to roll this technology out as part of a “workplace toxicity reduction effort.” Bullying and harassment are examples of toxic behavior that can be disruptive in the workplace. The behavior may be evident in social media postings of a job candidate or employee.

Researching social media posts is already “more prevalent than people realize,” said David Lewis, president and CEO of OperationsInc, an HR consulting firm in Norwalk, Conn. Headhunters are likely to conduct this kind of social media research, he said. 

Job applicants won’t learn the role social media played if they don’t get an offer. “There is no public disclosure about this in any way, shape or form,” Lewis said. 

Some firms may not review social media posts and will stick to job interviews and other evaluation methods, Lewis said. But he believes interviews may not reveal enough about a job applicant. 

Employers probe social media

If something is in the public domain, such as an applicant’s Twitter feed, “Don’t you owe it to yourself and your company to look at it, to get more of an understanding of that person?” Lewis said. 

There are risks for employers that examine social media accounts. 

“The risk arises where you’ve got somebody who has a public social media profile and they are disclosing private things,” such as a medical diagnosis, sexuality or religion, Angioni said.

“Candidates have protected rights just the same as employees,” Angioni said. Those rights include not being discriminated against, she said.

But employee privacy can be tenuous, something illustrated by Franklin Templeton. Terminations of employees are, by practice, private, but legal experts say there’s nothing to stop firms from announcing a firing except the risk of litigation. 

“Generally speaking, there are no laws that say an employer specifically and directly can’t disclose circumstances around an individual’s employment separation,” said Rebecca Baker, a labor and employment attorney at Bracewell LLP in Houston. But they don’t do so because of a risk of a legal claim, such as defamation, she said. 

Another legal claim could be around tortious interference, or intentional interference with someone’s ability to get future work, said Jessica Post, employment and labor practice group leader at Fennemore Craig P.C. in Phoenix.

The video and Amy Cooper’s apology could make any claims against the firm difficult, Post said. Nonetheless, the firm took a risk in making the firing public, she said.

Legal experts see Franklin Templeton’s move as calculated, taking into consideration the legal risk versus the potential backlash from clients and employees. The investment firm didn’t respond to a request for comment. 

At a Bloomberg virtual conference last week, Jenny Johnson, president and CEO at Franklin Resources Inc., was asked how its employees and clients responded to the firing decision. 

“I would say the overwhelming response was supportive and there was a segment that felt it was unfair — but that was a small minority segment,” Johnson said. “And we have to make those decisions based on our core values. We’ve always said we have zero tolerance for any kind of racism, and so we felt that it was important to make that decision.”

“The best practice when you’re an employer and you’re terminating somebody is, first of all, not to disclose it publicly at all,” said David Kurtz, an employment attorney at Constangy, Brooks, Smith & Prophete LLP in Boston. And even within the workplace, disclosing the reasons for a termination should be limited to senior management or those with a real need to know, he said.

New SoftIron management tool targets Ceph storage complexity

Startup SoftIron released a new HyperDrive Storage Manager tool that aims to make open source Ceph software-defined storage, and the hardware it runs on, easier to use.

London-based SoftIron designs, builds and assembles dedicated HyperDrive appliances for Ceph software-defined storage at its manufacturing facility in Newark, Calif. Now SoftIron has developed a tool to assist system administrators in managing the software and hardware in their Ceph storage clusters.

“We’re integrating it in the way that you would normally only see in a proprietary vendor’s storage,” said Andrew Moloney, SoftIron’s vice president of strategy.

Moloney said the new HyperDrive Storage Manager could automatically discover and deploy new nodes without the user having to resort to the command-line interface. If a drive goes down, the graphical user interface can pinpoint the physical location, and users can see a flashing light next to the drive in the appliance. HyperDrive Storage Manager also can lock out multiple administrators to prevent conflicting commands, Moloney said.

“Many of those things have not been addressed and can’t be addressed if you’re not looking at the hardware and the software as one entity,” Moloney said.

Enrico Signoretti, a research analyst at GigaOm, said one of the biggest problems with Ceph is complexity. The optimized SoftIron software/hardware stack and improved graphical user interface should help to lower the barrier for enterprises to adopt Ceph, Signoretti said.

SoftIron HyperDrive
SoftIron’s HyperDrive Storage Manager tool aims to ease the management of open source Ceph storage.

SoftIron started shipping its ARM-based HyperDrive appliances for Ceph about a year ago. Appliances are available in all-flash, all-disk and hybrid configurations. The most popular model is the 120 TB HyperDrive Density Storage Node with spinning disks and solid-state drives, according to Moloney. He said the average deployment is about 1 PB.

SoftIron has about 20 customers using HyperDrive in areas such as high-performance computing, analytics and data-intensive research projects. Customers include the University of Minnesota’s Supercomputing Institute, the University of Kentucky, national laboratories, government departments, and financial service firms, Moloney said.

SoftIron’s competition includes Ambedded Technology, a Taiwanese company that also makes an ARM-based Ceph Storage Appliance, as well as Red Hat and SUSE, which both offer supported versions of Ceph and tested third-party server hardware options.

Dennis Hahn, principal storage analyst at Omdia, said Red Hat and SUSE tend to focus on enterprise and traditional data centers, and SoftIron could find opportunities with smaller data centers and edge deployments for use cases such as retail, healthcare and industrial automation, with sensors gathering data.

Hahn said customers often look for lower-cost storage in edge use cases, and SoftIron’s HyperDrive appliances could play well there, with their AMD ARM processors, which generally cost less than Intel’s x86 chips.

Moloney said that Ceph can be “quite hardware sensitive” for anyone trying to get the best performance out of it. Citing an example, he said that SoftIron found it could optimize I/O and dramatically improve performance with an ARM64 processor by directly attaching all 14 storage drives. Moloney said that SoftIron also saw that using an SSD just for journaling and spinning media for storage could boost performance at the “right price point.”

Those who assume that software-defined data center technologies — whether storage, network or compute — can run great on “any kind of vanilla hardware” will be disappointed, Moloney said.

“In reality, there are big sacrifices that you make when you decide to do that, especially in open source, when you think about performance and efficiency and scalability,” Moloney said. “Our mission and our vision is about redefining that software-defined data center. The way we believe to do that is to run open source on what we call task-specific appliances.”

In addition to HyperDrive storage, SoftIron plans to release a top-of-rack HyperSwitch, based on the open source SONiC network operating system, and a HyperCast transcoding appliance, using open source FFmpeg software for audio and video processing, within the next three months. Moloney said SoftIron is now “hitting the gas” and moving into an expansion phase since receiving $34 million in Series B funding in March, when he joined the company.

IBM’s Watson AIOps aims to help networks run smoothly

Entering a crowded marketplace, IBM launched Watson AIOps, a tool aimed at helping CIOs automatically keep their IT networks running smoothly.

The new product, released at the tech giant’s IBM Think 2020 conference, held virtually, comes as much of America’s workforce is working remotely as the COVID-19 pandemic has forced organizations across the country to close offices. As enterprises move more, and in some cases, entirely, online, many are finding that network stability is even more important now than before the public health crisis.

Managing networks

“Watson AIOps addresses the need for IT departments to solve problems remotely and drive more automation into inefficient processes,” said Nick McQuire, senior vice president and head of AI and enterprise research at CCS Insight.

The AIOps system enables enterprises to add automation at the infrastructure level to detect, diagnose and fix IT anomalies. It aims to help CIOs build and manage more responsive, intelligent and longer-lasting networks, IBM said.

“We want to arm every CIO in the world to use AI to predict problems before they happen, fix problems before they happen,” and quickly address the problems that do arise, said Rob Thomas, senior vice president of IBM Cloud and Data Platform, during a conference call with media.

“The CIO needs a powerful AI … helping to run the operation,” he continued.

Watson AIOps, which comes out of the IBM Research division, is built on the latest release of Red Hat OpenShift, enabling it to run across hybrid cloud environments. Through a variety of vendor partnerships, the product can work in concert with workplace tools, including Slack and Box, as well as with other providers, including online chat service vendor Mattermost and ServiceNow.

The right time

For McQuire, Watson AIOps, IBM’s first significant move into the AIOps market, comes at a time when enterprises are struggling with the economic impact of the coronavirus, and are looking for new technologies and practical strategies to survive.

IBM also appears to be basing its strategy with the AIOps technology on that worldview. Arvind Krishna, IBM’s new CEO, said Tuesday during a live-streamed keynote from the virtual conference that the pandemic is an opportunity to develop new solutions, partnerships and ways of working.

The pandemic has shown the need for technological change and will ultimately accelerate enterprises’ digital transformations and AI adoption, Krishna said.

“I strongly believe that AI can play a critical role in assisting clients during this uncertain time,” he said.

A crowded market

Still, despite the decisive move by IBM, several of IBM’s key competitors already have similar tools to automate business processes, McQuire noted.

Vendors such as Cisco, Moogsoft and Splunk, for example, have long provided advanced AIOps capabilities. At the same time, major cloud vendors, including Microsoft and AWS, offer AIOps tools on their platforms or maintain close partnerships with AIOps vendors.

Watson AIOps “will undoubtedly raise the temperature of the AI wars between the major cloud vendors as technologies like process automation become a key battleground,” McQuire said.

More AI tools

Timed with the release of Watson AIOps, IBM also revealed Accelerator for Application Modernization with AI, a new capability within IBM’s Cloud Modernization service designed to reduce the effort and costs associated with application modernization.

The new product, through several AI-powered tools, can plot the best path for application optimization, help make applications cloud-ready, and automatically understand legacy code to recommend microservices that can shorten the time to modernize older applications.

The company also revealed updates to a few of its AI and automation platforms.

IBM Cloud Pak for Data 3.0 added several extensions to the data and AI platform, including IBM Planning Analytics, an automated planning, budgeting and forecasting tool.

Meanwhile, a recent update to IBM Cloud Pak for Automation, a platform for designing, building and running automation applications, simplifies the building of automated digital workers. With the update, users can more easily develop and manage these digital workers, as well as quickly assign simple jobs to them, such as invoicing.

Alteryx 2020.1 highlighted by new data profiling tool

Holistic Data Profiling, a new tool designed to give business users a complete view of their data while in the process of developing workflows, highlighted the general availability of Alteryx 2020.1 on Thursday.

Alteryx, founded in 1997 and based in Irvine, Calif., is an analytics and data management specialist, and Alteryx 2020.1 is the vendor’s first platform update in 2020. It released its most recent update, Alteryx 2019.4, in December 2019, featuring a new integration with Tableau.

The vendor revealed the platform update in a blog post; in addition to Holistic Data Profiling, it includes 10 new features and upgrades. Among them is a new language-toggling feature in Alteryx Designer, the vendor’s data preparation product.

“The other big highlights are more workflow efficiency features,” said Ashley Kramer, Alteryx’s senior vice president of product management. “And the fact that Designer now ships with eight languages that can quickly be toggled without a reinstall is huge for our international customers.”

Holistic Data Profiling is a low-code/no-code feature that gives business users an instantaneous view of their data to help them better understand their information during the data preparation process — without having to consult a data scientist.

After dragging a Browse Tool — Alteryx’s means of displaying data from a connected tool as well as data profile information, maps, reporting snippets and behavior analysis information — onto Alteryx’s canvas, Holistic Data Profiling provides an immediate overview of the data.

Holistic Data Profiling aims to help business users understand data quality and how various columns of data may be related to one another, spot trends, and compare one data profile to another as they curate their data.

A sample Holistic Data Profiling gif from Alteryx gives an overview of an organization’s data.

Users can zoom in on a certain column of data to gain deeper understanding, with Holistic Data Profiling providing profile charts and statistics about the data such as the type, quality, size and number of records.

That knowledge will subsequently inform how to proceed to the next move in order to ultimately make a data-driven decision.


“It’s easy to get tunnel vision when analyzing data,” said Mike Leone, analyst at Enterprise Strategy Group. “Holistic Data Profiling enables end users — via low-code/no-code tooling — to quickly gain a comprehensive understanding of the current data estate. The exciting part, in my opinion, is the speed at which end users can potentially ramp up an analytics project.”

Similarly, Kramer noted the importance of being able to more fully understand data before the final stage of analysis.

“It is really important for our customers to see and understand the landscape of their data and how it is changing every step of the way in the analytic process,” she said.

Alteryx customers were previously able to view their data at any point — on a column-by-column or defined multi-column basis — but not to get a complete view, Kramer added.

“Experiencing a 360-degree view of your data with Holistic Data Profiling is a brand-new feature,” she said.

In addition to Holistic Data Profiling, the new language toggle is perhaps the other signature feature of the Alteryx platform update.

Using Alteryx Designer, customers can now switch between eight languages to collaborate using their preferred language.

Alteryx previously supported multiple languages, but for users to work in their preferred language, each individual user had to install Designer in that language. With the updated version of Designer, they can click on a new globe icon in their menu bar and select the language of their choice to do analysis.

“To truly enable enterprise-wide collaboration, breaking down language barriers is essential,” Leone said. “And with Alteryx serving customers in 80 different countries, adding robust language support further cements Alteryx as a continued leader in the data management space.”

Among the other new features and upgrades included in Alteryx 2020.1 are a new Power BI on-premises loader that will give users information about Power BI reports and automatically load those details into their data catalog in Alteryx Connect; the ability to input selected rows and columns from an Excel spreadsheet; and a new Virtual Folder in Alteryx Connect for saving custom queries.

Meanwhile, a streamlined loader of big data from Alteryx to the Snowflake cloud data warehouse is now in beta testing.

“This release and every 2020 release will have a balance of improving our platform … and fast-forwarding more innovation baked in to help propel their efforts to build a culture of analytics,” Kramer said.

Go to Original Article

Why move to PowerShell 7 from Windows PowerShell?

PowerShell’s evolution has taken it from a Windows-only tool to a cross-platform, open source project that runs on Mac and Linux systems with the release of PowerShell Core. Next on tap, Microsoft is unifying PowerShell Core and Windows PowerShell with the long-term supported release called PowerShell 7, due out sometime in February. What are the advantages and disadvantages of adopting the next generation of PowerShell in your environment?

New features spring from .NET Core

Rebuilt nearly from the ground up, PowerShell Core is a departure from Windows PowerShell, with many new features, architectural changes and improvements that push the language forward.

Open source PowerShell runs on a foundation of .NET Core 2.x in PowerShell 6.x and .NET Core 3.1 in PowerShell 7. The .NET Core framework is also cross-platform, which enables PowerShell Core to run on most operating systems. The shift to the .NET Core framework brings several important changes, including:

  • increases in execution speed;
  • Windows desktop application support using Windows Presentation Foundation and Windows Forms;
  • TLS 1.3 support and other cryptographic enhancements; and
  • API improvements.

PowerShell Core delivers performance improvements

As the .NET Core changes suggest, execution speed is much improved, and each new release of PowerShell Core brings further improvements to both core language features and built-in cmdlets.

A test of the Group-Object cmdlet shows less time is needed to execute the task as you move from Windows PowerShell to the newer PowerShell Core versions.

With a simple Group-Object test, you can see how much quicker each successive release of PowerShell Core has become. With a nearly 73% speed improvement from Windows PowerShell 5.1 to PowerShell Core 6.1, running complex code gets easier and completes faster.

Another speed test with the Sort-Object cmdlet shows a similar improvement with each successive release of PowerShell.

Similar to the Group-Object test, the Sort-Object test shows nearly a doubling of execution speed between Windows PowerShell 5.1 and PowerShell Core 6.1. With sorting used so often in so many applications, running your daily workload on PowerShell Core means getting more done in less time.
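The article’s underlying benchmark isn’t published, but a rough version of the same comparison can be reproduced with Measure-Command; running this sketch under powershell.exe and again under pwsh shows the gap on your own hardware:

```powershell
# Illustrative timing sketch -- not the article's actual benchmark.
# Build a sample data set, then time Group-Object and Sort-Object over it.
$data = 1..100000 | ForEach-Object { Get-Random -Maximum 1000 }

$groupTime = Measure-Command { $data | Group-Object }
$sortTime  = Measure-Command { $data | Sort-Object }

"Group-Object: {0:N0} ms" -f $groupTime.TotalMilliseconds
"Sort-Object:  {0:N0} ms" -f $sortTime.TotalMilliseconds
```

Absolute numbers will vary by machine; it is the ratio between the two engines that matters.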

Gaps in cmdlet compatibility addressed

The PowerShell team began shipping the Windows Compatibility Pack for .NET Core starting in PowerShell Core 6.1. With this added functionality, the biggest reason for holding back from greater adoption of PowerShell Core is no longer valid. The ability to run many cmdlets that previously were only available to Windows PowerShell means that most scripts and functions can now run seamlessly in either environment.

PowerShell 7 will further close the gap by incorporating the functionality of the current Windows Compatibility Module directly into the core engine.
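On Windows, PowerShell 7 surfaces this compatibility layer through an -UseWindowsPowerShell switch on Import-Module; the module below is just one example of a Windows PowerShell-only module:

```powershell
# PowerShell 7 on Windows: load a Windows PowerShell-only module through the
# built-in compatibility layer. The module actually runs in a background
# Windows PowerShell 5.1 process and is exposed locally as a proxy module.
Import-Module ServerManager -UseWindowsPowerShell
Get-WindowsFeature -Name Web-Server   # call is proxied into Windows PowerShell 5.1
```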

New features arrive in PowerShell 7

There are almost too many new features to list in PowerShell 7, but some of the highlights include:

  • SSH-based PowerShell remoting;
  • an & at the end of a pipeline automatically creates a PowerShell job running in the background;
  • many improvements to web cmdlets such as link header pagination, SSLProtocol support, multipart support and new authentication methods;
  • PowerShell Core can use paths more than 260 characters long;
  • markdown cmdlets;
  • experimental feature flags;
  • SecureString support for non-Windows systems; and
  • many quality-of-life improvements to existing cmdlets with new features and fixes.
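Two of these are easy to try. Appending & to a pipeline turns it into a standard background job, and the new markdown cmdlets render markdown straight in the console:

```powershell
# Start a pipeline as a background job just by appending '&':
Get-ChildItem -Recurse -ErrorAction SilentlyContinue | Where-Object Length -gt 1MB &

# '&' returns a normal job object, so the existing job cmdlets apply:
Get-Job | Receive-Job -Wait -AutoRemoveJob

# Render a markdown file in the terminal with the new markdown cmdlets:
Show-Markdown -Path ./README.md
```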

Side-by-side installation reduces risk

A great feature of PowerShell Core, and one that makes adopting the new shell that much easier, is the ability to install the application side-by-side with the current built-in Windows PowerShell. Installing PowerShell Core will not remove Windows PowerShell from your system.

Instead of invoking PowerShell with the powershell.exe command, you use pwsh.exe — or simply pwsh on Linux. In this way, you can test your scripts and functions incrementally before moving everything over en masse.
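A quick way to confirm which engine you are in, from either shell:

```powershell
# Works in both engines; each reports its own version and edition:
$PSVersionTable.PSVersion   # 5.1.x under powershell.exe, 6.x/7.x under pwsh
$PSVersionTable.PSEdition   # 'Desktop' in Windows PowerShell, 'Core' in PowerShell Core/7
```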

This separation also allows quicker updates. Decoupled from the Windows release cycle and its patch updates, PowerShell Core can be released and updated regularly rather than waiting on a Windows update.

Disadvantages of PowerShell Core

One of the biggest drawbacks to PowerShell Core is losing the ability to run all the cmdlets that worked in Windows PowerShell. Some functionality still can’t be fully replicated, but the number of cmdlets that won’t run is shrinking rapidly with each release. This may delay some organizations’ move to PowerShell Core, but in the end, there won’t be a compelling reason to stay on Windows PowerShell, given the increasing cmdlet support coming to PowerShell 7 and beyond.

Getting started with the future of PowerShell

PowerShell Core is released for a wide variety of platforms, Windows and Linux alike. Easily installed MSI packages are available for Windows, while Linux builds are distributed through a variety of package formats and repositories.

Simply starting the shell using pwsh will let you run PowerShell Core without disrupting your current environment. Even better is the ability to install a preview version of the next iteration of PowerShell and run pwsh-preview to test it out before it becomes generally available.


New Amazon Kendra AI search tool indexes enterprise data

LAS VEGAS — Amazon Kendra, a new AI-driven search tool from the tech giant, is designed to enable organizations to automatically index business data, making it easily searchable using keywords and context.

Revealed during a keynote by AWS CEO Andy Jassy at the re:Invent 2019 user conference here, Kendra relies on machine learning and natural language processing (NLP) to bring enhanced search capabilities to on-premises and cloud-based business data. The system is in preview.

“Kendra is enterprise search technology,” said Forrester analyst Mike Gualtieri. “But, unlike enterprise search technology of the past, it uses ML [machine learning] to understand the intent of questions and return more relevant results.”

Cognitive search

Forrester, he said, calls this type of technology “cognitive search.” Recent leaders in that market, according to a Forrester Wave report Gualtieri helped write, include intelligent search providers Coveo, Attivio, IBM, Lucidworks, Mindbreeze and Sinequa. Microsoft was also ranked highly in the report, which came out in May 2019. AWS is a new entrant in the niche.

“Search is often an area customers list as being broken especially across multiple data stores whether they be databases, office applications or SaaS,” said Nick McQuire, vice president at advisory firm CCS Insight.


While vendors such as IBM and Microsoft have similar products, “the fact that AWS is now among the first of the big tech firms to step into this area illustrates the scale of the challenge” to bring a tool like this to market, he said.

During his keynote, Jassy touted the intelligent search capabilities of Amazon Kendra, asserting that the technology will “totally change the value of the data” that enterprises have.

Setup of Kendra appears straightforward. Organizations start by linking their storage accounts and supplying answers to some of the questions employees frequently ask of their data. Kendra then indexes all the provided data and answers, using machine learning and NLP to understand the data’s context.

Understanding context

“We’re not just indexing the keywords inside the document here,” Jassy said.

AWS CEO Andy Jassy announced Kendra at AWS re:Invent 2019

Meanwhile, Kendra is “an interesting move especially since AWS doesn’t really have a range of SaaS application which generate a corpus of information that AI can improve for search,” McQuire said.

“But,” he continued, “this is part of a longer-term strategy where AWS has been focusing on specific business and industry applications for its AI.”

Jassy also unveiled new features for Amazon Connect, AWS’ omnichannel cloud contact center platform. With the launch of Contact Lens for Amazon Connect, users will be able to perform machine learning analytics on their customer contact center data. The platform will also enable users to automatically transcribe phone calls and intelligently search through them.

By mid-2020, Jassy said, Amazon Kendra will support real-time transcription and analysis of phone calls.


Accenture cloud tool aims to shorten decision cycle

Accenture has rolled out a tool that the company said will help customers navigate complex cloud computing options and let them simulate deployments before committing to an architecture.

The IT services firm will offer the tool, called myNav, as part of a larger consulting agreement with its customers. The myNav process starts with a discovery phase, which scans the customer’s existing infrastructure and recommends a cloud deployment approach, whether private, public, hybrid or multi-cloud. Accenture’s AI engine then churns through the company’s repository of previous cloud projects to recommend a specific enterprise architecture and cloud offering. Next, the Accenture cloud tool simulates the recommended design, allowing the client to determine its suitability.

“There’s an over-abundance of choice when the client chooses to … take applications, data and infrastructure into the cloud,” said Kishore Durg, Accenture’s cloud lead and growth and strategy lead for technology services. “The choices cause them to ponder, ‘What is the right choice?’ This [tool] will help increase their confidence in going to the cloud.”

Accenture isn’t unique among consultancies in marketing services to aid customers’ cloud adoption. But industry watchers pointed to myNav’s simulation feature as a point of differentiation.

There are many companies that offer cloud service discovery, assessment and design services for a fee, said Stephen Elliot, an analyst with IDC. “But I don’t know of any other firm that will run a simulation,” he added.

Yugal Joshi, a vice president with Everest Group, cited myNav’s cloud architecture simulator as an intriguing feature. “Going forward, I expect it to further cover custom bespoke applications in addition to COTS [commercial off-the-shelf] platforms,” he said.

Joshi, who leads Everest Group’s digital, cloud and application services research practices, said most mature IT service providers have developed some type of platform to ease clients’ journey to the cloud. “The difference lies in the vision behind the IP, the quality of the IP, articulation and the business value it can provide to clients,” he noted.

Accenture cloud simulation’s potential benefits

Elliot said myNav’s simulation is interesting because it could help customers understand the outcome of a project in advance and whether that outcome will meet their expectations.


This could help Accenture close deals faster while fostering more productive conversations with IT buyers, Elliot said. “In any case, customers will have to trust that the underlying information and models are correct, and that the outcomes in the solution can be trusted,” he said.

Customers, meanwhile, could benefit from faster cloud rollouts.

“Where Accenture myNav is focusing is leveraging the expertise Accenture has gathered over many cloud engagements,” Joshi said. “This can potentially shorten the decision-making, business-casing and the eventual cloud migration for clients.”

Customers can get to the results faster, rather than spend weeks or, potentially, months in assessment and roadmap exercises, he said. Whether the Accenture cloud platform delivers the anticipated results, however, will only become evident when successful client adoption case studies are available, he cautioned.

Durg said cloud assessments can take eight to 12 weeks, depending on the scale of the project. The migration phase could span two months and require 80 or more people. The simulation aspect of myNav, he noted, lets clients visualize the deployment “before a single person is put on a project.”

Help wanted

Accenture’s myNav tool arrives at a time when the cloud has matured — the public cloud is more than a decade old — but not completely. The multiplicity of cloud technologies introduces uncertainty and sparks enterprise conversations about skill sets and adoption approaches.

“Despite cloud being around for quite some time now, it is still not a done deal,” Joshi said. “Clients need lot of hand-holding and comfort before they can migrate to, and then leverage, cloud as an operating platform [rather] than an alternative hosting model.”

Elliot added, “The market is at a point where every cloud deployment is almost a snowflake. It’s the organizational, skills and process discussions that slow projects down.”
