With COVID-19 continuing to impact people and countries around the world, teams everywhere are moving to remote work. Earlier this week, I posted a letter from Lily Zheng, our colleague in Shanghai, detailing her team’s experience using Microsoft Teams to work from home during the outbreak. Lily’s team is one of many. Here at Microsoft in the Puget Sound, we’re encouraging our teams to work from home as much as possible, as are many organizations in this region. And we expect this trend to continue across the world. At Microsoft, our top priority is the health and safety of employees, customers, partners, and communities. By making Teams available to as many people as possible, we aim to support public health and safety by keeping teams connected while they work apart.
As we have read through your responses to Lily’s letter, it has become clear that there are two big questions on your minds. First, how can people access the free Teams offerings that Lily referenced? Second, what is our plan for avoiding service interruptions during times of increased usage? Below, you’ll find detailed answers to both. And over the next few days we’ll be sharing more tips, updates, and information related to remote work here. So check back often.
Teams is a part of Office 365. If your organization is licensed for Office 365, you already have it. But we want to make sure everyone has access to it during this time. Here are some simple ways to get Teams right away.
If you want to get started with Teams, we can get you up and running right away.
The self-service links above work great for individuals, but if you’re an IT professional who wants to roll out Teams centrally, here’s what to do.
You and your team depend on our tools to stay connected and get work done. We take that responsibility seriously, and we have a plan in place to make sure services stay up and running during impactful events like this. Our business continuity plan anticipates three types of impacts to the core aspects of the service:
We’ve recently tested service continuity during a usage spike in China. Since January 31, we’ve seen a 500 percent increase in Teams meetings, calling, and conferences there, and a 200 percent increase in Teams usage on mobile devices. Despite this surge, the service has remained reliable throughout the outbreak. Our approach to delivering a highly available and resilient service centers on the following principles.
Active/Active design: In Microsoft 365, we are driving towards having all services architected and operated in an active/active design which increases resiliency. This means that there are always multiple instances of a service running that can respond to user requests and that they are hosted in geographically dispersed datacenters. All user traffic comes in through the Microsoft Front Door service and is automatically routed to the optimally located instance of the service and around any service failures to prevent or reduce impact to our customers.
Reduce incident scope: We seek to avoid incidents in the first place, but when they do happen, we strive to limit the scope of all incidents by having multiple instances of each service partitioned off from each other. In addition, we’re continuously driving improvements in monitoring through automation, enabling faster incident detection and response.
Fault isolation: Just as the services are designed and operated in an active/active fashion and are partitioned off from each other to prevent a failure in one from affecting another, the code base of the service is developed using similar partitioning principles called fault isolation. Fault isolation measures are incremental protections made within the code base itself. These measures help prevent an issue in one area from cascading into other areas of operation. You can read more about how we do this, along with all the details of our service continuity plan, in this document.
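The active/active routing and fault-isolation ideas above can be sketched in a few lines of Python. This is an invented illustration of the pattern, not Microsoft's implementation; the class, region, and request names are all hypothetical.

```python
# Toy sketch of active/active routing: multiple geographically dispersed
# instances serve traffic, and requests are routed around failed instances
# so one failure does not take down the service.

class ServiceInstance:
    def __init__(self, region):
        self.region = region
        self.healthy = True

    def handle(self, request):
        if not self.healthy:
            raise RuntimeError(f"instance in {self.region} is down")
        return f"handled {request!r} in {self.region}"

def route(request, instances):
    """Route to the first healthy instance, skipping failed partitions."""
    for instance in instances:
        if instance.healthy:
            return instance.handle(request)
    raise RuntimeError("no healthy instances available")

instances = [ServiceInstance("us-west"), ServiceInstance("us-east")]
instances[0].healthy = False                 # simulate a regional failure
print(route("join-meeting", instances))      # served from us-east instead
```

The key property, mirroring the fault-isolation principle described above, is that each instance is partitioned from the others: a failure raises locally but the router simply moves on to the next healthy copy.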
Adjusting to remote work can be a challenge. We get it, and we are here to provide the tools, tips, and information you need to help you and your team meet that challenge. We’re inspired by the agility and ingenuity that impacted schools, hospitals, and businesses have shown throughout COVID-19, and we are committed to helping organizations everywhere stay connected and productive during this difficult time.
Q. What happens when an individual signs in with work or school credentials?
A. If the individual is licensed for Teams, they will be logged into the product. If the individual is not licensed for Teams, they will be logged into the product and automatically receive a free license of Teams that is valid through January 2021. This includes video meetings for up to 250 participants and Live Events for up to 10,000, recording and screen sharing, along with chat and collaboration. Details for IT.
Q. What does the freemium version of Teams include?
A. This version gives you unlimited chat, built-in group and one-on-one audio or video calling, 10 GB of team file storage, and 2 GB of personal file storage per user. You also get real-time collaboration with the Office apps for web, including Word, Excel, PowerPoint, and OneNote. There is no end date. Details here.
Q. Is there a user limit in the freemium version?
A. Beginning March 10, we are rolling out updates to the free version of Teams that will lift restrictions on user limits.
Q. Can I schedule meetings in the freemium version?
A. In the future, we will make it possible for users to schedule meetings. In the meantime, you can conduct impromptu video meetings and calls.
Q. How can IT admins access Teams for Education?
A. Teams has always been free to students and education professionals as a part of the Office 365 A1 offer. Access it here.
Q. Do you have any tips for working from home?
A. Lola Jacobson, one of our senior technical writers, posted a few basic tips last week. And we updated the Support remote workers using Microsoft Teams page on docs.microsoft.com yesterday. We have more content on the way, so stay tuned.
Author: Microsoft News Center
The manufacturing outlook is optimistic for middle market manufacturers, despite concerns about a looming recession.
This is one of the findings of the BDO 2020 Manufacturing CFO Outlook Survey, which surveyed CFOs from global midmarket manufacturing companies about their market expectations, investment strategies and technology initiatives for the year ahead. The CFOs represent companies with revenues of $250 million to $3 billion, in a variety of industries.
The survey was conducted by BDO, a global tax and financial services advisory firm in Chicago with practices for several industries including manufacturing and distribution, before the coronavirus outbreak that has disrupted a wide swath of businesses and increased economic uncertainty for the year ahead.
According to the survey, more than three-fourths of respondents (77%) expect an increase in revenue for 2020, and of these, a little more than half (54%) expect revenue to grow by more than 10%. Further, three-fourths of the respondents (75%) anticipate an increase in profitability, with just under half (48%) expecting profitability to rise by 10% or more.
The optimism comes at a time of economic uncertainty and fears of an upcoming recession — even before the new coronavirus hit. According to the survey, 20% of manufacturing CFOs predict a recession will begin by the end of 2020, while 38% expect it to begin in 2021 and 47% believe it will happen after 2021. The current coronavirus epidemic likely throws a wrench in some of the survey findings, but to what degree is still an unknown.
The survey shows the variability in manufacturing market trends, said Eskander Yavar, manufacturing practice national leader at BDO USA.
Market research of a year ago would have predicted a recession beginning in 2019, but that forecast has been pushed back at least a year, Yavar said. Trade wars and tariff policies continue to be issues that affect manufacturers’ cost lines, but their bigger concern is an impending recession.
“This industry is worried about having trade and tariff policies that are more protectionist and isolationist, because that’s just not good for manufacturers. They’d rather have a free-flowing economy,” he said. “But if you have a very robust protectionist trade tariff policy and recession hitting at the same time, that’s a big red flag for this industry so they’re trying to avoid that altogether.”
The survey was conducted before the coronavirus outbreak, so the results don’t reflect if the CFO respondents fear the epidemic has increased the likelihood of a recession.
The coronavirus is affecting all industries and the impact on manufacturing will be significant, but the fallout is too difficult to calculate right now, Yavar said. Companies are in a reactive mode and taking measures like developing business continuity plans or changing suppliers will not happen overnight.
“A lot of companies are thinking more and more just in terms of the impacts in China of trade tariffs and coronavirus on the supply chain, but it takes time and I haven’t seen many examples of the best practices to deal with the situation,” he said. “We just don’t know how big those numbers are going to get in terms of impact either, and [the process of] finding alternate suppliers can take months, not days or weeks. Companies are still evaluating whether to take that step to shift resources or develop new supplier relationships.”
Trade tensions between the U.S. and China, characterized by reciprocal tariffs, were already causing manufacturers problems before the coronavirus outbreak. The survey indicated that 21% of manufacturers experienced disruptions to supply chains due to government restrictions in 2019.
However, one reason for an optimistic manufacturing outlook in the face of economic slowdown concerns may be the increasing investment in advanced Industry 4.0 technologies.
“After a relatively sluggish period of growth in productivity over the last few decades, the convergence of multiple technologies, from cloud computing to the Internet of Things to artificial intelligence and extended reality, is ushering in a new era of productivity and reinvention — the fourth Industrial revolution, or Industry 4.0,” the report stated. “This still nascent paradigm shift is unfolding in real time and will continue to take root regardless of where we are in the economic cycle.”
The report found investing in technology or infrastructure was the top business priority for 2020. More than half of the CFOs listed digital transformation, or implementing digital technologies to modernize manufacturing and business processes and introduce new business models, as the most important manufacturing strategy for 2020 (57%). That was followed by product or service expansion (52%), geographic expansion (47%) and restructuring or reorganization (34%).
“Ten years ago, CFOs just wanted to know more, and they’re just getting themselves educated in this Industry 4.0 marketplace — IoT, all the cloud technologies, and advanced analytics,” Yavar said. “What we’re seeing in this report is that more and more are actually taking initiative and driving some type of use case to see the return on investment value.”
This is not likely to be large-scale reinvention, however, but more manageable projects that have tangible value, he said.
“They are starting to tackle specific KPIs [key performance indicators], whether it’s trying to enrich their customer experience, whether it’s improving their operations,” Yavar said. “They are starting to make this a board-level conversation and they are getting some executive initiative, so the nice thing about that is it’s inevitable.”
Last year, a team of Amish-owned horses dragged a load up a ridge near Essex, New York. It was a normal scene for rural America – straight out of a Norman Rockwell painting – except that they were bearing telecommunications equipment to connect the local community to the internet.
Essex is barely 12 miles across the lake from Burlington, Vermont, but broadband is scarce. In our increasingly digital and interconnected world, broadband is as important as electricity or water. Rural communities without broadband face higher unemployment rates and see fewer educational and economic opportunities. For the woman overseeing the horses, Beth Schiller, CEO of CvWireless LLC, this is a solvable problem. Together with Microsoft’s Airband Initiative, she’s bringing connectivity to her community.
In the summer of 2017, we launched the Microsoft Airband Initiative, which brings broadband connectivity to people living in underserved rural areas. To eliminate the rural broadband gap, we bring together private-sector capital investment in new technologies and rural broadband deployments with public-sector financial and regulatory support. We set an ambitious goal: to provide access to broadband to three million people in unserved rural areas of the United States by July 4, 2022. At two and a half years since launch, we are at the halfway point of the time we gave ourselves to meet this goal and we feel good about the steady progress we’ve made and how much we have learned. But one thing we have learned is that the problem is even bigger than we imagined.
The broadband gap is wide but solvable
Beth’s horse-borne approach to connectivity may be unique, but the problem is not: According to the Federal Communications Commission’s (FCC) 2019 broadband report, more than 21 million people in America, nearly 17 million of whom live in rural communities, don’t have access to broadband.
A recent study by BroadbandNow found that the number of unserved people is nearly double the current reported amount and more than 42 million Americans do not have access to broadband – especially in rural areas. Our own data shows that some 157.3 million people in the U.S. do not use the internet at broadband speeds. And while we are making progress and the reported number is down by six million people from last year, that’s still more than the populations of our eight biggest states – California, Texas, Florida, New York, Pennsylvania, Illinois, Ohio and Georgia – combined. More must be done.
As we’ve said from the start of the initiative, without accurate data we cannot fully understand the broadband gap. You cannot solve a problem you don’t understand. More accurate data will help deploy broadband in the places it’s needed. Because the government makes many funding decisions based on federal data, communities that lack broadband – but, according to FCC data, have access to broadband – have less access to the resources needed to actually secure broadband connectivity. This is certainly a Catch-22, but it can be solved. We’re encouraged that the FCC has adopted new policies that should result in broadband providers reporting more accurate data and that Congress has worked on legislation to improve the FCC’s broadband data. It’s imperative that these policy changes are quickly and fully implemented so that people without broadband will get access to it.
Steady progress to close the broadband gap
But the country can’t wait on perfect data. We’re moving full steam ahead in the areas where we know we can help and making steady progress against our 3-million-person goal. We’re now in 25 states and one territory, and staging pilot programs in two additional states. We’ve already reached a total of 633,000 previously unserved people, up from 24,000 people in 2018, and as our partners’ network deployments accelerate over the coming months, we will be reaching many more.
We haven’t made this progress alone. We have made it through building partnerships throughout the United States, learning more about local solutions that will close the broadband gap. Partners such as Wisper Internet will work to bring broadband access to almost 1 million people in rural unserved areas in Arkansas, Illinois, Indiana, Kansas, Missouri and Oklahoma. In Kentucky, Illinois, Indiana and Ohio, our partner Watch Communications will bring high-speed internet access to more than 860,000 people living in unserved rural areas. Our partnerships also bring connectivity to historically underserved communities, including those residing on tribal lands. Sacred Wind Communications will help approximately 47,000 people on and off Navajo lands in New Mexico reap the benefits that come with access to the internet. Moreover, we have forged strategic partnerships with American Tower Corporation, Tilson, and Zayo Group over the last year that will further bring down the end-to-end network deployment costs for rural ISPs. We have also established a broad-based Airband ISP Program that provides ISPs in 47 states plus Washington D.C. and Puerto Rico with access to critical assets, helping them connect rural communities.
There’s good news about the cost of connectivity. The price of TV white spaces (TVWS) devices – a new connectivity technology that’s particularly useful in rural areas where laying cable simply isn’t an option – continues to drop. In the last year, the cost of customer equipment has plummeted by 50%, all while achievable speeds have increased tenfold.
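Taken together, those two figures imply a substantial jump in speed per dollar, which a quick back-of-the-envelope calculation makes concrete (the combined factor is an inference from the stated numbers, not a figure from the report):

```python
# Rough price-performance change implied by the TVWS figures above:
# equipment cost halved while achievable speeds grew tenfold.
cost_factor = 0.5    # 50% drop in customer equipment cost
speed_factor = 10    # tenfold speed increase

improvement = speed_factor / cost_factor
print(f"~{improvement:.0f}x better speed per dollar")   # ~20x
```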
At the same time, we’re pleased to see our partners in government make important, steady progress to enable these new technologies. We applaud Chairman Pai and the FCC for their vote last week to propose positive and necessary changes to TVWS regulations. Reducing red tape will enable ISPs to accelerate their progress in rural broadband deployment and help bridge the digital divide in rural America. We are also pleased that the FCC has announced plans to make up to $20 billion available in Rural Digital Opportunity funding to help ISPs bring high-speed broadband access to high-cost unserved rural areas. At the state level, we’re pleased that several state governments have created their own funding programs to support new broadband infrastructure, including Illinois, Indiana, Virginia and South Dakota.
What comes after connectivity?
As we’ve connected communities across the country, we’ve kept asking ourselves a central question: What comes after connectivity?
Broadband connections aren’t a panacea for all that ails rural America. Simply plugging in an ethernet cable doesn’t create jobs, increase farmers’ yields or provide a veteran with healthcare. Rural communities need resources beyond infrastructure to rebuild and lift themselves up. That’s why much of our work goes well beyond connectivity.
From education and agriculture to veterans’ services and healthcare, we are working with local and national organizations to take the next step. For example, we are partnering with the U.S. Department of Veterans Affairs (VA) to support their telehealth initiative. We are working with Airband partners to offer discounted broadband service to veterans as well as provide vital digital skills and employability training. Our work on Airband is enabling other Microsoft efforts – such as our TechSpark program, digital skills initiatives and even environmental sustainability – to flourish in areas we’d never be able to reach otherwise.
Take for example, agriculture. The family farm is the embodiment of rural America. Unfortunately, many American farmers have struggled in recent years, whether because of policy, extreme weather events and climate change, or falling crop prices. Farmers need help, and many have turned to new technologies to compete in the global marketplace. Our FarmBeats platform is one such technology that can give farmers a real-time view of their land using ground-based sensors and “internet of things” technology to track everything from soil temperature to pH levels to moisture data. This can create a modern “Farmers’ Almanac” to chart out the farm’s future, helping farmers predict what they should plant and where, increase yields, better utilize fertilizer and irrigate more efficiently. But a farm that lacks access to high-speed internet will be left in the past, unable to use these new technologies. That’s where Airband comes in: connecting rural communities to transformative technologies.
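To make the sensor-driven workflow above concrete, here is a small, purely hypothetical sketch of aggregating field readings like the ones described (soil temperature, pH, moisture). The field names and thresholds are invented for illustration and are not the FarmBeats API.

```python
# Hypothetical ground-sensor readings, grouped by field plot.
readings = [
    {"plot": "north", "soil_temp_c": 14.2, "ph": 6.8, "moisture_pct": 31},
    {"plot": "north", "soil_temp_c": 15.1, "ph": 6.7, "moisture_pct": 28},
    {"plot": "south", "soil_temp_c": 17.9, "ph": 7.2, "moisture_pct": 19},
]

def plot_summary(readings, plot):
    """Average each metric across a plot's sensor readings."""
    rows = [r for r in readings if r["plot"] == plot]
    return {
        key: round(sum(r[key] for r in rows) / len(rows), 2)
        for key in ("soil_temp_c", "ph", "moisture_pct")
    }

# Flag plots that may need irrigation, using an invented moisture threshold.
for plot in ("north", "south"):
    summary = plot_summary(readings, plot)
    if summary["moisture_pct"] < 25:
        print(f"{plot}: consider irrigating (moisture {summary['moisture_pct']}%)")
```

The point is the shape of the workflow, not the numbers: continuous readings roll up into per-plot summaries that drive planting, fertilizer and irrigation decisions.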
The effort to electrify rural America in the 1930s enabled new technologies to transform those areas, empowering farms, ranches and other rural places and improving quality of life and economic opportunity. Now, nearly 90 years later, broadband can similarly provide the infrastructure to lift up rural America, but we’re losing the race against time. While our investments and those of our partners are taking seed and we are beginning to see advances, technological progress doesn’t wait. If we don’t move faster, rural America will be left further behind. We can’t let that happen.
Author: Microsoft News Center
Holistic Data Profiling, a new tool designed to give business users a complete view of their data while developing workflows, highlighted the general availability of Alteryx 2020.1 on Thursday.
Alteryx, founded in 1997 and based in Irvine, Calif., is an analytics and data management specialist, and Alteryx 2020.1 is the vendor’s first platform update in 2020. It released its most recent update, Alteryx 2019.4, in December 2019, featuring a new integration with Tableau.
The vendor revealed the platform update in a blog post; in addition to Holistic Data Profiling, it includes 10 new features and upgrades. Among them is a new language toggling feature in Alteryx Designer, the vendor’s data preparation product.
“The other big highlights are more workflow efficiency features,” said Ashley Kramer, Alteryx’s senior vice president of product management. “And the fact that Designer now ships with eight languages that can quickly be toggled without a reinstall is huge for our international customers.”
Holistic Data Profiling is a low-code/no-code feature that gives business users an instantaneous view of their data to help them better understand their information during the data preparation process — without having to consult a data scientist.
After dragging a Browse Tool — Alteryx’s means of displaying data from a connected tool as well as data profile information, maps, reporting snippets and behavior analysis information — onto Alteryx’s canvas, Holistic Data Profiling provides an immediate overview of the data.
Holistic Data Profiling aims to help business users understand data quality and how various columns of data may be related to one another, spot trends, and compare one data profile to another as they curate their data.
Users can zoom in on a certain column of data to gain deeper understanding, with Holistic Data Profiling providing profile charts and statistics about the data such as the type, quality, size and number of records.
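The kind of per-column statistics described above (type, quality, record counts) can be illustrated with a plain-Python analogue. This is not Alteryx code; the function and the quality signal it uses are invented for the example.

```python
def profile_column(values):
    """Summarize one column: record count, nulls (a rough quality
    signal), inferred type, and distinct-value count."""
    non_null = [v for v in values if v is not None]
    return {
        "records": len(values),
        "nulls": len(values) - len(non_null),
        "type": type(non_null[0]).__name__ if non_null else "unknown",
        "distinct": len(set(non_null)),
    }

sales = [120, 340, None, 340, 95]
print(profile_column(sales))
# {'records': 5, 'nulls': 1, 'type': 'int', 'distinct': 3}
```

A tool like Holistic Data Profiling computes this sort of summary across every column at once and renders it visually, which is what lets a business user spot quality problems without writing code at all.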
That knowledge then informs the next step in the workflow, ultimately supporting a data-driven decision.
“It’s easy to get tunnel vision when analyzing data,” said Mike Leone, analyst at Enterprise Strategy Group. “Holistic Data Profiling enables end users — via low-code/no-code tooling — to quickly gain a comprehensive understanding of the current data estate. The exciting part, in my opinion, is the speed at which end users can potentially ramp up an analytics project.”
Similarly, Kramer noted the importance of being able to more fully understand data before the final stage of analysis.
“It is really important for our customers to see and understand the landscape of their data and how it is changing every step of the way in the analytic process,” she said.
Alteryx customers were previously able to view their data at any point — on a column-by-column or defined multi-column basis — but not to get a complete view, Kramer added.
“Experiencing a 360-degree view of your data with Holistic Data Profiling is a brand-new feature,” she said.
In addition to Holistic Data Profiling, the new language toggle is perhaps the other signature feature of the Alteryx platform update.
Using Alteryx Designer, customers can now switch between eight languages to collaborate using their preferred language.
Alteryx previously supported multiple languages, but for users to work in their preferred language, each individual user had to install Designer in that language. With the updated version of Designer, they can click on a new globe icon in their menu bar and select the language of their choice to do analysis.
“To truly enable enterprise-wide collaboration, breaking down language barriers is essential,” Leone said. “And with Alteryx serving customers in 80 different countries, adding robust language support further cements Alteryx as a continued leader in the data management space.”
Among the other new features and upgrades included in Alteryx 2020.1 are a new Power BI on-premises loader that will give users information about Power BI reports and automatically load those details into their data catalog in Alteryx Connect; the ability to input selected rows and columns from an Excel spreadsheet; and new Virtual Folder Connect to save custom queries.
Meanwhile, a streamlined loader of big data from Alteryx to the Snowflake cloud data warehouse is now in beta testing.
“This release and every 2020 release will have a balance of improving our platform … and fast-forwarding more innovation baked in to help propel their efforts to build a culture of analytics,” Kramer said.
Nvidia plans to acquire object storage vendor SwiftStack to help its customers accelerate their artificial intelligence, high-performance computing and data analytics workloads.
The GPU vendor, based in Santa Clara, Calif., will not sell SwiftStack software but will use SwiftStack’s 1space as part of its internal artificial intelligence (AI) stack. It will also enable customers to use the SwiftStack software as part of their AI stacks, according to Nvidia’s head of enterprise computing, Manuvir Das.
SwiftStack and Nvidia disclosed the acquisition today. They did not reveal the purchase price but said they expect the deal to close within weeks.
Nvidia worked with San Francisco-based SwiftStack for more than 18 months on tackling the data challenges associated with running AI applications at a massive scale. Nvidia found 1space particularly helpful. SwiftStack introduced 1space in 2018 to accelerate data access across public and private clouds through a single object namespace.
“Simply put, it’s a way of placing the right data in the right place at the right time, so that when the GPU is busy, the data can be sent to it quickly,” Das said.
Das said Nvidia customers would be able to use enterprise storage from any vendor. The SwiftStack 1space technology will form the “storage orchestration layer” that sits between the compute and the storage to properly place the data so the AI stack runs optimally, Das said.
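The "storage orchestration layer" idea can be sketched as a toy single-namespace resolver that stages data next to the GPUs before training needs it. SwiftStack 1space's real mechanism is far more involved; the store names and functions here are invented for illustration.

```python
# Toy model: one namespace spanning multiple backing stores, with a
# staging step that moves hot objects close to the compute.
STORES = {"on-prem": {"train-batch-01"}, "cloud": {"archive-2018"}}

def locate(obj):
    """Resolve an object through one namespace, wherever it lives."""
    for store, objects in STORES.items():
        if obj in objects:
            return store
    raise KeyError(obj)

def stage_for_gpu(obj):
    """Place the data near the GPUs so compute never waits on storage."""
    src = locate(obj)
    if src != "on-prem":
        STORES[src].discard(obj)
        STORES["on-prem"].add(obj)
    return locate(obj)

print(stage_for_gpu("archive-2018"))   # 'on-prem'
```

The application keeps using one name for the object; only its physical placement changes, which is the "right data in the right place at the right time" behavior Das describes.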
“We are not a storage vendor. We do not intend to be a storage vendor. We’re not in the business of selling storage in any form,” Das said. “We work very closely with our storage partners. This acquisition is designed to further the integration between different storage technologies and the work we do for AI.”
Nvidia partners with storage vendors such as Pure Storage, NetApp, Dell EMC and IBM. The storage vendors integrate Nvidia GPUs into their arrays or sell the GPUs along with their storage in reference architectures.
Das said Nvidia found SwiftStack attractive because its software is based on open source technology. SwiftStack’s eponymous object- and file-based storage and data management software is rooted in open source OpenStack Swift. Das said Nvidia plans to continue to work with the SwiftStack team to advance and optimize the technology and make it available through open source avenues.
“The SwiftStack team is part of Nvidia now,” he said. “They’re super talented. So, the innovation will continue to happen, and all that innovation will be upstreamed into the open source SwiftStack. It will be available to anybody.”
SwiftStack laid off an undisclosed number of sales and marketing employees in late 2019, but kept the engineering and support team intact, according to president Joe Arnold. He attributed the layoffs to a shift in sales focus from classic backup and archiving to AI, machine learning and data analytics use cases.
The SwiftStack 7.0 software update that emerged late last year took aim at analytics, HPC, AI and ML use cases, such as autonomous vehicle applications that feed data to GPU-based servers. SwiftStack said at the time that it had worked with customers to design clusters that could scale to handle multiple petabytes of data and support throughput in excess of 100 GB per second.
Das said Nvidia has been using SwiftStack’s object storage technology as well as 1space. He said Nvidia’s internal work on data science and AI applications quickly showed the company that accelerating the compute shifts the bottleneck elsewhere, to the storage. That was a factor in Nvidia’s acquisition of SwiftStack, he noted.
“We recognized a long time ago that the way to help the customers is not just to provide them a GPU or a library, but to help them create the entire stack, all the way from the GPU up to the applications themselves. If you look at Nvidia now, we spend most of our energy on the software for different kinds of AI applications,” Das said.
He said Nvidia would fully support SwiftStack’s customer base. SwiftStack claims it has around 125 customers. Its product lineup includes SwiftStack’s object storage software, the ProxyFS file system for integrated file and object API access, and 1space. SwiftStack’s software is designed to run on commodity hardware on premises, and its 1space technology can run in the public cloud.
SwiftStack spent more than eight years expanding its software’s capabilities after the company’s 2011 founding. Das said Nvidia has no reason to sell SwiftStack’s proprietary software because it does not compete head-to-head with other object storage providers.
“Our philosophy here at Nvidia is we are not trying to compete with infrastructure vendors by selling some kind of a stack that competes with other peoples’ stacks,” Das said. “Our goal is simply to make people successful with AI. We think, if that happens, everybody wins, including Nvidia, because we believe GPUs are the best platform for AI.”
An upcoming price change to Google Kubernetes Engine isn’t sitting well with some users of the managed container service, but analysts said Google’s move is well within reason.
As of June 6, Google will charge customers a cluster management fee of $0.10 per hour, regardless of the cluster’s size or topology. Each customer billing account will receive one free zonal cluster, and the new management fee doesn’t apply to clusters run as part of Anthos, Google’s cross-platform container orchestration service.
Along with the management fee, however, Google is also introducing a service-level agreement (SLA) for Google Kubernetes Engine (GKE). It promises 99.95% availability for regional clusters and 99.5% on zonal clusters, assuming they use a version from Google’s stable release channel, according to the price change announcement.
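Those availability percentages translate into concrete monthly downtime budgets, which a quick calculation makes tangible (assuming a 30-day month; the SLA's own measurement window may differ):

```python
def monthly_downtime_minutes(availability, minutes_in_month=30 * 24 * 60):
    """Maximum downtime per month allowed by an availability target."""
    return (1 - availability) * minutes_in_month

# GKE's announced targets:
print(round(monthly_downtime_minutes(0.9995), 1))  # regional: 21.6 min/month
print(round(monthly_downtime_minutes(0.995), 1))   # zonal: 216.0 min/month
```

In other words, the regional SLA permits roughly a tenth of the downtime the zonal one does, which is part of what customers are being asked to pay for.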
Google’s decision did not sit well with some users, who voiced complaints on social media.
Having slept on it, the changes to #gke makes me furious. This is maybe the worst decision Google has made as a company, and I cannot overstate this enough, that is saying a lot. This is a master class on how to trade trust for money, badly. The justifications are just not there
— CodeByKyle (@CodeByKyle) March 5, 2020
the next few days will see a massive migration of “playground” Kubernetes clusters away from GKE… they just introduced a $0.1/hr fee, that’s $72/month!
So now running a managed master node on EKS and GKE costs the same. GKE greatly improved their SLA though
Bye bye GKE pic.twitter.com/4KCUzwNHSF
— Maël Valais (@maelvls) March 4, 2020
Others disagreed. The planned fee for Google Kubernetes Engine is reasonable, said Gary Chen, an analyst at IDC. “The fact is that Kubernetes control plane management is getting more complex and Google is constantly improving it, so there is a lot of value-add there,” he said. “Plus, as more critical workloads get deployed on containers, service levels become important and that will take some effort and investment, so it’s not unreasonable to ask for a fee for enterprise-level SLAs.”
A longer-term solution for Google could be to offer a lower-cost or free tier for those who don’t need all the features or the SLA, Chen added. “I think we’ll definitely see that in cloud container pricing in the future,” he said. “More tiers, feature add-ons, etc., to satisfy all segments of the market.”
Google previously had a management fee of $0.15 per hour for large clusters but dropped it in November 2017. The fee coming in June will bring GKE into parity with Amazon Elastic Kubernetes Service (EKS); AWS cut the cluster management fee for EKS to $0.10 per hour in January, down from $0.20 per hour.
Although measured in pennies per hour, the cluster management fees amount to about $72 a month per cluster, a sum that can add up fast in larger implementations. The question Google Cloud customers must weigh now is whether the fee is worth it compared to other options.
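The math behind that figure is simple to sketch. The helper below is purely illustrative: it assumes a 30-day month and models the one free zonal cluster per billing account as a simple deduction.

```python
# Back-of-the-envelope GKE cluster management cost under the June pricing.
FEE_PER_HOUR = 0.10          # USD per cluster, flat
HOURS_PER_MONTH = 24 * 30    # 720 hours in an assumed 30-day month

def monthly_fee(num_clusters: int, free_zonal_clusters: int = 1) -> float:
    """Estimated monthly management fee; one free zonal cluster per account."""
    billable = max(num_clusters - free_zonal_clusters, 0)
    return FEE_PER_HOUR * HOURS_PER_MONTH * billable

print(monthly_fee(1))   # 0.0 -- covered by the free zonal cluster
print(monthly_fee(2))   # 72.0
print(monthly_fee(25))  # 1728.0
```

At 25 clusters, for example, the management fee alone approaches $1,730 a month, which helps explain the reaction from users running many small "playground" clusters.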
Microsoft Azure Kubernetes Service is one, as it doesn’t currently carry any cluster management fees. But customers would have to do a close comparison of what Azure charges for the compute resources supporting managed containers, as opposed to Google, AWS and other providers.
Another alternative would be to self-manage clusters, but that would require analysis of whether doing so would be desirable in terms of staff time and training.
Above all, Google would undoubtedly like more adoption of Anthos, which became generally available in April 2019. The platform encompasses a software stack much broader than GKE and is priced accordingly. Anthos is the company’s primary focus in its bid to gain market share against Azure and AWS and represents Google Cloud CEO Thomas Kurian’s intent to get more large enterprise customers aboard.
“The cloud wars are intense and revenue matters,” said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif. The cluster management pricing could be viewed as some “gentle pressure” on customers to adopt Anthos, he added.
Go to Original Article
The inability to harness the power of data and turn it into fuel for growth hampers the success of many SMBs.
Unlike large enterprises with massive budgets, SMBs are often unable to employ data scientists to build and maintain analytics operations and interpret data to make fully informed decisions. Instead of investing in small business analytics strategies, they rely on instinct and experience, neither of which is foolproof.
Onepath, an IT services provider based in Kennesaw, Ga., sought to quantify the struggles of the SMBs it serves. It surveyed more than 100 managers and executives of organizations ranging in size from 100 to 500 employees to gauge their experience with analytics, and on Thursday released a report entitled “Onepath 2020 Trends in SMB Data Analytics Report.”
Key discoveries included that, despite dedicating time and money to analytics, 86% of respondents felt they weren't able to fully harness the power of data; 59% believed analytics capabilities would help them go to market faster; and 54% felt they risked making poor business decisions without the benefits of data analysis.
Phil Moore, Onepath’s director of applications management services, spoke with SearchBusinessAnalytics about the report as well as the difficulties involved in small business analytics efforts.
In Part I of this two-part Q&A, he discussed the findings of the report in detail. Here he talks about the perils SMBs face if they don’t develop a data-driven decision-making process.
As the technology in business intelligence platforms gets better and better, will SMBs be able to improve data utilization as well as large enterprises?
Phil Moore: The Fortune 500s of the world have deep pockets and can hire their army of IT guys and go after it, but the small and medium-sized businesses tend to have far less volume of data unless they are in the unique position where they are a high-data business. But the core [of the SMB market] is around legal, construction, health care, doctor’s offices, and their data doesn’t get to the volume of larger organizations. They’re just looking for the metrics that help them run their business more efficiently, help them service their clients.
If you go to the other bookend and see an Amazon, of course they’re on a grand scale in terms of the size of their business. And they’re using analytics all up and down throughout their business, whether it be shipping, fulfillment, robotics, managing their warehouses. The SMB market won’t have the same types of complexities that the big guys have. The market is different.
Are there SMBs who are able to harness the power of data?
Moore: The survey shows that 86% of the companies that are taking a swing at analytics — that have some solution — say they’re underachieving, and they could be getting more out of their data. That leaves 14% that are delighted with what they’re getting. There are always leading guys, the cutting edge, the folks that are more technology-centric or that appreciate and understand the value of technology and how it can help the business. Those guys are going to lead the way.
What will happen to companies that don’t figure out a way to use data, and is there a timetable for when they need to get with it?
Moore: If you break down the SMB market into the different disciplines — health care, legal, construction — the folks that get and use analytics, their first benefit over their competitors is a better line of sight to their business. They’re going to be able to make crisper decisions, which lead to either faster delivery of something to the market or better customer service, which indirectly will lead to higher profits. Right away they get a competitive advantage over their competitors that aren’t using analytics, that are running their business by shooting from the hip — which is running it with their intuition and their knowledge and their experience. That knowledge and experience may get proven wrong with data, because the eye in the sky doesn’t lie. At some point, things get revealed in the data that lead to transforming business decisions.
For example, in the IT space, one of the transforming business decisions is how to go to market, changing from charging by the hour for every hour worked when a ticket is opened to offering a fixed-price, all-you-can-eat model. The data shows a fixed price will still be profitable if they optimize internal processes. So, IT companies are shifting, and the companies that are now going to market with a fixed-price, all-you-can-eat support model are crushing the guys that are still out there charging by the hour. The guys charging by the hour either have to transform or die. Those transformations that get driven by the data will happen in an industry-vertical way.
Is it critical for small business analytics expenditures to be part of the budget right off the bat?
Moore: Yes, but the challenge we see is that they know they want analytics but don't know how to budget for it. Therefore, it becomes unaffordable. One of the things we're trying to do is make it affordable, so people can bridge the mental gap from wanting analytics to getting it, by offering a monthly, low-entry, very affordable template set of [key performance indicators]. Once they see the value, they know how to put a dollar figure on it and can adjust their budget for the next year. If you go to a small business and tell them they need analytics and need to budget for it, they struggle with how much to budget. They put a line item in the budget, but they don't know what they're getting, so it often winds up getting cut from the budget.
Editor’s note: This Q&A has been edited for brevity and clarity.
Go to Original Article
While analytics have become a staple of large enterprises, many small and medium-sized businesses struggle to utilize data for growth.
Large corporations can afford to hire teams of data scientists and provide business intelligence software to employees throughout their organizations. While many SMBs collect data that could lead to better decision-making and growth, data utilization is a challenge when there isn’t enough cash in the IT budget to invest in the right people and tools.
Sensing that SMBs struggle to use data, Onepath, an IT services vendor based in Kennesaw, Ga., conducted a survey of more than 100 businesses with 100 to 500 employees to gauge their analytics capabilities for the “Onepath 2020 Trends in SMB Data Analytics Report.”
Among the most glaring discoveries, the survey revealed that 86% of the surveyed companies that had invested in personnel and analytics tools felt they weren't able to fully exploit their data.
Phil Moore, Onepath’s director of applications management services, recently discussed both the findings of the survey and the challenges SMBs face when trying to incorporate analytics into their decision-making process.
In Part II of this Q&A, he talks about what failure to utilize data could ultimately mean for SMBs.
What was Onepath’s motivation for conducting the survey about SMBs and their data utilization efforts?
Phil Moore: For me, the key finding was that we had a premise, a hypothesis, and this survey helped us validate our thesis. Our thesis is that analytics has always been a deep pockets game — people want it, but it’s out of reach financially. That’s talking about the proverbial $50,000 to $200,000 analytics project… Our goal and our mission is to bring that analytics down to the SMB market. We just had to prove our thesis, and this survey proves that thesis.
It tells us that clients want it — they know about analytics and they want it.
What were some of the key findings of the survey?
Moore: Fifty-nine percent said that if they don’t have analytics, it’s going to take them longer to go to market. Fifty-six percent said it will take them longer to service their clients without analytics capabilities. Fifty-four percent, a little over half, said if they didn’t have analytics, or when they don’t have analytics, they run the risk of making a harmful business decision.
That tells us people want it… We have people trying analytics — 67% are spending $10,000 a year or more, and 75% spent at least 132 hours of labor maintaining their systems — but they're not getting what they need. A full 86% said they're underachieving when they're taking a swing with their analytics solution.
What are the key resources these businesses lack in order to fully utilize data? Is it strictly financial or are there other things as well?
Moore: We weren’t surprised, but what we hadn’t thought about is that the SMB market just doesn’t have the in-house skills. One in five said they just don’t have the people in the company to create the systems.
Might new technologies help SMBs eventually exploit data to its full extent?
Moore: The technologies have emerged and have matured, and one of the biggest things in the technology arena that helps bring the price down, or make it more available, is simply moving to the cloud. An on-premises analytics solution requires hardware, and it’s just an expensive footprint to get off the ground. But with Microsoft and their Azure Cloud and their Office 365, or their Azure Synapse Analytics offering, people can actually get to the technology at a far cheaper price point.
That one technology right there makes it far more affordable for the SMB market.
What about things like low-code/no-code platforms, natural language query, embedded analytics — will those play a role in helping SMBs improve data utilization for growth?
Moore: In the SMB market, they’re aware of things like machine learning, but they’re closer to the core blocking and tackling of looking at [key performance indicators], looking at cash dashboards so they know how much cash they have in the bank, looking at their service dashboard and finding the clients they’re ignoring.
The first and easiest one that’s going to apply to SMBs is low-code/no-code, particularly in grabbing their source data, transforming it and making it available for analytics. Prior to low-code/no-code, it’s really a high-code alternative, and that’s where it takes an army of programmers and all they’re doing is moving data — the data pipeline.
But there will be a set of the SMB market that goes after some of the other technologies like machine learning — we’ve seen some people be really excited about it. One example was looking at [IT help] tickets that are being worked in the service industry and comparing it with customer satisfaction. What they were measuring was ticket staleness, how many tickets their service team were ignoring, and as they were getting stale, their clients would be getting angry for lack of service. With machine learning, they were able to find that if they ignored a printer ticket for two weeks, that is far different than ignoring an email problem for two weeks. Ignoring an email problem for two days leads to a horrible customer satisfaction score. Machine learning goes in and relates that stuff, and that’s very powerful. The small and medium-sized business market will get there, but they’re starting at earlier and more basic steps.
Editor’s note: This Q&A has been edited for brevity and clarity.
Go to Original Article
Log analytics tools with machine learning capabilities have helped one biometrics startup keep pace with increasingly complex application monitoring as it embraces continuous deployment and microservices.
BioCatch sought a new log analytics tool in late 2017. At the time, the Tel Aviv, Israel, firm employed a handful of workers and had just refactored a monolithic Windows application into microservices written in Python. The refactored app, which captures biometric data on how end users interact with web and mobile interfaces for fraud detection, required careful monitoring to ensure it still worked properly. Almost immediately after it completed the refactoring, BioCatch found the process had tripled the number of logs it shipped to a self-managed Elasticsearch repository.
“In the beginning, we had almost nothing,” said Tamir Amram, operations group lead for BioCatch, of the company’s early logging habits. “And, then, we started [having to ship] everything.”
The team found it could no longer manage its own Elasticsearch back end as that log data grew. Its IT infrastructure also mushroomed into 10 Kubernetes clusters distributed globally on Microsoft Azure. Each cluster hosts multiple sets of 20 microservices that provide multi-tenant security for each of its customers.
At that point, BioCatch had a bigger problem. It had to not only collect, but also analyze all its log data to determine the root cause of application issues. This became too complex to do manually. BioCatch turned to log analytics vendor Coralogix as a potential answer to the problem.
Coralogix, founded in 2015, initially built its log management system on top of a hosted Elasticsearch service but couldn’t generate enough interest from customers.
“It did not go well,” Coralogix CEO Ariel Assaraf recalled of those early years for the business. “It was early in log analytics’ and log management’s appeal to the mainstream, and customers already had ‘good enough’ solutions.”
While the company still hosts Elasticsearch for its customers, based on the Amazon Open Distro for Elasticsearch, it refocused on log analytics, developed machine learning algorithms and monitoring dashboards, and relaunched in 2017.
That year coincided with the emergence of containers and microservices in enterprise IT shops as they sought to refactor monolithic applications with new design patterns. The timing proved fortuitous; since Coralogix's relaunch in 2017, it has gained more than 1,200 paying customers, according to Assaraf, at an average deal size of $50,000 a year.
Coralogix isn’t alone among DevOps monitoring vendors reaping the spoils of demand for microservices monitoring tools — not just in log analytics, but AI- and machine learning-driven infrastructure management, or AIOps, as well. These include application performance management (APM) vendors, such as New Relic, Datadog, AppDynamics and Dynatrace, along with Coralogix log analytics competitors Elastic Inc. and Splunk.
In fact, analyst firm 451 Research predicted that the market for Kubernetes monitoring tools will dwarf the market for Kubernetes management products by 2022 as IT pros move from the initial phases of deploying microservices into “day two” management problems. Even more recently, log analytics tools have begun to play an increasing role in IT security operations and DevSecOps.
The newly relaunched Coralogix caught the eye of BioCatch in part because of its partnership with the firm’s preferred cloud vendor, Microsoft Azure. It was also easy to set up and redirect logs from the firm’s existing Elasticsearch instance, and the Coralogix-managed Elasticsearch service eliminated log management overhead for the BioCatch team.
“We were able to delegate log management to the support team, so the DevOps team wasn’t the only one owning and using logs,” Amram said. “Now, more than half of the company works with Coralogix, and more than 80% of those who work with it use it on a daily basis.”
The BioCatch DevOps team adds tags to each application update that direct log data into Coralogix. Then, the software monitors application releases as they’re rolled out in a canary model for multiple tiers of customers. BioCatch rolls out its first application updates to what it calls “ring zero,” a group of early adopters; next, to “ring one;” and so on, according to each customer group’s appetite for risk. All those changes to multiple tiers and groups of microservices result in an average of 1.5 TB of logs shipped per day.
The version tags fed through the CI/CD pipeline to Coralogix enable the tool to identify issues and correlate them with application changes made by BioCatch developers. It also identifies anomalous patterns in infrastructure behavior post-release, which can catch problems that don’t appear immediately.
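The article doesn't show BioCatch's actual pipeline, but the general technique of stamping every log record with the deployed release version, so a log analytics backend can correlate anomalies with specific releases, can be sketched as below. The `RELEASE_VERSION` environment variable and the JSON log format are illustrative assumptions, not details from the source.

```python
import logging
import os

# Hypothetical: the CI/CD pipeline exports the release version it deployed.
RELEASE_VERSION = os.environ.get("RELEASE_VERSION", "unknown")

class VersionTagFilter(logging.Filter):
    """Attach the deployed release version to every log record."""
    def filter(self, record):
        record.release = RELEASE_VERSION
        return True

# Emit structured logs so the analytics backend can group errors by release.
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    '{"ts": "%(asctime)s", "level": "%(levelname)s", '
    '"release": "%(release)s", "msg": "%(message)s"}'))

logger = logging.getLogger("app")
logger.addFilter(VersionTagFilter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order queue depth: 42")
```

With every record carrying a `release` field, an analytics tool can compare error rates or latency patterns before and after a given version tag, which is the correlation the article describes.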
“Every so often, an issue will appear a day later because we usually release at off-peak times,” BioCatch’s Amram said. “For example, it can say, ‘sending items to this queue is 20 times slower than usual,’ which shows the developer why the queue is filling up too quickly and saturating the system.”
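An alert like "sending items to this queue is 20 times slower than usual" boils down to comparing recent behavior against a learned baseline. A deliberately simplified sketch of that ratio-style check, not Coralogix's actual algorithm, might look like:

```python
from statistics import mean

def is_anomalous(recent_latencies, baseline_latencies, ratio_threshold=20.0):
    """Flag when recent latency is far above its historical baseline,
    a simplified version of the ratio-style alert described above."""
    baseline = mean(baseline_latencies)
    recent = mean(recent_latencies)
    return baseline > 0 and recent / baseline >= ratio_threshold

# Hypothetical enqueue latencies in seconds.
baseline = [0.05, 0.04, 0.06]
print(is_anomalous([1.2, 1.1, 1.3], baseline))  # True: ~24x slower than usual
print(is_anomalous([0.05, 0.07], baseline))     # False: within normal range
```

A production system would learn baselines per metric and per time window rather than from a fixed list, but the core comparison is the same.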
BioCatch uses Coralogix alongside APM tools from Datadog that analyze application telemetry and metrics. Often, alerts in Datadog prompt BioCatch IT ops pros to consult Coralogix log analytics dashboards. Datadog also began offering log analytics in 2018 but didn’t include this feature when BioCatch first began talks with Coralogix.
Coralogix also maintains its place at BioCatch because its interfaces are easy to work with for all members of the IT team, Amram said. This has grown to include not only developers and IT ops, but solutions engineers who use the tool to demonstrate to prospective customers how the firm does troubleshooting to maintain its service-level agreements.
“We don’t have to search in Kibana [Elasticsearch’s visualization layer] and say, ‘give me all the errors,'” Amram said. “Coralogix recognizes patterns, and if the pattern breaks, we get an alert and can immediately react.”
Go to Original Article