Tag Archives: VEGAS

Las Vegas shores up SecOps with multi-factor authentication

The city of Las Vegas used AI-driven infrastructure security tools to stop an attacker in January before sensitive IT systems were accessed, but the city’s leadership bets future attempts won’t even get that far.

“Between CrowdStrike [endpoint security] and Darktrace [threat detection], both tools did exactly what they were supposed to do,” said Michael Sherwood, chief innovation officer for Las Vegas. “We had [a user] account compromised, and that allowed someone to gain short-term access to our systems.”

The city’s IT staff thwarted that attacker almost immediately in the early morning of Jan. 7. IT pros took measures to keep the attacker from accessing any of the city’s data once security monitoring tools alerted them to the intrusion.

The city has also used Okta access management tools for the last two years to consolidate user identity and authentication for its internal employees and automate access to applications through a self-service portal. Next, it will reinforce that process with multi-factor authentication using the same set of tools, in the hopes further cyberattacks will be stopped well outside its IT infrastructure.

Multi-factor security will couple a physical device — such as an employee badge or a USB key issued by the city — with usernames and passwords. This will reduce the likelihood that such an account compromise will happen again, Sherwood said. Having access management and user-level SecOps centralized within Okta has been key for the city to expand its security measures quickly based on what it learned from this breach. By mid-February, its IT team was able to test different types of multi-factor authentication systems and planned to roll one out within 60 days of the security incident.

Michael Sherwood

“With dual-factor authentication, you can’t just have a user ID and password — something you know,” Sherwood said. “A bad actor might know a user ID and password, but now they have to [physically] have something as well.”
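
A minimal sketch of that second-factor check, assuming the pyotp library: it verifies a one-time code generated by something the user physically holds, such as a hardware or software token. This is a generic illustration of the "something you have" step, not the city's Okta or badge configuration.

    # Generic illustration of verifying a possession factor with a TOTP code.
    # This is not the city's Okta setup; the username/password check is assumed
    # to happen separately as the "something you know" factor.
    import pyotp

    def provision_token() -> str:
        # Per-user secret that would be stored in the user's token or authenticator app.
        return pyotp.random_base32()

    def verify_second_factor(secret: str, submitted_code: str) -> bool:
        # True only if the submitted 6-digit code matches the current TOTP window.
        return pyotp.TOTP(secret).verify(submitted_code)

    secret = provision_token()
    password_ok = True  # assume the knowledge factor already passed
    code_ok = verify_second_factor(secret, pyotp.TOTP(secret).now())
    print("access granted" if password_ok and code_ok else "access denied")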

SecOps automation a shrewd gamble for Las Vegas

Las Vegas initially rolled out Okta in 2018 to improve the efficiency of its IT help desk. Sherwood estimated the access management system cut down on help desk calls relating to forgotten passwords and password resets by 25%. The help desk also no longer had to manually install new applications for users because of an internal web portal connected to Okta that automatically manages authorization and permissions for self-service downloads. That freed up help desk employees for more strategic SecOps work, which now includes the multi-factor authentication rollout.

Another SecOps update slated for this year will add city employees’ mobile devices to the Okta identity management system and introduce an Okta single sign-on service for Las Vegas citizens who use the city’s web portal.

Residents will get one login for all services under this plan, Sherwood said. “If they get a parking citation and they’re used to paying their sewer bill, it’s the same login, and they can pay them both through a shopping cart.”

With dual-factor authentication, you can’t just have a user ID and password — something you know. A bad actor might know a user ID and password, but now they have to [physically] have something as well.
Michael Sherwood, chief innovation officer, city of Las Vegas

Okta replaced a hodgepodge of different access management systems the city used previously, usually built into individual IT systems. When Las Vegas evaluated centralized access management tools two years ago, Okta was the only vendor in the group that was completely cloud-hosted, Sherwood said. This was a selling point for the city, since it minimized the operational overhead to set up and run the system.

Okta’s service competes with the likes of Microsoft Active Directory, OneLogin and Auth0. Las Vegas also uses Active Directory for access management in its back-end IT infrastructure, while Okta serves the customer and employee side of the organization.

“There is still separation between certain things, even though one product may well be capable of [handling] both,” he said.

Ultimately, the city would like to institute a centralized online payment system for citizens to go along with website single sign-on, and Sherwood said he’d like to see Okta offer that feature and electronic signatures as well.

“They’d have a lot of opportunity there,” he said. “We can do payments and electronic signatures with different providers, but it would be great having that more integrated into the authentication process.”

An Okta representative said the company doesn’t have plans to support payment credentials at this time but that the company welcomes customer feedback.


CES 2020: PC Gaming Experiences Designed for Everyone – Xbox Wire

This year’s Consumer Electronics Show (CES) in Las Vegas kicked off 2020 with a look at what’s in store for a variety of players, with exciting innovations for PC gaming and Microsoft’s device partners announcing some of the best upcoming hardware and software in the industry.

From the thinnest and lightest gaming laptops yet to immersive monitors that give players a deeper, more robust experience, plus new gaming desktops and graphics cards, there’s plenty for PC gamers to be excited about in the year ahead.

To catch you up on all the news from last week, we’ve wrapped up all the CES 2020 announcements from Acer, Asus, Dell, Lenovo, and iBuyPower below.

Acer

Acer introduced new Predator monitors offering gamers a more immersive and expansive view of their play.

  • The Predator CG552K features a huge 55-inch 4K OLED panel that’s Adaptive Sync and HDMI VRR (Variable Refresh Rate) compatible, making it ideal for hardcore PC and console gamers wanting a higher vantage point. A second, 37.5-inch model increases gaming immersion with a 2300R curved UWQHD+ panel and VESA DisplayHDR 400 certification that makes colors pop.
  • The 32-inch Predator X32 gaming monitor reproduces brilliant visuals with Nvidia G-Sync Ultimate, VESA DisplayHDR 1400 certification and 89.5% Rec. 2020 color gamut coverage, perfect for gamers who also create their own videos.

Asus

Asus released new Strix gaming desktops, the Zephyrus G14 laptop and TUF laptops, presenting device options for every type of gamer.

  • Asus Republic of Gamers (ROG) debuted a handful of new Strix models: Strix GA35 and GT35 gaming desktops to get players tournament-ready for competitive esports. They’re engineered to sustain smooth gameplay under serious pressure and offer the flexibility to do everything from producing top-quality streams to developing games. In addition to those new gaming devices, Asus ROG also announced new Strix GA15 and GT15 gaming desktops that focus on gaming fundamentals for competitive esports players on a budget. Lean and lightweight, these leverage powerful, latest generation processors to capably handle hardcore gaming, streaming and multitasking. These use the latest 3rd Generation AMD Ryzen CPUs and upcoming 10th Generation Intel Core processors.
  • The Zephyrus G14 brings premium innovations to a wider audience with an ultra-slim form factor at just 17.9mm thin and 1.6kg, all without compromising performance. The Zephyrus G14 gaming notebook features RTX graphics for high frame rates when playing popular games, and also lets gamers choose between high refresh or high resolution for their display; the choice of 120Hz refresh rate or WQHD resolution panels allows users to optimize for gaming or creating content. G14 has an optional AniMe Matrix display that deepens personalization, enabling users to show custom graphics, animations and other effects across more than a thousand mini LEDs embedded in the lid.
  • The 15-inch TUF Gaming A15 and F15, along with their 17-inch A17 and F17 siblings, deliver an unprecedented experience for the price. Key to the experience is potent processing power, thanks to a choice between 4th Gen AMD Ryzen Mobile CPUs and upcoming 10th Gen Intel Core processors. Nvidia Turing-based GPUs up to the GeForce RTX 2060 feed frames into fast displays that refresh at up to 144Hz and use AMD FreeSync technology to ensure smoother, tear-free gaming across a wide range of titles.

Dell

Dell announced the new Alienware gaming monitor and a redesigned Dell G5 15 SE laptop with new features and enhanced performance.

  • Built for speed with a 99% sRGB color coverage, the new Alienware 25 Gaming Monitor features fast IPS technology that offers rich colors, a 240Hz refresh rate and a 1 millisecond response time, all in native FHD resolution. It also has AMD Radeon FreeSync and is G-Sync compatible.
  • The newly redesigned Dell G5 15 SE (Special Edition) is the first Dell G Series laptop to feature 3rd Gen AMD Ryzen 4000 H-Series Mobile Processors (up to 8 cores and 16 threads) paired with the latest AMD Radeon RX 5000M Series graphics. The two chips work seamlessly together using AMD SmartShift technology to optimize performance by automatically shifting power as needed between the Ryzen processor and Radeon graphics, giving gamers precisely what they want at each moment of play.

Lenovo

Lenovo released a number of new performance monitors and laptops, giving gamers a variety of devices to choose how they want to enhance their battle experience.

  • With the new premium Lenovo Q27h Monitor, users can seamlessly switch between entertainment and their latest creative project. Its 27-inch QHD (2560 x 1440) IPS panel provides high resolution and 350 nits of brightness. The four-sided near-edgeless bezel brings a noticeably wider viewing experience for the hottest gaming titles, with a super-fast 4ms response time and a smooth 75Hz refresh rate to reduce motion blur in the game.
  • The Lenovo Legion Y740S is Lenovo’s thinnest and lightest gaming laptop yet with up to eight hours of battery life. It’s got up to 10th Gen Intel Core i9 processors (coming soon) reaching more than 5 GHz and Q-Control, with which users can shift gears with a simple press of their Fn+Q keys. Jump into Performance Mode for higher frame rates, down-shift into Quiet Mode for better battery life to watch a movie or stay the course in Balance Mode for day-to-day usage. Made with long-term gaming usage in mind, enjoy the new tactile feel of the Lenovo Legion keyboards, featuring quick response time with 100% anti-ghosting, improved ergonomic key size and responsive switches designed for smoother typing and gameplay.
  • Stay focused on the game with the new Lenovo Legion Y25-25 Gaming Monitor with a 24.5-inch, Full HD IPS panel display built into the near-edgeless chassis. Crank up refresh rates all the way to 240Hz—more FPS means that more data flows between the GPU and monitor, helping to eliminate tearing in most multiplayer games. It comes with anti-glare panel and up to 400 nits of brightness and is TÜV Rheinland Eye Comfort Certified to reduce eye strain. Curved monitors make gaming more immersive and comfortable, as the curve simulates a more natural viewing experience for your eyes, neck and head—allowing the gamer to see all the action at once.
  • The new 31.5-inch Lenovo G32qc Gaming Monitor has a near-edgeless bezel and QHD (2560 x 1440) screen resolution for clear visuals and superior picture quality. Catch every player movement with its wide viewing angle, high screen brightness and excellent contrast ratio.
  • Or, choose the heavy-duty yet compact 27-inch Full HD (1920 x 1080) resolution display on the Lenovo G27c Gaming Monitor — both monitors have a curvature of 1500R for complete game immersion. The latter is engineered to deliver virtually tear-free and stutter-free gameplay and is capable of an amazingly high refresh rate of up to 165Hz, helping to rid gaming distractions such as choppy images, streaks and motion blur.

iBuyPower

iBuyPower showed off an expansion of its Element Case line and next-gen Revolt Series.

  • For a different take on the traditional PC layout, the Element Dual features a binary chamber design. With the PSU mounted vertically on the bottom right side of the case and hidden behind the motherboard tray, users will be left with an open aesthetic on the left side and substantial space for maximum component compatibility. The Element CL case, used in pre-built systems, is designed with an integrated front panel distribution plate for easier bends and less complicated routing.
  • The Revolt GT3 will take on a new aesthetic compared to the asymmetrical design of its predecessors, housing small form factor systems and providing support for ITX motherboards and SFX power supplies up to 750W. Systems constructed in it will be mounted to and suspended inside an outer frame by flexible rubber supports designed to add both shock cushioning and vibration damping.

These are just some of the new products that are bringing powerful experiences to Windows 10 gamers in 2020. Check back on Xbox Wire or the Windows Experience blog to keep up with the latest PC gaming product releases and news.

Author: Microsoft News Center

Put infrastructure automation at the heart of modern IT ops

LAS VEGAS — Successful IT leaders know how to navigate change. In 2020 and beyond, that skill will almost certainly be put to the test.

To remain relevant and competitive in the years to come, enterprises need to embrace top IT trends around infrastructure automation, hybrid management tools and DevOps — despite the growing pains they’ll inevitably face along the way. How to do just that was the subject of discussions at Gartner’s IT Infrastructure, Operations & Cloud Strategies Conference here this month.

Automate, automate, automate

In search of greater agility, and faced with demands to do more with less, enterprises should increasingly lean on automation, both in business processes and in IT infrastructure.

“Automation is to modern infrastructure what blood is to the body,” said Dennis Smith, research vice president at Gartner, at the event. You simply can’t run one without the other, particularly as IT teams manage complex and heterogenous infrastructure setups.

At this point, the benefits of IT automation are well-established: a reduced risk of human error and an IT staff that has the time to work on more strategic, higher-level, involved tasks. But making the shift from manual to automated IT management practices in configuration management, capacity planning and other critical tasks can be tricky. And it won’t be easy to staff up with the requisite skill sets.

That’s a challenge for Keith Hoffman, manager of the data network at Sharp HealthCare, based in San Diego, Calif. “My team is made up of legacy network engineers, not programmers,” Hoffman said. “But we’ve talked about [the need for automation], and we see the writing on the wall.”

Hoffman’s team currently has some ad hoc instances of IT infrastructure automation via scripts, but the goal is to scale those efforts, particularly for automated configuration management. They’ve taken training courses on select automation and orchestration tools and turned to Sharp HealthCare’s internal DevOps team to learn and apply automation best practices.

To help close skill gaps, and to encourage broader IT automation efforts, enterprises should appoint a dedicated automation architect, said Ross Winser, senior director analyst at Gartner. The automation architect should help IT navigate the ways it can achieve infrastructure automation — infrastructure as code, site reliability engineering, a move away from traditional scripts in favor of AIOps practices — as well as the vast ecosystem of tools and vendors that support those efforts.

Hybrid IT management

Hybrid environments are a perennial IT trend. As these setups become even more complex, IT ops leaders need to further refine their infrastructure management practices.

Increasingly, one enterprise IT workload can span multiple infrastructure environments: dedicated data centers, managed hosting and colocation facilities, the public cloud and edge computing locations. While these hybrid environments offer deployment flexibility, they also complicate troubleshooting, incident response and other core IT operations tasks. When a failure occurs, the complex string of system connections and dependencies makes it difficult to pinpoint a cause.

“The ability to actually get to an answer is getting harder and harder,” Winser said.

Many operations teams still rely on disparate tool sets to monitor and manage resources in hybrid IT, adding to this complexity.  

“I think the challenge is having a common tool set,” said Kaushal Shah, director of infrastructure and information security at Socan, a performance rights organization for the music industry, based in Toronto. “What we do internally at the infrastructure layer is a lot of scripting and CLI-driven configurations, whereas in the cloud, every cloud provider has their own CLI.”

Shah — whose team runs IT infrastructure on premises and in AWS and Microsoft Azure — said he’s evaluating infrastructure-as-code tools like HashiCorp Terraform and Red Hat Ansible to “level the playing field” and provide a centralized way to manage resource configurations. Adoption poses a learning curve.

“[My team members] have a certain background and I think, for these tools, it’s a different mindset: You have to describe the state of the asset as opposed to how to configure it,” Shah said.
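
The shift Shah describes, from imperative configuration steps to declaring a desired end state, can be sketched in a few lines. This is a hypothetical illustration of the mindset, not Terraform or Ansible syntax; the resource names and helper functions are made up.

    # Declarative sketch: say what each asset should look like, and let a
    # reconciler derive the steps. The helpers below are placeholders, not a
    # real provider API.
    desired_state = {
        "web-sg": {"ingress_ports": [443], "tag": "prod"},
        "app-vm": {"size": "m5.large", "count": 3},
    }

    def fetch_observed_state(name: str) -> dict:
        # Placeholder: a real tool would query the cloud provider here.
        observed = {
            "web-sg": {"ingress_ports": [80, 443], "tag": "prod"},
            "app-vm": {"size": "m5.large", "count": 2},
        }
        return observed.get(name, {})

    def reconcile(name: str, want: dict) -> None:
        have = fetch_observed_state(name)
        for key, value in want.items():
            if have.get(key) != value:
                # Placeholder for the imperative change the tool would derive.
                print(f"{name}: change {key} from {have.get(key)!r} to {value!r}")

    for resource, spec in desired_state.items():
        reconcile(resource, spec)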

Gartner notes that many of these integrated management tool sets are still in the early days of truly centralized, end-to-end hybrid management capabilities. Infrastructure and operations teams should also use workflow visualization and dependency mapping, and create internal centers of excellence, to combat hybrid IT challenges.

Scalable DevOps

For many IT shops, DevOps implementation is a significant priority. For those that have a DevOps foundation in place, the next big challenge is to ensure it can scale.

It’s common practice for enterprises to get started with DevOps via pilot or incubation programs. And while there’s nothing wrong with this approach, it could impede IT leaders’ ability to enforce DevOps as a broad, cross-functional practice throughout the organization. A DevOps team, or even multiple DevOps teams, can sprout up and work as silos within the organization — perpetuating the very barriers DevOps sought to break down in the first place.

Shared self-service platforms are one way to ensure a DevOps practice can scale, remain responsive and span different functional groups, Winser said. Think of them as a digital toolbox from which DevOps teams, including infrastructure and operations practitioners, can access and explore tools relevant to their role. A shared tool set promotes consistency and common best practices throughout the organization.

In addition, site reliability engineers — who work with application developers to ensure highly reliable systems — can promote scalability within DevOps shops. Automation also plays a role.  

Socan’s Shah sees DevOps as an option to transition his team out of “fire-fighting mode” and into a position where they work hand in hand with developers and business stakeholders. DevOps, in combination with streamlined hybrid IT management and infrastructure automation, could bring the team out of a technology-focused mindset and onto a platform-ops approach, where developers interact with the infrastructure via API-driven interfaces as opposed to sending IT tickets to build things.


Amazon tech VP lays out ambitious AWS storage vision

LAS VEGAS — There appears to be no end in sight to the ambitious vision of AWS storage, especially when it comes to file systems.

During an interview with TechTarget, Amazon VP of technology Bill Vass said AWS aims to “enable every customer to be able to move to the cloud.” For example, Amazon could offer any of the approximately 35 file systems that its enterprise customers use, under the FSx product name, based on customer demand, Vass said. FSx stands for File System x, where the “x” can be any file system. AWS launched the first two FSx options, for Lustre and Windows file systems, at its November 2018 Re:Invent conference.

(Editor’s note: Vass said during the original interview that AWS will offer all 35 file systems over time. After the article published, Vass contacted us via email to clarify his statement. He wrote: “FSx is built to offer any type of file system from any vendor. I don’t want it to seem that we have committed to all 35, just that we can if customers want it.”)

AWS cannot support nearly three dozen file systems overnight, but Vass highlighted a new storage feature coming in 2020: a central storage management console similar to the AWS Backup option that unifies backups.

Vass has decision-making oversight over all AWS storage products (except Elastic Block Storage), as well as quantum computing, IoT, robotics, CloudFormation, CloudWatch monitoring, system management, and software-defined infrastructure. Vass has held CEO, COO, CIO, CISO and CTO positions for startups and Fortune 100 companies, as well as the federal government. Before joining Amazon more than five years ago, Vass was president and CEO of Liquid Robotics, which designs and builds autonomous robots for the energy, shipping, defense, communications, scientific, intelligence and environmental industries.

How has the vision for AWS storage changed since the object-based Simple Storage Service (S3) launched in 2006?

Amazon VP of technology Bill Vass with AWS Snowball appliance.

Bill Vass: Originally, it was very much focused on startups, developers and what we call webscale or web-facing storage. That’s what S3 was all about. Then as we grew in the governments and enterprises, we added things like [write once read many] WORM, [recovery point objective] RPO for cross-region replication, lifecycle management, intelligent tiering, deep archive. We were the first to have high-performance, multi-[availability zone] AZ file systems. Block storage has continued to be a mainstay for databases and things like that. We launched the first high-performance file system that will rival anything on prem with FSx for [high-performance computing] HPC. So, we ran Lustre in there. And Lustre gives you microsecond latency, 100 gigabits per thread, connected directly to your CPU.

The other thing we did at Re:Invent [2018] was the FSx for SMB NTFS Windows. At Re:Invent this year, we launched the ability to replicate that to one, two or three AZs. They added a bunch of extra features to it. But, you can expect us with FSx to offer other file systems as well. There’s about 35 different file systems that enterprises use. We can support many – really anything with FSx. But we will roll them out in order of priority by customer demand.

What about Amazon Elastic File System?

Vass: Elastic File System, which is our NFS 4 file system, has got single-digit millisecond response. That is actually replicating across three separate buildings with three different segments, striping it multiple times. EFS is an elastic multi-tenant file system. FSx is a single-tenant file system. To get microsecond latency, you have to be right there next to the CPU. You can’t have microsecond latency if you’re striping across three different buildings and then acknowledging that.

Do you plan to unify file storage? Or, do you plan to offer a myriad of choices?

Vass: Certainly, they’re all unified and can interoperate with each other. FSx, S3, intelligent tiering, all that kind of stuff, and EFS all work together. That’s already there. However, we don’t think file systems are one size fits all. There’s 35 different file systems, and the point of FSx is to let people have many choices, just like we have with databases or with CPUs or anything like this. You can’t move a load that’s running on GPFS into AWS without making changes for it. So you’d want to offer that as a file system. You can’t move an HPC load without something like FSx Lustre. You can’t move your Windows Home directories into AWS without FSx for Windows. And I would just expect more and more features around EFS, more and more features on S3, more and more features around FSx with more and more options for file systems.

So, you don’t envision unifying file storage.

Vass: There will be a central storage management system coming out where you’ll see it just like we have a central backup system now. So, they’ll be unified at that level. There’ll be a time when you’ll be able to access things with SMB, NFS and object in the same management console and on the same storage in the future. But that’s not really unified, right? Because you still want to have the single-tenant operating environment for your Windows. Microsoft does proprietary extensions on top of SMB, so you’ll need to run Windows underneath that. You can run something like [NetApp] OnTap, which also runs on AWS, by the way. And it does a great job of emulating NFS 4, 3, and SMB. But it’s never going to be 100% Windows compatible. So for that, you’re still going to want to run the Windows-native environment underneath.

I’d love to have one solution that did it all, but when you do that, what you usually end up with is something that does everything, but none of it well. So, you’re still going to want to have your high-performance object storage, block storage, elastic file systems and your single-tenant file systems for the foreseeable future. They’ll all interoperate with each other. They all get 11 nines durability by snapshotting or direct storing. You’re still going to have your archive storage. You don’t really want an archive system that operates the same as the file system or an object system.

How will the management console work to manage all the systems?

Vass: Since we unified backups with AWS Backup, you can take a look at that one place where we’re backing everything up in AWS. Now, we haven’t turned every service on. There’s actually 29 stateful stores in AWS. So, what we’re doing with backup is adding them one after another until they’re all there. You go to one place to back everything up.

We’ll add a storage management console. Today, you would go to an S3 console, an FSx console, an EFS console and a relational database console, then an Aurora console, then an EBS console. There’ll be one system management console that will let you see everything in one place and one place where you’ll be able to manage all of it as well. That’s scheduled for some time next year.
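
The fragmentation Vass describes is easy to see from the API side: even a simple storage inventory today means a separate client and call per service. Below is a rough sketch using boto3, assuming AWS credentials are configured; it is illustrative only, not the planned unified console.

    # Today's per-service view: one client and one call per storage service.
    import boto3

    s3 = boto3.client("s3")
    fsx = boto3.client("fsx")
    efs = boto3.client("efs")

    inventory = {
        "s3_buckets": [b["Name"] for b in s3.list_buckets()["Buckets"]],
        "fsx_file_systems": [f["FileSystemId"] for f in fsx.describe_file_systems()["FileSystems"]],
        "efs_file_systems": [f["FileSystemId"] for f in efs.describe_file_systems()["FileSystems"]],
    }

    for service, resources in inventory.items():
        print(service, len(resources))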

I’ve been hearing from enterprise customers that it can get confusing and overwhelming to keep track of the breadth of AWS storage offerings.

Vass: Let me counter that. We listen to our customers, and I guarantee you at Re:Invent this year, each customer I met with, one of those services that we added was really important to them, because remember, we’re moving everything from on prem to the cloud. … There are customers that want NFS 3 still. There’s customers that want NFS 4. There’s customers that want SMB and NTFS. There’s customers that want object storage. There’s customers that want block storage. There’s customers that want backups. If we did just one, and we told everyone rewrite your apps, it would take forever for people to move.

The best thing people can do is get our roadmaps. We disclose our roadmaps under NDA to any customer that asks, and we’ll show them what’s coming and when it’s going to come so that they can have some idea if they’re planning and when we’re going to solve all of their problems. We’ve got 2.2 million customers, and all of them need something. And they have quite a variability of needs that we work to meet. So, it’s necessary to have that kind of innovation. And of course, we see things our customers do all the time.

So, AWS storage is basically going for the ocean and aiming to get every customer away from a traditional storage vendor.

Vass: I wouldn’t say it that way. I’d say we want to enable every customer to be able to use the cloud and Outpost and Snowball and Storage Gateway and all of our products so they can save money, be elastically scaling, have higher durability and better security than they usually do on prem.


New AWS cost management tool, instance tactics to cut cloud bills

LAS VEGAS — Amazon continuously rolls out new discounting programs and AWS cost management tools in an appeal to customers’ bottom lines and as a hedge against mounting competition from Microsoft and Google.

Companies have grappled with nasty surprises on their AWS bills for years, with the reasons attributed to AWS’ sheer complexity, as well as the runaway effect on-demand computing can engender without strong governance. It’s a thorny problem with a solution that can come in multiple forms.

To that end, the cloud giant released a number of new AWS cost management tools at re:Invent, including Compute Optimizer, which uses machine learning to help customers right-size their EC2 instances.

At the massive re:Invent conference here this week, AWS customers discussed how they use both AWS-native tools and their own methods to get the most value from their cloud budgets.

Ride-sharing service Lyft has committed to spend at least $300 million on AWS cloud services between the beginning of this year and the end of 2021.

Lyft, like rival Uber, saw a hockey stick-like growth spurt in recent years, going from about 50 million rides in 2015 to more than 350 million a few years later. But its AWS cost management needed serious work, said Patrick Valenzuela, engineering manager.

An initial effort to wrangle AWS costs resulted in a spreadsheet, powered by a Python script, that divided AWS spending by the number of rides given to reach an average figure. The spreadsheet also helped Lyft rank engineering teams according to their rate of AWS spending, which had a gamification effect as teams competed to do better, Valenzuela said in a presentation.
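
A minimal sketch of that first-generation approach, reconstructed from the description above: sum spend per team from a cost report and divide by rides. The column names and ride counts are hypothetical stand-ins, not Lyft's actual report schema.

    # Hypothetical cost-per-ride calculation in the spirit of Lyft's first
    # spreadsheet: total spend per team divided by rides, ranked high to low.
    import csv
    from collections import defaultdict

    def cost_per_ride(report_path: str, rides_by_team: dict) -> list:
        spend = defaultdict(float)
        with open(report_path, newline="") as f:
            for row in csv.DictReader(f):
                spend[row["team"]] += float(row["unblended_cost"])  # assumed column names
        ranked = [(team, spend[team] / rides_by_team.get(team, 1)) for team in spend]
        return sorted(ranked, key=lambda item: item[1], reverse=True)

    # Example usage with made-up numbers:
    # cost_per_ride("cur_extract.csv", {"maps": 4_200_000, "payments": 3_900_000})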

Within six months, Lyft managed to drop the AWS cost-per-ride figure by 40%. But it needed more, such as fine-grained data sets that could be probed via SQL queries. Other factors, such as discounts and the cost of AWS Reserved Instances, weren’t always reflected transparently in the AWS-provided cost usage reports used to build the spreadsheet.

Lyft subsequently built a second-generation tool that included a data pipeline fed into a data warehouse. It created a reporting and dashboard layer on top of that foundation. The results have been promising. Earlier this year, Lyft found it was now spending 50% less on read/writes for its top 25 DynamoDB tables and also saved 50% on spend related to Kubernetes container migrations.

 “If you want to learn more about AWS, I recommend digging into your bill,” Valenzuela said.

AWS cost management a perennial issue

While there are plenty of cloud cost management tools available in addition to the new AWS Compute Optimizer, some AWS customers take a proactive approach to cost savings, compared to using historical analysis to spot and shed waste, as Lyft did in the example presented at re:Invent.

Privately held mapping data provider Here Technologies serves 100 million motor vehicles and collects 28 TB of data each day. Companies have a choice in the cloud procurement process — one being to force teams through rigid sourcing activities, said Jason Fuller, head of cloud management and operations at Here.

“Or, you let the builders build,” he said during a re:Invent presentation. “We let the builders build.”

Still, Here had developed a complex landscape on AWS, with more than 500 accounts that collectively spun up more than 10 million EC2 instances a year. A few years ago, Here began a concerted effort to adopt AWS Reserved Instances in a programmatic manner, hoping to squeeze out waste.

Reserved Instances carry contract terms of up to three years and offer substantial savings over on-demand pricing. Here eventually moved nearly 80% of its EC2 usage into Reserved Instances, which gave it about 50% off the on-demand rate, Fuller said.

The results have been impressive. During the past three-and-a-half years, Here saved $50 million and avoided another $150 million in costs, Fuller said.

Salesforce is another heavy user of AWS. It signed a $400 million infrastructure deal with AWS in 2016 and the companies have since partnered on other areas. Based on its 2017 acquisition of Krux, Salesforce now offers Audience Studio, a data management platform that collects and analyzes vast amounts of audience information from various third-party sources. It’s aimed at marketers who want to run more effective digital advertising campaigns.

Audience Studio handles 200,000 user queries per second, supported by 2,500 Elastic MapReduce Clusters on AWS, said Alex Estrovitz, director of software engineering at Salesforce.

“That’s a lot of compute, and I don’t think we’d be doing it cost-effectively without using [AWS Spot Instances],” Estrovitz said in a re:Invent session. More than 85% of Audience Studio’s infrastructure uses Spot Instances, which are made up of idle compute resources on AWS and cost up to 90% less than on-demand pricing.

But Spot Instances are best suited for jobs like Audience Studio’s, where large amounts of data get parallel-processed in batches across large pools of instances. Spot Instances are ephemeral; AWS can shut them down on short notice when the system needs resources for other customer jobs. However, customers like Salesforce can buy Spot Instances based on their application’s degree of tolerance for interruptions.
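
As a rough sketch of that batch pattern, the snippet below requests an EMR cluster whose worker nodes run on Spot capacity via boto3. The job name, roles, instance types and counts are placeholders, not Salesforce's configuration.

    # Hedged sketch: batch EMR cluster with interruption-tolerant workers on Spot.
    import boto3

    emr = boto3.client("emr")

    response = emr.run_job_flow(
        Name="audience-batch-example",        # hypothetical job name
        ReleaseLabel="emr-5.29.0",
        JobFlowRole="EMR_EC2_DefaultRole",    # assumes the default EMR roles exist
        ServiceRole="EMR_DefaultRole",
        Instances={
            "InstanceGroups": [
                {"Name": "driver", "InstanceRole": "MASTER", "Market": "ON_DEMAND",
                 "InstanceType": "m5.xlarge", "InstanceCount": 1},
                # Workers tolerate interruption, so they go on Spot for the discount.
                {"Name": "workers", "InstanceRole": "CORE", "Market": "SPOT",
                 "InstanceType": "m5.xlarge", "InstanceCount": 10},
            ],
            "KeepJobFlowAliveWhenNoSteps": False,
        },
    )
    print(response["JobFlowId"])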

Salesforce has achieved 48% savings overall since migrating Audience Studio to Spot Instances, Estrovitz said. “If you multiply this over 2,500 jobs every day, we’ve saved an immense amount of money.”


SageMaker Studio makes model building, monitoring easier

LAS VEGAS — AWS launched a host of new tools and capabilities for Amazon SageMaker, AWS’ cloud platform for creating and deploying machine learning models; drawing the most notice was Amazon SageMaker Studio, a web-based integrated development environment.

In addition to SageMaker Studio, the IDE for building, using and monitoring machine learning models, the other new AWS products aim to make it easier for non-expert developers to create models and to make them more explainable.

During a keynote presentation at the AWS re:Invent 2019  conference here Tuesday, AWS CEO Andy Jassy described five other new SageMaker tools: Experiments, Model Monitor, Autopilot, Notebooks and Debugger.

“SageMaker Studio along with SageMaker Experiments, SageMaker Model Monitor, SageMaker Autopilot and Sagemaker Debugger collectively add lots more lifecycle capabilities for the full ML [machine learning] lifecycle and to support teams,” said Mike Gualtieri, an analyst at Forrester.

New tools

SageMaker Studio, Jassy claimed, is a “fully-integrated development environment for machine learning.” The new platform pulls together all of SageMaker’s capabilities, along with code, notebooks and datasets, into one environment. AWS intends the platform to simplify SageMaker, enabling users to create, deploy, monitor, debug and manage models in one environment.

Google and Microsoft have similar machine learning IDEs, Gualtieri noted, adding that Google plans for its IDE to be based on DataFusion, its cloud-native data integration service, and to be connected to other Google services.

SageMaker Notebooks aims to make it easier to create and manage open source Jupyter notebooks. With elastic compute, users can create one-click notebooks, Jassy said. The new tool also enables users to more easily adjust compute power for their notebooks and transfer the content of a notebook.

Meanwhile, SageMaker Experiments automatically captures input parameters, configuration and results of developers’ machine learning models to make it simpler for developers to track different iterations of models, according to AWS. Experiments keeps all that information in one place and introduces a search function to comb through current and past model iterations.

AWS CEO Andy Jassy talks about new Amazon SageMaker capabilities at re:Invent 2019

“It is a much, much easier way to find, search for and collect your experiments when building a model,” Jassy said.

As the name suggests, SageMaker Debugger enables users to debug and profile their models more effectively. The tool collects and monitors key metrics from popular frameworks, and provides real-time metrics about accuracy and performance, potentially giving developers deeper insights into their own models. It is designed to make models more explainable for non-data scientists.

SageMaker Model Monitor also tries to make models more explainable by helping developers detect and fix concept drift, which refers to the evolution of data and data relationships over time. Unless models are updated in near real time, concept drift can drastically skew the accuracy of their outputs. Model Monitor constantly scans the data and model outputs to detect concept drift, alerting developers when it detects it and helping them identify the cause.
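
One simple way to picture drift detection, independent of Model Monitor's internals, is to compare a live feature's distribution against the training-time baseline and alert when they diverge. A minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy, with synthetic data standing in for real traffic:

    # Generic drift check: flag when live data no longer looks like the baseline.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # captured at training time
    live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # shifted production data

    stat, p_value = ks_2samp(baseline, live)
    if p_value < 0.01:
        print(f"possible drift detected (KS statistic {stat:.3f})")
    else:
        print("no significant drift")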

Automating model building

With Amazon SageMaker Autopilot, developers can automatically build models without, according to Jassy, sacrificing explainability.

Autopilot is “AutoML with full control and visibility,” he asserted. AutoML essentially is the process of automating machine learning modeling and development tools.

The new Autopilot module automatically selects the correct algorithm based on the available data and use case and then trains 50 unique models. Those models are then ranked by accuracy.
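
The workflow can also be driven through the SageMaker API. Below is a hedged sketch using boto3's create_auto_ml_job call; the bucket, role ARN, job name and target column are placeholders, and the caller's role is assumed to have SageMaker permissions.

    # Hedged sketch: start an Autopilot job and list the ranked candidates.
    import boto3

    sm = boto3.client("sagemaker")

    sm.create_auto_ml_job(
        AutoMLJobName="churn-autopilot-demo",   # hypothetical name
        InputDataConfig=[{
            "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix",
                                            "S3Uri": "s3://example-bucket/train/"}},
            "TargetAttributeName": "churned",   # hypothetical column to predict
        }],
        OutputDataConfig={"S3OutputPath": "s3://example-bucket/autopilot-output/"},
        RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    )

    # Once the job finishes, candidates come back ranked by their objective metric.
    candidates = sm.list_candidates_for_auto_ml_job(AutoMLJobName="churn-autopilot-demo")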

“AutoML is the future of ML development. I predict that within two years, 90 percent of all ML models will be created using AutoML by data scientists, developers and business analysts,” Gualtieri said.

SageMaker Autopilot is a must-have for AWS.
Mike Gualtieri, analyst, Forrester

“SageMaker Autopilot is a must-have for AWS, but it probably will help” other vendors also, including such AWS competitors as DataRobot because the AWS move further legitimizes the automated machine learning approach, he continued.

Other AWS rivals, including Google Cloud Platform, Microsoft Azure, IBM, SAS, RapidMiner, Aible and H2O.ai, also have automated machine learning capabilities, Gualtieri noted.

However, according to Nick McQuire, vice president at advisory firm CCS Insight, some of the  new AWS capabilities are innovative.

“Studio is a great complement to the other products as the single pane of glass developers and data scientists need and its incorporation of the new features, especially Model Monitor and Debugger, are among the first in the market,” he said.

“Although AWS may appear late to the game with Studio, what they are showing is pretty unique, especially the positioning of the IDE as similar to traditional software development with … Experiments, Debugger and Model Monitor being integrated into Studio,” McQuire said. “These are big jumps in the SageMaker capability on what’s out there in the market.”

Google also recently released several new tools aimed at delivering explainable AI, plus a new product suite, Google Cloud Explainable AI.


AWS Outposts brings hybrid cloud support — but only for Amazon

LAS VEGAS — AWS controls nearly half of the public IaaS market today and, judging by the company’s rules against use of the term ‘multi-cloud,’ would be happy to have it all, even as rivals Microsoft and Google make incremental gains and more customers adopt multi-cloud strategies.

That’s the key takeaway from the start of this year’s massive re:Invent conference here this week, which was marked by the release of AWS Outposts for hybrid clouds and a lengthy keynote from AWS CEO Andy Jassy that began with a tongue-in-cheek invite to AWS’ big tent in the cloud.

“You have to decide what you’re going to bring,” Jassy said of customers who want to move workloads into the public cloud. “It’s a little bit like moving from a home,” he added, as a projected slide comically depicted moving boxes affixed with logos for rival vendors such as Oracle and IBM sitting on a driveway.

“It turns out when companies are making this big transformation, what we see is that all bets are off,” Jassy said. “They reconsider everything.”

For several years now, AWS has used re:Invent as a showcase for large customers in highly regulated industries that have made substantial, if not complete, migrations to its platform. One such company is Goldman Sachs, which has worked with AWS on several projects, including Marcus, a digital banking service for consumers. A transaction banking service that helps companies manage their cash in a cloud-native stack on AWS is coming next year, said Goldman Sachs CEO David Solomon, who appeared during Jassy’s talk. Goldman is also moving its Marquee market intelligence platform into production on AWS.

Along with showcasing enthusiastic customers like Goldman Sachs, Jassy took a series of shots at the competition, some veiled and others overt.

“Every industry has lots of companies with mainframes, but everyone wants to move off of them,” he claimed. The same goes for databases, he added. Customers are trying to move away from Oracle and Microsoft SQL Server due to factors such as expense and lock-in, he said. Jassy didn’t mention that similar accusations have been lodged at AWS’ native database services.

Jassy repeatedly took aim at Microsoft, which has the second most popular cloud platform after AWS, albeit with a significant lag. “People don’t want to pay the tax anymore for Windows,” he said.

But it isn’t as if AWS would actually shun Microsoft technology, since it has long been a host for many Windows Server workloads. In fact, it wants as much as it can get. This week, AWS introduced a new bring-your-own-license program for Windows Server and SQL Server designed to make it easier for customers to run those licenses on AWS, versus Azure.

AWS pushes hybrid cloud, but rejects multi-cloud

One of the more prominent, although long-expected, updates this week is the general availability of AWS Outposts. These specialized server racks provided by AWS reside in customers’ own data centers, in order to comply with regulations or meet low-latency needs. They are loaded with a range of AWS software, are fully managed by AWS and maintain continuous connections to local AWS regions.

The company is taking the AWS Outposts idea a bit further with the release of new AWS Local Zones. These will consist of Outpost machines placed in facilities very close to large cities, giving customers who don’t want or have their own data centers, but still have low-latency requirements, another option. Local Zones, the first of which is in the Los Angeles area, provide this capability and tie back to AWS’ larger regional zones, the company said.

Outposts, AWS Local Zones and the previously launched VMware Cloud on AWS constitute a hybrid cloud computing portfolio for AWS — but you won’t hear Jassy or other executives say the phrase multi-cloud, at least not in public.

In fact, partners who want to co-brand with AWS are forbidden from using that phrase and similar verbiage in marketing materials, according to an AWS co-branding document provided to SearchAWS.com.

“AWS does not allow or approve use of the terms ‘multi-cloud,’ ‘cross cloud,’ ‘any cloud,’ ‘every cloud,’ or any other language that implies designing or supporting more than one cloud provider,” the co-branding guidelines, released in August, state. “In this same vein, AWS will also not approve references to multiple cloud providers (by name, logo, or generically).”

An AWS spokesperson didn’t immediately reply to a request for comment.

The statement may not be surprising in context of AWS’s market lead, but does stand in contrast to recent approaches by Google, with the Anthos multi-cloud container management platform, and Microsoft’s Azure Arc, which uses native Azure tools, but has multi-cloud management aspects.

AWS customers may certainly want multi-cloud capabilities, but they can protect themselves by using portable products and technologies such as Kubernetes at the lowest level, with the tradeoff being the manual labor involved, said Holger Mueller, an analyst with Constellation Research in Cupertino, Calif.

“To be fair, Azure and Google are only at the beginning of [multi-cloud],” he said.

Meanwhile, many AWS customers have apparently grown quite comfortable moving their IT estates onto the platform. One example is Cox Automotive, known for its digital properties such as Autotrader.com and Kelley Blue Book.

In total, Cox has more than 200 software applications, many of which it accrued through a series of acquisitions, and the company expects to move it all onto AWS, said Chris Dillon, VP of architecture, during a re:Invent presentation.

Cox is using the AWS Well-Architected Framework, a best practices tool for deployments on AWS, to manage the transition.

“When you start something new and do it quickly you always run the risk of not doing it well,” said Gene Mahon, director of engineering operations. “We made a decision early on that everything would go through a Well-Architected review.”


New Amazon Kendra AI search tool indexes enterprise data

LAS VEGAS — Amazon Kendra, a new AI-driven search tool from the tech giant, is designed to enable organizations to automatically index business data, making it easily searchable using keywords and context.

Revealed during a keynote by AWS CEO Andy Jassy at the re:Invent 2019 user conference here,  Kendra relies on machine learning and natural language processing (NLP) to bring enhanced search capabilities to on-premises and cloud-based business data. The system is in preview.

“Kendra is enterprise search technology,” said Forrester analyst Mike Gualtieri. “But, unlike enterprise search technology of the past, it uses ML [machine learning] to understand the intent of questions and return more relevant results.”

Cognitive search

Forrester, he said, calls this type of technology “cognitive search.” Recent leaders in that market, according to a Forrester Wave report Gualtieri helped write, include intelligent search providers Coveo, Attivio, IBM, Lucidworks, Mindbreeze and Sinequa. Microsoft was also ranked highly in the report, which came out in May 2019. AWS is a new entrant in the niche.

“Search is often an area customers list as being broken especially across multiple data stores whether they be databases, office applications or SaaS,” said Nick McQuire, vice president at advisory firm CCS Insight.

Unlike enterprise search technology of the past, [Kendra] uses ML to understand the intent of questions and return more relevant results.
Mike Gualtieri, analyst, Forrester

While vendors such as IBM and Microsoft have similar products, “the fact that AWS is now among the first of the big tech firms to step into this area illustrates the scale of the challenge” to bring a tool like this to market, he said.

During his keynote, Jassy touted the intelligent search capabilities of Amazon Kendra, asserting that the technology will “totally change the value of the data” that enterprises have.

Setup of Kendra appears straightforward. Organizations will start by linking their storage accounts and providing answers to some of the questions their employees frequently query their data about. Kendra then indexes all the provided data and answers, using machine learning and NLP to attempt to understand the data’s context.
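
Once an index exists, querying it is a single API call. Below is a hedged sketch using boto3's Kendra client; the index ID and question are placeholders, and the client is assumed to be available in your region.

    # Hedged sketch: ask a Kendra index a natural language question.
    import boto3

    kendra = boto3.client("kendra")

    response = kendra.query(
        IndexId="00000000-0000-0000-0000-000000000000",  # placeholder index ID
        QueryText="What is our parental leave policy?",
    )

    for item in response["ResultItems"][:3]:
        print(item["Type"], "-", item["DocumentTitle"]["Text"])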

Understanding context

“We’re not just indexing the keywords inside the document here,” Jassy said.

AWS CEO Andy Jassy announced Kendra at AWS re:Invent 2019

Meanwhile, Kendra is “an interesting move especially since AWS doesn’t really have a range of SaaS applications which generate a corpus of information that AI can improve for search,” McQuire said.

“But,” he continued, “this is part of a longer-term strategy where AWS has been focusing on specific business and industry applications for its AI.”

Jassy also unveiled new features for Amazon Connect, AWS’ omnichannel cloud contact center platform. With the launch of Contact Lens for Amazon Connect, users will be able to perform machine learning analytics on their customer contact center data. The platform will also enable users to automatically transcribe phone calls and intelligently search through them.

By mid-2020, Jassy said, Amazon Kendra will support real-time transcription and analysis of phone calls.


Salesforce acquisition of Tableau finally getting real

LAS VEGAS — It’s been more than five months since the Salesforce acquisition of Tableau was first revealed, but it’s been five months of waiting.

Even after the deal closed on Aug. 1, a regulatory review in the United Kingdom about how the Salesforce acquisition of Tableau might affect competition held up the integration of the two companies.

In fact, it wasn’t until last week, on Nov. 5, after the go-ahead from the U.K. Competition and Markets Authority (CMA) — exactly a week before the start of Tableau Conference 2019, the vendor’s annual user conference — that Salesforce and Tableau were even allowed to start speaking with each other. Salesforce’s big Dreamforce 2019 conference is Nov. 19-22.

Meanwhile, Tableau didn’t just stop what it was doing. The analytics and business intelligence software vendor continued to introduce new products and update existing ones. Just before Tableau Conference 2019, it rolled out a series of new tools and product upgrades.

Perhaps most importantly, Tableau revealed an enhanced partnership agreement with Amazon Web Services entitled Modern Cloud Analytics that will help Tableau’s many on-premises users migrate to the cloud.

Andrew Beers, Tableau’s chief technology officer, discussed the recent swirl of events in a two-part Q&A.

In Part I, Beers reflected on Tableau’s product news, much of it centered on new data management capabilities and enhanced augmented intelligence powers. In Part II, he discusses the Salesforce acquisition of Tableau and what the future might look like now that the $15.7 billion purchase is no longer on hold.

Will the Salesforce acquisition of Tableau change Tableau in any way?

Andrew Beers: It would be naïve to assume that it wouldn’t. We are super excited about the acceleration that it’s going to offer us, both in terms of the customers we’re talking to and the technology that we have access to. There are a lot of opportunities for us to accelerate, and as [Salesforce CEO] Marc Benioff was saying [during the keynote speech] on Wednesday, the cultures of the two companies are really aligned, the vision about the future is really aligned, so I think overall it’s going to mean analytics inside businesses is just going to move faster.

Technologically speaking, are there any specific ways the Salesforce acquisition of Tableau might accelerate Tableau’s capabilities?

Andrew Beers

Beers: It’s hard to say right now. Just last week the CMA [order] was lifted. There was a big cheer, and then everyone said, ‘But wait, we have two conferences to put on.’

Have you had any strategic conversations with Salesforce in just the week or so since regulatory restrictions were lifted, even though Tableau Conference 2019 is this week and Salesforce Dreamforce 2019 is next week?

Beers: Oh sure, and a lot of it has been about the conferences of course, but there’s been some early planning on how to take some steps together. But it’s still super early.

Users, of course, fear somewhat that what they love about Tableau might get lost as a result of the Salesforce acquisition of Tableau. What can you say to alleviate their worries?

Beers: The community that Tableau has built, and the community that Salesforce has built, they’re both these really excited and empowered communities, and that goes back to the cultural alignment of the companies. As a member of the Tableau community, I would encourage people to be excited. To have two companies come together that have similar views on the importance of the community, the product line, the ecosystem that the company is trying to create, it’s exciting.

Is the long-term plan — the long-term expectation — for Tableau to remain autonomous under Salesforce?

We’ve gone into this saying that Tableau is going to continue to operate as Tableau, but long-term, I can’t answer that question. It’s really hard for anyone to say.
Andrew Beers, chief technology officer, Tableau

Beers: We’ve gone into this saying that Tableau is going to continue to operate as Tableau, but long-term, I can’t answer that question. It’s really hard for anyone to say.

From a technological perspective, as a technology officer, what about the Salesforce acquisition of Tableau excites you — what are some things that Salesforce does that you can’t wait to get access to?

Beers: Salesforce spent the past 10 or so years changing into a different company, and I’m not sure a lot of people noticed. They went from being a CRM company to being this digital-suite-for-the-enterprise company, so they’ve got a lot of interesting technology. Just thinking of analytics, they’ve built some cool stuff with Einstein. What does that mean when you bring it into the Tableau environment? I don’t know, but I’m excited to find out. They’ve got some interesting tools that hold their whole ecosystem together, and I’m interested in what that means for analysts and for Tableau. I think there are a lot of exciting technology topics ahead of us.

What about conversations you might have with Salesforce technology officers, learning from one another. Is that exciting?

Beers: It’s definitely exciting. They’ve been around — a lot of that team has different experience than us. They’re experienced technology leaders in this space and I’m definitely looking forward to learning from their wisdom. They have a whole research group that’s dedicated to some of their longer term ideas, so I’m looking forward to learning from them.

You mentioned Einstein Analytics — do Tableau and Einstein conflict? Are they at odds in any way, or do they meld in a good way?

Beers: It’s still early days, but I think you’re going to find that they’re going to meld in a good way.

What else can you tell the Tableau community about what the future holds after the Salesforce acquisition of Tableau?

Beers: We’re going to keep focused on what we’ve been focusing on for a long time. We’re here to bring interesting innovations to market to help people work with their data, and that’s something that’s going to continue. You heard Marc Benioff and [Tableau CEO Adam Selipsky] talk about their excitement around that [during a conference keynote]. Our identity as a product and innovation company doesn’t change, it just gets juiced by this. We’re ready to go — after the conferences are done.


Tableau analytics platform upgrades driven by user needs

LAS VEGAS — Tableau revealed a host of additions and upgrades to the Tableau analytics platform in the days both before and during Tableau Conference 2019.

Less than a week before its annual user conference, the vendor released Tableau 2019.4, a scheduled update of the Tableau analytics platform. And during the conference, Tableau unveiled not only new products and updates to existing ones, but also an enhanced partnership with Amazon Web Services to help users move to the cloud and a new partner network.

Many of the additions to the Tableau analytics platform have to do with data management, an area Tableau only recently began to explore. Among them are Tableau Catalog and Prep Conductor.

Others, meanwhile, are centered on augmented analytics, including Ask Data and Explain Data.

All of these enhancements to the Tableau analytics platform come in the wake of the news last June that Tableau was acquired by Salesforce, a deal that closed on Aug. 1 but was held up until just last week by a regulatory review in the United Kingdom looking at what effect the combination of the two companies would have on competition.

In a two-part Q&A, Andrew Beers, Tableau’s chief technology officer, discussed the new and enhanced products in the Tableau analytics platform as well as how Tableau and Salesforce will work together.

Part I focuses on data management and AI products in the Tableau analytics platform, while Part II centers on the relationship between Salesforce and Tableau.

Data management has been a theme of new products and upgrades to the Tableau analytics platform — what led Tableau in that direction?

Andrew Beers

Andrew Beers: We’ve been about self-service analysis for a long time. Early themes out of the Tableau product line were putting the right tools in the hands of the people that were in the business that had the data and had the questions, and didn’t need someone standing between them and getting the answers to those questions. As that started to become really successful, then you had what happens in every self-service culture — dealing with all of this content that’s out there, all of this data that’s out there. We helped by introducing a prep product. But then you had people that were generating dashboards, generating data sets, and then we said, ‘To stick to our belief in self-service we’ve got to do something in the data management space, so what would a user-facing prep solution look like, an operationalization solution look like, a catalog solution look like?’ And that’s what started our thinking about all these various capabilities.

Along those lines, what’s the roadmap for the next few years?

Beers: We always have things that are in the works. We are at the beginning of several efforts — Tableau Prep is a baby product that’s a year and a half old. Conductor is just a couple of releases old. You’re going to see a lot of upgrades to those products and along those themes — how do you make prep easier and more approachable, how do you give your business the insight into the data and how it is being used, and how do you manage it? That’s tooling we haven’t built out that far yet. Once you have all of this knowledge and you’ve given people insights, which is a key ingredient in governance along with how to manage it in a self-service way, you’ll start to see the Catalog product grow into ideas like that.

Are these products something customers asked for, or are they products Tableau decided to develop on its own?

Beers: It’s always a combination. From the beginning we’ve listened to what our customers are saying. Sometimes they’re saying, ‘I want something that looks like this,’ but often they’re telling us, ‘Here is the kind of problem we’re facing, and here are the challenges we’re facing in our organization,’ and when you start to hear similar stories enough you generalize that the customers really need something in this space. And this is really how all of our product invention happens. It’s by listening to the intent behind what the customer is saying and then inventing the products or the new capabilities that will take the customer in a direction we think they need to go.

Shifting from data management to augmented intelligence, that’s been a theme of another set of products. Where did the motivation come from to infuse more natural language processing and machine learning into the Tableau analytics platform?

Beers: It’s a similar story here, just listening to customers and hearing them wanting to take the insights that their more analyst-style users got from Tableau to a larger part of the organization, which always leads you down the path of trying to figure out how to add more intelligence into the product. That’s not new for Tableau. In the beginning we said, ‘We want to build this tool for everyone,’ but if I’m building it for everyone I can’t assume that you know SQL, that you know color design, that you know how to tell a good story, so we had to build all those in there and then let users depart from that. With these smart things, it’s how can I extend that to letting people get different kinds of value from their question. We have a researcher in the NLP space who was seeing these signals a while ago and she started prototyping some of these ideas about how to bring natural language questioning into an analytical workspace, and that really inspired us to look deeply at the space and led us to think about acquisitions.

What’s the roadmap for Tableau’s AI capabilities?

With the way tech has been developing around things like AI and machine learning, there are just all kinds of new techniques that are available to us that weren’t mainstream enough 10 years ago to be pulling into the product.
Andrew Beers, chief technology officer, Tableau

Beers: You’re going to see these AI and machine learning-style capabilities really in every layer of the product stack we have. We showed two [at the conference] — Ask Data and Explain Data — that are very much targeted at the analyst, but you’ll see it integrated into the data prep products. We’ve got some smarts in there already. We’ve added Recommendations, which is how to take the wisdom of the crowd, of the people that are at your business, to help you find things that you wouldn’t normally find or help you do operations that you yourself haven’t done yet but that your community around have done. You’re going to see that all over the product in little ways to make it easier to use and to expand the kinds of people that can do those operations.

As a technology officer, how fun is this kind of stuff for you?

Beers: It’s really exciting. It’s all kinds of fun things that we can do. I’ve always loved the mission of the company, how people see and understand data, because we can do this for decades. There’s so much interesting work ahead of us. As someone who’s focused on the technology, the problems are just super interesting, and I think with the way tech has been developing around things like AI and machine learning, there are just all kinds of new techniques that are available to us that weren’t mainstream enough 10 years ago to be pulling into the product.
