
New themed Amazon pop-up stores built on consumer data

After closing some 90 pop-up stores over the course of last year, Amazon appears ready to take another stab at the concept, with plans to open a chain of themed pop-up stores whose inventory will be swapped out regularly as themes rotate.

The company has established, or is in the process of establishing, five Amazon pop-up stores this year in or around major metropolitan areas including Las Vegas, Los Angeles, Denver, Houston and Chicago. The sixth location will be in Seattle, next door to Amazon’s corporate headquarters and an Amazon 4-star store, as the company continues its experiment to find the right mix of physical locations. According to Amazon’s website, the new retail stores will serve as “physical extensions of Amazon.com.”

One example of a theme in the Las Vegas store is a focus on cameras. Other themes that have been explored in Amazon pop-up stores include Barbie’s 50th anniversary, Marvel’s Avengers, an Audible reading room, the Food Network and a holiday toy list.

Amazon stores built on consumer data

Amazon’s themed physical stores add to the 26 Amazon Go locations in place or being renovated, 22 Amazon Books stores, 18 Amazon 4-star stores, two AmazonFresh Pickup stores and hundreds of Whole Foods stores. In the next month or two Amazon is set to debut a new chain of grocery stores in the Los Angeles area.

“Amazon is continually iterating with its physical locations, so it will be interesting to see where they end up landing with these different formats,” said Thomas O’Connor, a senior director with Gartner. “They can leverage all the data collected in these stores to more clearly see where there is an opportunity [to] further scale out.  Also, it is another opportunity to go after shoppers who don’t yet have Amazon Prime memberships.”

Another analyst agreed that data, again, will play an integral role in the potential success of the latest Amazon pop-up stores. Not only can Amazon collect more specific data on what customers prefer in certain locations, but it can also combine the data it already has about what customers in certain zip codes might prefer with data collected as part of its 4-star store launches.

This fits the method of operation Jeff Bezos has of taking data and not being afraid to experiment. That’s what these themed pop-up stores say to me.
Guy Courtin, former vice president of industry strategy, Infor

“This fits the method of operation [Amazon CEO Jeff] Bezos has of taking data and not being afraid to experiment; that’s what these themed pop-up stores say to me,” said Guy Courtin, a former vice president of industry strategy at Infor. “He’ll use the demographic data in those areas he wants to put in (a pop-up store), and if it does well then great, he’ll milk those revenues. If it doesn’t do well, he will pull the plug quickly. It’s a bit like the Halloween stores that pop up for Halloween season and then they’re gone,” he said.

The new pop-up stores remind Courtin of the kiosks companies such as AT&T and Verizon set up in malls to sign up random customers for their respective cellular services, only Amazon is looking to sign up customers for Prime memberships, products and services.

“Once they get you in the store, they are looking to sell you on [Amazon] Prime giving you access to their streaming video and music services, along with whatever themed products they have in a particular store,” Courtin said. “They [Amazon] are masters at locating and capturing new revenue streams.”

Amazon’s themed pop-ups give malls hope

With many mall management companies desperate for revenues from renters, Courtin and other analysts believe Amazon’s pop-up stores will be welcome additions — even if they only stay for a few months at a time and continually swap out inventories with every “theme” change.

“Mall management companies are losing their big anchor tenants like a Sears and others,” Courtin said. “If I’m a mall management company and can get Amazon in there for even two or three months, not only will Amazon benefit, but a dozen other stores right next to the Amazon stores will benefit. Also, it gives mall management companies the opportunity to look more modern to have a giant retailer in their location,” he said.

According to the company’s latest earnings report, physical stores account for about 6% of Amazon’s $70 billion in quarterly revenue.

Amazon officials declined to provide comment for this story.


AWS leak exposes passwords, private keys on GitHub

An Amazon Web Services engineer uploaded sensitive data to a public GitHub repository that included customer credentials and private encryption keys.

Cybersecurity vendor UpGuard earlier this month found the exposed GitHub repository within 30 minutes of its creation. UpGuard analysts discovered the AWS leak, which was slightly less than 1 GB and contained log files and resource templates that included hostnames for “likely” AWS customers.

“Of greater concern, however, were the many credentials found in the repository,” UpGuard said in its report Thursday. “Several documents contained access keys for various cloud services. There were multiple AWS key pairs, including one named ‘rootkey.csv,’ suggesting it provided root access to the user’s AWS account.”

The AWS leak also contained a file for an unnamed insurance company that included keys for email and messaging providers, as well as other files containing authentication tokens and API keys for third-party providers. UpGuard’s report did not specify how many AWS customers were affected by the leak.

UpGuard said GitHub’s token scanning feature, which is opt-in, could have detected and automatically revoked some of the exposed credentials in the repository, but it’s unclear how quickly detection would have occurred. The vendor also said the token scanning tool would not have been able to revoke exposed passwords or private keys.
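Scanners of this kind work by matching known credential formats against repository contents. The sketch below illustrates the idea in Python; the `AKIA`/`ASIA` access-key-ID prefixes and the 40-character secret-key shape are documented AWS formats, but the scanner itself is a toy illustration, not GitHub's implementation, and the simple secret-key pattern will produce false positives.

```python
import re

# AWS access key IDs are 20 characters beginning with a known prefix
# such as "AKIA" (long-term keys) or "ASIA" (temporary keys).
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")
# Secret access keys are 40 base64-like characters; this loose pattern
# will also match other 40-character strings (false positives).
SECRET_KEY_RE = re.compile(r"\b[0-9a-zA-Z/+]{40}\b")

def scan_for_aws_keys(text):
    """Return (kind, match) pairs for strings that look like AWS credentials."""
    hits = []
    for m in ACCESS_KEY_RE.finditer(text):
        hits.append(("access_key_id", m.group(0)))
    for m in SECRET_KEY_RE.finditer(text):
        hits.append(("secret_access_key", m.group(0)))
    return hits
```

Run against the contents of a file like the leaked “rootkey.csv”, a scanner of this shape flags both halves of a key pair, which is what allows detection (and, for opted-in formats, automatic revocation) within minutes of a push.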

The documents in the AWS leak also bore the hallmarks of an AWS engineer, and some of the documents included the owner’s name. UpGuard said it found a LinkedIn profile for an AWS engineer that matched the owner’s exact full name, and the role matched the types of data found in the repository; as a result, the vendor said it was confident the owner was an AWS engineer.

While it’s unclear why the engineer uploaded such sensitive material to a public GitHub repository, UpGuard said there was “no evidence that the user acted maliciously or that any personal data for end users was affected, in part because it was detected by UpGuard and remediated by AWS so quickly.”

UpGuard said at approximately 11 a.m. on Jan. 13, its data leaks detection engine identified potentially sensitive information had been uploaded to the GitHub repository half an hour earlier. UpGuard analysts reviewed the documents and determined the sensitive nature of the data as well as the identity of the likely owner. An analyst contacted AWS’ security team at 1:18 p.m. about the leak, and by 4 p.m. public access to the repository had been removed. SearchSecurity contacted AWS for comment, but at press time the company had not responded.


AWS security faces challenges after a decade of dominance

Amazon Web Services has a stranglehold on the public cloud market, but the company’s dominance in cloud security is facing new challenges.

The world’s largest cloud provider earned a reputation over the last 10 years as an influential leader in IaaS security, thanks to products introduced in the earlier part of the decade, such as AWS Identity & Access Management and Key Management Service, as well as more recent developments in event-driven security. AWS security features helped the cloud service provider establish its powerful market position; according to Gartner, AWS in 2018 earned an estimated $15.5 billion in revenue for nearly 48% of the worldwide public IaaS market.

But at the re:Invent 2019 conference last month, many of the new security tools and features announced were designed to fix existing issues, such as misconfigurations and data exposures, rather than push AWS security to new heights. “There wasn’t much at re:Invent that I’d call security,” said Colin Percival, founder of open source backup service Tarsnap and an AWS Community Hero, via email. “Most of what people are talking about as security improvements address what I’d call misconfiguration risk.”

Meanwhile, Microsoft has not only increased its cloud market share but also invested heavily in new Azure security features that some believe rival AWS’ offerings. Rich Mogull, president and analyst at Securosis, said there are two sides to AWS security — the inherent security of the platform’s architecture, and the additional tools and products AWS provides to customers.

“In terms of the inherent security of the platform, I still think Amazon is very far ahead,” he said, citing AWS’ strengths such as availability zones, segregation, and granular identity and access management. “Microsoft has done a lot with Azure, but Amazon still has a multi-year lead. But when it comes to security products, it’s more of a mixed bag.”

Most of what people are talking about as [AWS] security improvements address what I’d call misconfiguration risk.
Colin Percival, founder, Tarsnap

Microsoft has been able to close the gap in recent years with the introduction of its own set of products and tools that compete with AWS security offerings, he said. “Azure Security Center and AWS Security Hub are pretty comparable, and both have strengths and weaknesses,” Mogull said. “Azure Sentinel is quite interesting and seems more complete than AWS Detective.”

New tools, old problems

Arguably the biggest AWS security development at re:Invent was a new tool designed to fix a persistent problem for the cloud provider: accidental S3 bucket exposures. The IAM Access Analyzer, which is part of AWS’ Identity and Access Management (IAM) console, alerts users when an S3 bucket is possibly misconfigured to allow public access via the internet and lets them block such access with one click.

AWS had previously made smaller moves, including changes to S3 security settings and interfaces, to curb the spate of high-profile and embarrassing S3 exposures in recent years. IAM Access Analyzer is arguably the strongest move yet to resolve the ongoing problem.
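The real IAM Access Analyzer uses automated reasoning over the full policy logic, but the core question it answers can be sketched simply: does any statement in a bucket policy allow access to an unrestricted principal? The toy check below (all policy content is hypothetical) illustrates the shape of that question.

```python
def find_public_statements(bucket_policy):
    """Return the Sids of statements that allow any principal ('*') access.

    A toy approximation of what a public-access check looks for; the
    actual Access Analyzer also reasons about conditions, ACLs and
    account boundaries rather than doing simple field comparisons.
    """
    public = []
    for stmt in bucket_policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_wildcard = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        # An unconditional Allow to everyone is the classic exposed-bucket shape.
        if stmt.get("Effect") == "Allow" and is_wildcard and "Condition" not in stmt:
            public.append(stmt.get("Sid", "<no sid>"))
    return public

# Hypothetical policy: one world-readable statement, one scoped to an account.
policy = {
    "Statement": [
        {"Sid": "PublicRead", "Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"},
        {"Sid": "TeamWrite", "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
         "Action": "s3:PutObject", "Resource": "arn:aws:s3:::example-bucket/*"},
    ]
}
```

Here only the first statement would be flagged; the one-click remediation the console offers then amounts to blocking public access on the bucket.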

“They created the S3 exposure issue, but they also fixed it,” said Jerry Gamblin, principal security engineer at vulnerability management vendor Kenna Security, which is an AWS customer. “I think they’ve really stepped up in that regard.”

Still, some AWS experts feel the tool doesn’t fully resolve the problem. “Tools like IAM Access Analyzer will definitely help some people,” Percival said, “but there’s a big difference between warning people that they screwed up and allowing people to make systems more secure than they could previously.”

Scott Piper, an AWS security consultant and founder of Summit Route in Salt Lake City, said, “It’s yet another tool in the toolbelt and it’s free, but it’s not enabled by default.”

There are other issues with IAM Access Analyzer. “With this additional information, you have to get that to the customer in some way,” Piper said. “And doing that can be awkward and difficult with this service and others in AWS like GuardDuty, because it doesn’t make cross-region communication very easy.”

For example, EC2 regions are isolated to ensure the highest possible fault tolerance and stability for customers. But Piper said the isolation presents challenges for customers using multiple regions because it’s difficult to aggregate GuardDuty alerts to a single source, which requires security teams to analyze “multiple panes of glass instead of one.”
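The aggregation step security teams end up scripting is conceptually simple: pull findings from each region, tag each with its source, and merge them into one severity-ordered view. A minimal sketch of that merge, using hypothetical finding records (real GuardDuty findings are larger dicts, but they do carry a numeric severity):

```python
def aggregate_findings(findings_by_region):
    """Merge per-region finding lists into one view, highest severity first.

    findings_by_region maps a region name to a list of finding dicts,
    each with at least a numeric 'severity' key. Tagging each finding
    with its region preserves the source after the lists are merged.
    """
    merged = []
    for region, findings in findings_by_region.items():
        for f in findings:
            merged.append({**f, "region": region})
    return sorted(merged, key=lambda f: f["severity"], reverse=True)
```

In a live setup the per-region lists would come from calling the GuardDuty API once per enabled region; the merge itself is the “single pane of glass” Piper describes.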

Metadata headaches

AWS recently addressed another security issue that became a high-profile concern for enterprises following the Capital One breach last summer. The attacker in that breach exploited an SSRF vulnerability to access the AWS metadata service for the company’s EC2 instances, which allowed them to obtain credentials contained in the service.

The Capital One breach led to criticism from security experts as well as lawmakers such as Sen. Ron Wyden (D-Ore.), who questioned why AWS hadn’t addressed SSRF vulnerabilities for its metadata service. The lack of security around the metadata service has concerned some AWS experts for years; in 2016, Percival penned a blog post titled “EC2’s most dangerous feature.”

“I think the biggest problem Amazon has had in recent years — judging by the customers affected — is the lack of security around their instance metadata service,” Percival told SearchSecurity.

In November, AWS made several updates to the metadata service to prevent unauthorized access, including the option to turn off access to the service altogether. Mogull said the metadata service update was crucial because it improved security around AWS account credentials.

But like other AWS security features, the metadata service changes are not enabled by default. Percival said enabling the update by default would’ve caused issues for enterprise applications and services that rely on the existing version of the service. “Amazon was absolutely right in making their changes opt-in since if they had done otherwise, they would have broken all of the existing code that uses the service,” he said. “I imagine that once more or less everyone’s code has been updated, they’ll switch this from opt-in to opt-out — but it will take years before we get to that point.”
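The hardened flow those updates introduced (commonly called IMDSv2) is token-based: a caller must first issue a PUT request for a session token, then present that token on every metadata read, a two-step shape that defeats most one-shot SSRF requests, which typically can issue only a single GET and cannot set headers. Since these requests only resolve on an EC2 instance, the sketch below shows them as data rather than executing them; the URL and header names are the documented ones.

```python
# Link-local address of the EC2 instance metadata service.
IMDS_BASE = ""

def token_request(ttl_seconds=21600):
    """First hop: PUT for a session token (the IMDSv2 requirement)."""
    return ("PUT", IMDS_BASE + "/latest/api/token",
            {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)})

def metadata_request(path, token):
    """Second hop: read metadata, presenting the session token."""
    return ("GET", IMDS_BASE + "/latest/" + path,
            {"X-aws-ec2-metadata-token": token})
```

Code still making bare GETs against the old (v1) endpoint is exactly the code that breaks once token-based access is required, which is why the change shipped opt-in.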

Percival also said the update is “incomplete” because it addresses common misconfigurations but not software bugs. (Percival is working on an open source tool that he says will provide “a far more comprehensive fix to this problem,” which he hopes to release later this month.)

Still, Piper said the metadata service update is an important step for AWS security because it showed the cloud provider was willing to acknowledge there was a problem with the existing service. That willingness and responsiveness hasn’t always been there in the past, he said.

“AWS has historically had the philosophy of providing tools to customers, and it’s kind of up to customers to use them and if they shoot themselves in the foot, then it’s the customers’ fault,” Piper said. “I think AWS is starting to improve and change that philosophy to help customers more.”

AWS security’s road ahead

While the metadata service update and IAM Access Analyzer addressed lingering security issues, experts highlighted other new developments that could strengthen AWS’ position in cloud security.

AWS Nitro Enclaves, for example, is a new EC2 capability introduced at re:Invent 2019 that allows customers to create isolated instances for sensitive data. The Nitro Enclaves, which will be available in preview this year, are virtual machines attached to EC2 instances but have CPU and memory isolation from the instances and can be accessed only through secure local connections.

“Nitro Enclaves will have a big impact for customers because of its isolation and compartmentalization capabilities” which will give enterprises’ sensitive data an additional layer of protection against potential breaches, Mogull said.

Percival agreed that Nitro Enclaves could possibly “raise the ceiling” for AWS security, though he cautioned against overestimating them. “Enclaves are famously difficult for people to use correctly, so it’s hard to predict whether they will make a big difference or end up being another of the many ‘Amazon also has this feature, which nobody ever uses’ footnotes.”

Experts also said AWS’ move to strengthen its ARM-based processor business could have major security implications. The cloud provider announced at re:Invent 2019 that it will be launching EC2 instances that run on its new, customized ARM chips, dubbed Graviton2.

Gamblin said the Graviton2 processors are a security play in part because of recent microprocessor vulnerabilities and side channel attacks like Meltdown and Spectre. While some ARM chips were affected by both Meltdown and Spectre, subsequent side channel attacks and Spectre variants have largely affected x86 processors.

“Amazon doesn’t want to rely on other chips that may be vulnerable to side channel attacks and may have to be taken offline and rebooted or suffer performance issues because of mitigations,” Gamblin said.

Percival said he was excited by the possibility of the cloud provider participating in ARM’s work on the “Digital Security by Design” initiative, a private-sector partnership with the UK that is focused in part on fundamentally restructuring — and improving — processor security. The results of that project will be years down the road, Percival said, but it would show a commitment from AWS to once again raising the bar for security.

“If it works out — and it’s a decade-long project, which is inherently experimental in nature — it could be the biggest step forward for computer security in a generation.”


AWS Outposts vs. Azure Stack vs. HCI

Giants Amazon and Microsoft offer cloud products and services that compete in areas usually reserved for the strengths that traditional hyper-converged infrastructure platforms bring to the enterprise IT table. These include hybrid cloud offerings AWS Outposts, which Amazon made generally available late last year, and Azure Stack from Microsoft.

An integrated hardware and software offering, Azure Stack is designed to deliver Microsoft Azure public cloud services to enable enterprises to construct hybrid clouds in a local data center. It delivers IaaS and PaaS for organizations developing web apps. By sharing its code, APIs and management portal with Microsoft Azure, Azure Stack provides a common platform to address hybrid cloud issues, such as maintaining consistency between cloud and on-premises environments. Stack is for those who want the benefits of a cloud-like platform but must keep certain data private due to regulations or some other constraint.

AWS Outposts is Amazon’s on-premises version of its IaaS offering. Amazon targets AWS Outposts at those who want to run workloads on Amazon Web Services, but instead of in the cloud, do so inside their own data centers to better meet regulatory requirements and, for example, to reduce latency.

Let’s delve deeper into AWS Outposts vs. Azure Stack to better see how they compete with each other and your typical hyper-converged infrastructure (HCI) deployment.

hybrid cloud storage use cases

What is AWS Outposts?

AWS Outposts is Amazon’s acknowledgment that most enterprise class organizations prefer hybrid cloud to a public cloud-only model. Amazon generally has acted solely as a hyperscale public cloud provider, leaving its customers’ data center hardware needs for other vendors to handle. With AWS Outposts, however, Amazon is — for the first time — making its own appliances available for on-premises use.

AWS Outposts customers can run AWS on premises. They can also extend their AWS virtual private clouds into their on-premises environments, so a single virtual private cloud can contain both cloud and data center resources. That way, workloads with low-latency or geographical requirements can remain on premises while other workloads run in the Amazon cloud. Because Outposts is essentially an on-premises extension of the Amazon cloud, it also aims to ease the migration of workloads between the data center and the cloud.

What is Microsoft Azure Stack?

Although initially marketed as simply a way to host Azure services on premises, Azure Stack has evolved into a portfolio of three products: Azure Stack Edge, Azure Stack Hub and Azure Stack HCI.

Azure Stack Edge is a cloud-managed appliance that enables you to run managed virtual machine (VM) and container workloads on premises. While this can also be done with Windows Server, the benefit to using Azure Stack Edge is workloads can be managed with a common tool set, whether they’re running on premises or in the cloud.

Azure Stack Hub is used for running cloud applications on premises. It’s mostly for situations in which data sovereignty is required or where connectivity isn’t available.

As its name implies, Azure Stack HCI is a version of Azure Stack that runs on HCI hardware.

AWS Outposts vs. Azure Stack vs. HCI

To appreciate how AWS Outposts competes with traditional HCI, consider common HCI use cases. HCI is often used as a virtualization platform. While AWS Outposts will presumably be able to host Elastic Compute Cloud virtual machine instances, the bigger news is that Amazon is preparing to release a VMware-specific version of Outposts in 2020. The VMware Cloud on AWS Outposts will allow a managed VMware software-defined data center to run on the Outposts infrastructure.

Organizations are also increasingly using HCI as a disaster recovery platform. While Amazon isn’t marketing Outposts as a DR tool, the fact that Outposts acts as a gateway between on-premises services and services running in the Amazon cloud means the platform will likely be well positioned as a DR enabler.

Many organizations have adopted hyper-converged systems as a platform for running VMs and containers. Azure Stack Edge may end up displacing some of those HCIs if an organization is already hosting VMs and containers in the Azure cloud. As for Azure Stack Hub, it seems unlikely that it will directly compete with HCI, except possibly in some specific branch office scenarios.

The member of the Azure Stack portfolio that’s most likely to compete with traditional hyper-convergence is Azure Stack HCI. It’s designed to run scalable VMs and provide those VMs with connectivity to Azure cloud services. These systems are being marketed for use in branch offices and with high-performance workloads.

Unlike first-generation HCI systems, Azure Stack HCI will provide scalability for both compute and storage. This could make it a viable replacement for traditional HCI platforms.

In summary, when it comes to AWS Outposts vs. Azure Stack or standard hyper-convergence, all three platforms have their merits, without any one being clearly superior to the others. If an organization is trying to choose between the three, then my advice would be to choose the platform that does the best job of meshing with the existing infrastructure and the organization’s operational requirements. If the organization already has a significant AWS or Azure footprint, then Outposts or Azure Stack would probably be a better fit, respectively. Otherwise, traditional HCI is probably going to entail less of a learning curve and may also end up being less expensive.


AWS AI tools focus on developers

AWS is the undisputed leader in the cloud market. As for AI, the cloud division of tech giant Amazon is also in a dominant position.

“Machine learning is at a place now where it is accessible enough that you don’t need Ph.Ds,” said Joel Minnick, head of product marketing for AI, machine learning and deep learning at AWS.

Partly, that’s due to a natural evolution of the technology, but vendors such as Google, AWS, IBM, DataRobot and others have made strides in making the process of creating and deploying machine learning and deep learning easier.

AWS AI

Over the last few years, AWS has invested heavily in making it easier for developers and engineers to create and deploy AI models, Minnick said, speaking with TechTarget at the AWS re:Invent 2019 user conference in Las Vegas in December 2019.

AWS’ efforts to simplify the machine learning lifecycle were on full display at re:Invent. During the opening keynote, led by AWS CEO Andy Jassy, AWS revealed new products and updates for Amazon SageMaker, AWS’ full-service suite of machine learning development, deployment and governance products.

Those products and updates included new and enhanced tools for creating and managing notebooks, automatically making machine learning models, debugging models and monitoring models.

SageMaker Autopilot, a new AutoML product, in particular, presents an accessible way for users who are new to machine learning to create and deploy models, according to Minnick.

In general, SageMaker is one of AWS’ most important products, according to a blog-post-styled report on re:Invent from Nick McQuire, vice president of enterprise research at CCS Insight. The report noted that AWS, due largely to SageMaker, its machine learning-focused cloud services, and a range of edge and robotics products, is a clear leader in the AI space.

“Few companies (if any) are outpacing AWS in machine learning in 2019,” McQuire wrote, noting that SageMaker alone received 150 updates since the start of 2018.

Developers for AWS AI

In addition to the SageMaker updates, AWS in December unveiled another new product in its Deep series: DeepComposer.

The product series, which also includes DeepLens and DeepRacer, is aimed at giving machine learning and deep learning newcomers a simplified and visual means to create specialized models.

Introduced in late 2017, DeepLens is a camera that enables users to run deep learning models on it locally. The camera, which is fully programmable with AWS Lambda, comes with tutorials and sample projects to help new users. It integrates with a range of AWS products and services, including SageMaker and its Amazon Rekognition image analysis service.

“[DeepLens] was a big hit,” said Mike Miller, director of AWS AI Devices at AWS.

DeepRacer, revealed the following year, enables users to apply machine learning models to radio controlled (RC) model cars and make them autonomously race along tracks. Users can build models in SageMaker and bring them into a simulated racetrack, where they can train the models before bringing them into a 1/18th scale race car.

An AWS racing league makes DeepRacer competitive, with AWS holding yearlong tournaments comprising multiple races. DeepRacer, Miller declared, has been exceedingly successful.

“Tons of customers around the world have been using DeepRacer to engage and upskill their employees,” Miller said.

Dave Anderson, director of technology at Liberty Information Technology, the IT arm of Liberty Mutual, said many people on his team take part in the DeepRacer tournaments.

“It’s a really fun way to learn machine learning,” Anderson said in an interview. “It’s good fun.”

Composing with AI

Meanwhile, DeepComposer, as the name suggests, helps train users on machine learning and deep learning through music. The product comes with a small keyboard that plugs into a PC, along with a set of pretrained music genre models. The keyboard itself isn’t unusual, but by using the models and accompanying software, users can automatically create and tweak fairly basic pieces of music within a few genres.

With DeepComposer, along with DeepLens and DeepRacer, “developers of any skill level can find a perch,” Miller said.

The products fit into Amazon’s overall AI strategy well, he said.

“For the last 20 years, Amazon has been investing in machine learning,” Miller said. “Our goal is to bring those same AI and machine learning techniques to developers of all types.”

The Deep products are just “the tip of the spear for aspiring machine learning developers,” Miller said. Amazon’s other products, such as SageMaker, extend that machine learning technology development strategy.

“We’re super excited to get more machine learning into the hands of more developers,” Miller said.


How Amazon HR influences hiring trends

Amazon is a powerhouse when it comes to recruiting. It hires at an incredible pace and may be shaping how other firms hire, pay and find workers. But it also offers a cautionary tale, especially in the use of AI.

Amazon HR faces a daunting task. The firm is adding thousands of employees each quarter through direct hiring and acquisitions. In the first quarter of 2019, it reported having 630,000 full- and part-time employees. By the third quarter, that number had risen 19% to 750,000 employees.

Amazon’s hiring strategy includes heavy use of remote workers or flex jobs, including a program called CamperForce. The program was designed for nomadic people who live full or part-time in recreational vehicles. They help staff warehouses during peak retail seasons.

Amazon’s leadership in remote jobs can be measured by FlexJobs, a site that specializes in connecting professionals to remote work. Amazon ranked sixth this year out of the 100 top companies with remote jobs. FlexJobs’ rankings are based on data from some 51,000 firms. The volume of job ads determines ranking.

The influence of large employers

Amazon’s use of remote work is influential, said Brie Reynolds, career development manager and coach at FlexJobs. There is “a lot of value in seeing a large, well-known company — a successful company — employing remote workers,” she said.

In April, Amazon CEO Jeff Bezos challenged other retailers to raise their minimum wage to $15, which is what Amazon did in 2018. “Better yet, go to $16 and throw the gauntlet back at us,” said Bezos, in his annual letter to shareholders.

But the impact of Amazon’s wage increase also raises questions.

“Amazon is such a large employer that increases for Amazon’s warehouse employees could easily have a large spillover effect raising wage norms among employers in similar industries and the same local area,” said Michael Reich, a labor market expert and a professor of economics at the University of California at Berkeley. But without more data from Amazon and other companies in the warehouse sector, he said it’s difficult to tell where the evidence falls.

Amazon HR’s experience with AI in recruiting may also be influential, but as a warning.

The warning from Amazon

In late 2018, Reuters reported that Amazon HR had developed an algorithm for hiring technical workers. But because it was trained on historical hiring data, the algorithm was recommending men over women. The technical workforce suffers from a large gender gap.

The Amazon experience “shows that all historical data contains an observable bias,” said John Sumser, principal analyst at HRExaminer. “In the Amazon case, utilizing historical data perpetuated the historical norm — a largely male technical workforce.”

Any AI built on anything other than historical data runs the distinct risk of corrupting the culture of the client, Sumser said.
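Sumser’s point about historical data perpetuating the historical norm can be seen in miniature: even the simplest model fit to skewed past outcomes reproduces the skew. The toy below uses fabricated counts purely for illustration; nothing in it reflects Amazon’s actual system.

```python
from collections import Counter

def fit_majority(history):
    """'Train' the simplest possible model: for each group, predict
    whatever outcome was most common in the historical records."""
    by_group = {}
    for group, outcome in history:
        by_group.setdefault(group, Counter())[outcome] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Hypothetical history: past hires skew heavily toward group A.
history = ([("A", "hire")] * 80 + [("A", "reject")] * 20
           + [("B", "hire")] * 20 + [("B", "reject")] * 30)
model = fit_majority(history)
```

The “model” learns to recommend group A and pass over group B, not because of anything about the candidates, but because that is what the past looked like, which is the observable bias Sumser describes.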

In July, Amazon said it would spend $700 million to upskill 100,000 U.S. workers through 2025. The training program amounts to about $1,000 a year per employee, which may be well below Amazon HR’s cost of hiring new employees.

They’re not taking advantage of the opportunity to be a role model.
Josh Bersin, independent HR analyst

In late 2018, Amazon HR’s talent acquisition team had more than 3,500 people. The company is interested in new HR tech and takes time to meet with vendors, said an Amazon recruiting official at the HR Technology Conference and Expo.

But Amazon, overall, doesn’t say much about its HR practices and that may be tempering the company’s influence, said Josh Bersin, an independent HR analyst.

Bersin doesn’t believe the industry is following Amazon. And part of his belief is due to the company’s Apple-like secrecy on internal operations, he said.

“I think people are interested in what they’re doing, and they probably are doing some really good things,” Bersin said. “But they’re not taking advantage of the opportunity to be a role model.”


AWS storage changes the game inside, outside data centers

The impact of Amazon storage on the IT universe extends beyond the servers and drives that store exabytes of data on demand for more than 2.2 million customers. AWS also influenced practitioners to think differently about storage and change the way they operate.

Since Amazon Simple Storage Service (S3) launched in March 2006, IT pros have re-examined the way they buy, provision and manage storage. Infrastructure vendors have adapted the way they design and price their products. That first AWS storage service also spurred a raft of technology companies, most notably Microsoft and Google, to focus on public clouds.

“For IT shops, we had to think of ourselves as just another service provider to our internal business customers,” said Doug Knight, who manages storage and server services at Capital BlueCross in central Pennsylvania. “If we didn’t provide good customer service, performance, availability and all those things that you would expect out of an AWS, they didn’t have to use us anymore.

“That was the reality of the cloud,” Knight said. “It forced IT departments to evolve.”

The Capital BlueCross IT department became more conscious of storing data on the "right" and most cost-effective systems to deliver whatever performance level the business requires, Knight said. The AWS alternative gives users myriad choices at differing price points: block, file and scale-out object storage; fast flash and slower spinning disk; and Glacier archives.

“We think more in the context of business problems now, as opposed to just data and numbers,” Knight said. “How many gigabytes isn’t relevant anymore.”

Capital BlueCross’ limited public cloud footprint consists of about 100 TB of a scale-out backup repository in Microsoft’s Azure Blob Storage and the data its software-as-a-service (SaaS) applications generate. Knight said the insurer “will never be in one cloud,” and he expects to have workloads in AWS someday. Knight said he has noticed his on-premises storage vendors have expanded their cloud options. Capital BlueCross’ main supplier, IBM, even runs its own public cloud, although Capital BlueCross doesn’t use it.

Expansion of consumption-based pricing

Facing declining revenue, major providers such as Dell EMC, Hewlett Packard Enterprise and NetApp introduced AWS-like consumption-based pricing to give customers the choice of paying only for the storage they use. The traditional capital-expense model often leaves companies overbuying storage as they try to project their capacity needs over a three- to five-year window.
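The overbuying problem is easy to quantify. With illustrative numbers (all hypothetical), compare buying for projected peak capacity up front against paying each month only for the capacity actually used:

```python
def capex_cost(projected_peak_tb, price_per_tb):
    """Traditional model: buy the projected peak capacity up front."""
    return projected_peak_tb * price_per_tb

def consumption_cost(monthly_usage_tb, price_per_tb_month):
    """Consumption model: pay each month only for what is used."""
    return sum(tb * price_per_tb_month for tb in monthly_usage_tb)

# Hypothetical 3-year window: usage grows linearly from 40 TB to 100 TB,
# but the capex buyer must provision 120 TB of headroom on day one.
usage = [40 + i * (60 / 35) for i in range(36)]   # TB in each month
capex = capex_cost(120, 300)                      # $300/TB purchase price
opex = consumption_cost(usage, 10)                # $10 per TB-month

print(round(capex), round(opex))  # 36000 25200
```

The gap narrows if usage grows faster than projected, which is exactly the forecasting risk the consumption model removes.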

While the mainstream vendors pick up AWS-like options, Amazon continues to bolster its storage portfolio with enterprise capabilities found in on-premises block-based SAN and file-based NAS systems. AWS added its Elastic Block Store (EBS) in August 2008 for applications running on Elastic Compute Cloud (EC2) instances. File storage took longer, with the Amazon Elastic File System (EFS) arriving in 2016 and FSx for Lustre and Windows File Server in 2018.

AWS ventured into on-premises hardware in 2015 with a Snowball appliance to help businesses ship data to the cloud. In late 2019, Amazon released Outposts hardware that gives customers storage, compute and database resources to build on-premises applications using the same AWS tools and services that are available in the cloud.

Amazon S3 API impact

Amid the ever-expanding breadth of offerings, it’s hard to envision any AWS storage option approaching the popularity and influence of the first one. Simple Storage Service, better known as S3, stores objects on cheap, commodity servers that can scale out in seemingly limitless fashion. Amazon did not invent object storage, but its S3 application programming interface (API) has become the de facto industry standard.

“It forced IT to look at redesigning their applications,” Gartner research vice president Julia Palmer said of S3.  

Amazon storage timeline: AWS storage has grown from the object-based Simple Storage Service (S3) to include block, file, archival and on-premises options.

Palmer said when she worked in engineering at GoDaddy, the Internet domain registrar and service provider designed its own object storage to talk to various APIs. But the team members gradually realized they would need to focus on the S3 API that everyone else was going to use, Palmer said.

Every important storage vendor now supports the S3 API to facilitate access to object storage. Palmer said that, although object systems haven’t achieved the level of success on premises that they have in the cloud, the idea that storage can be flexible, infinitely scalable and less costly by running on commodity hardware has had a dramatic impact on the industry.

“Before, it was file or block,” she said. “And that was it.”

Object storage use cases expand

Because of higher performance storage emerging in the cloud and on premises, object storage is expanding beyond the original backup and archiving use cases to workloads such as big data analytics. For instance, Pure Storage and NetApp sell all-flash hardware for object storage, and object software pioneer SwiftStack improves throughput through parallel I/O.
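The parallel I/O idea behind that throughput work, splitting a large object into byte ranges fetched concurrently and reassembled in order, can be sketched with stdlib primitives. The `fetch_range` function here is a stand-in for a real ranged GET against an object store:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_range(blob: bytes, start: int, end: int) -> bytes:
    """Stand-in for an HTTP ranged GET against an object store."""
    return blob[start:end]

def parallel_get(blob: bytes, chunk: int = 4, workers: int = 4) -> bytes:
    """Download an object as concurrent range requests, then reassemble."""
    ranges = [(i, min(i + chunk, len(blob))) for i in range(0, len(blob), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Executor.map preserves input order, so the parts rejoin correctly.
        parts = pool.map(lambda r: fetch_range(blob, *r), ranges)
    return b"".join(parts)

obj = bytes(range(16))
print(parallel_get(obj) == obj)  # True
```

Against a real store the win comes from overlapping network latency across requests; with an in-memory stand-in the threads only demonstrate the reassembly logic.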

Enrico Signoretti, a senior data storage analyst at GigaOm, said he fields calls every day from IT pros who want to use object storage for more use cases.

“Everyone is working to make object storage faster,” Signoretti said. “It’s growing like crazy.”

Major League Baseball (MLB) is trying to get its developers to move away from files and write to S3 buckets, as it plans a 10- to 20-PB open source Ceph object storage cluster. Truman Boyes, MLB’s SVP of infrastructure, said developers have been working with files for so long that it will take time to convince them that the object approach could be easier. 

“From an application designer’s perspective, they don’t have to think about how to have resilient storage. They don’t have to worry if they’ve copied it to the right number of places and built in all these mechanisms to ensure data integrity,” Boyes said. “It just happens. You talk to an API, and the API figures it out for you.”
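Boyes' point, that the API rather than the application owns replication and integrity, can be sketched with a toy object store. This is purely illustrative; real S3 semantics (versioning, multipart, consistency) are far richer:

```python
import hashlib

class ToyObjectStore:
    """Illustrative only: put() fans out to N replicas so the caller
    never thinks about copy counts, placement or integrity checks."""

    def __init__(self, n_replicas=3):
        self.facilities = [dict() for _ in range(n_replicas)]

    def put(self, key: str, data: bytes):
        checksum = hashlib.sha256(data).hexdigest()
        for facility in self.facilities:   # replication is the store's job
            facility[key] = (data, checksum)

    def get(self, key: str) -> bytes:
        for facility in self.facilities:   # survives loss of a replica
            if key in facility:
                data, checksum = facility[key]
                if hashlib.sha256(data).hexdigest() == checksum:
                    return data
        raise KeyError(key)

store = ToyObjectStore()
store.put("stats/game-001.json", b'{"inning": 9}')
store.facilities[0].clear()                # simulate losing one facility
print(store.get("stats/game-001.json"))    # b'{"inning": 9}'
```

The application code above never mentions copies; it just calls `put` and `get`, which is the mental shift MLB is asking of its developers.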

Ken Rothenberger, an enterprise architect at General Mills, said Amazon S3 object storage significantly influenced the way he thinks about data durability. Rothenberger said the business often mandates zero data loss, and traditional block storage requires the IT department to keep backups and multiple copies of data.

AWS storage challengers

By contrast, AWS S3 and Glacier stripe data across at least three facilities located 10 km to 60 km away from each other and provide 99.999999999% durability. Amazon technology VP Bill Vass said the 10 km distance is to withstand an F5 tornado that is 5 km wide, and the 60 km is for speed-of-light latency. “Certainly none of the other cloud providers do it by default,” Vass said.
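The 11-nines figure falls out of independent replicas: if each facility loses an object in a year with small probability p, losing all three copies happens with probability roughly p³. A back-of-the-envelope check, where the per-facility loss rate is hypothetical and real durability models are far more elaborate:

```python
import math

def durability(p_single_loss, n_replicas=3):
    """Survival probability, assuming independent replica failures
    (a deliberate simplification of any real durability model)."""
    return 1 - p_single_loss ** n_replicas

def nines(d):
    """Count leading nines, rounded to absorb float error: 0.99999 -> 5."""
    return round(-math.log10(1 - d))

# With a (hypothetical) 1-in-10,000 annual loss rate per facility,
# three independent copies already clear eleven nines.
print(nines(durability(1e-4)))  # 12
```

The independence assumption is why the facilities are kilometers apart: correlated failures (a tornado hitting all three) would make p³ wildly optimistic.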

Startup Wasabi Technologies claims to provide 99.999999999% durability through a different technology approach, and takes aim at Amazon S3 Standard on price and performance. Wasabi eliminated data egress fees to target one of the primary complaints of AWS storage customers.

Vass countered that egress charges pay for the networking gear that enables access at 8.8 terabits per second on S3. He also noted that AWS frequently lowers storage prices, just as it does across the board for all services.

“You don’t usually get that aggressive price reduction from on-prem [options], along with the 11 nines durability automatically spread across three places,” Vass said.

Amazon's shortcomings in block and file storage have given rise to a new market of "cloud-adjacent" storage providers, according to Marc Staimer, president of Dragon Slayer Consulting. Staimer said Dell EMC, HPE, Infinidat and others put their storage into facilities in close proximity to AWS compute nodes. They aim to provide a "faster, more scalable, more secure storage" alternative to AWS, Staimer said.

But the most serious cloud challengers for AWS storage remain Azure and Google. AWS also faces on-premises challenges from traditional vendors that provide the infrastructure for data centers where many enterprises continue to store most of their data.

Cloud vs. on-premises costs

Jevin Jensen, VP of global infrastructure at Mohawk Industries, said he tracks the major cloud providers' prices and keeps an open mind. For now, though, he finds his company can keep its "fully loaded" costs at least 20% lower by running its SAP, payroll, warehouse management and other business-critical applications in-house, on on-premises storage.

Jensen said the cost delta between the cloud and Mohawk’s on-premises data center was initially about 50%, leaving him to wonder, “Why are we even thinking about cloud?” He said the margin dropped to 20% or 30% as AWS and the other cloud providers reduced their prices.

Like many enterprises, Mohawk uses the public cloud for SaaS applications and credit card processing. The Georgia-based global flooring manufacturer also has Azure for e-commerce. Jensen said the mere prospect of moving more workloads and data off-site enables Mohawk to secure better discounts from its infrastructure suppliers.

“They know we have stuff in Azure,” Jensen said. “They know we can easily go to Amazon.”


For Sale – AOC AGON AG251FZ 240Hz 24.5″ LED FHD (1920×1080) Freesync 1ms Gaming monitor

Don't have time for competitive gaming anymore. Purchased brand new from Amazon 5 months ago; I still have the original packaging and power adaptor. The DisplayPort cable that came with it was faulty, so the sale includes the replacement I bought:

Club3D CAC-2067 DisplayPort to DisplayPort 1.4/HBR3 Cable DP 1.4 8K 60Hz 1m/3.28ft, Black

I’ll be moving from Durham to Staffordshire soon so collection is available from Durham up to the 13th of December, after that date from Staffordshire.



4 SD-WAN vendors integrate with AWS Transit Gateway

Several software-defined WAN vendors have announced integration with Amazon Web Services’ Transit Gateway. For SD-WAN users, the integrations promise simplified management of policies governing connectivity among private data centers, branch offices and AWS virtual networks.

Stitching together workloads across cloud and corporate networks is complex and challenging. AWS tackles the problem by making AWS Transit Gateway the central router of all traffic emanating from connected networks.

Cisco, Citrix Systems, Silver Peak and Aruba, a Hewlett Packard Enterprise company, launched integrations with the gateway this week. The announcements came after AWS unveiled Transit Gateway at its re:Invent conference in Las Vegas.

SD-WAN vendors lining up quickly to support the latest AWS integration tool didn’t surprise analysts. “The ease and speed of integration with leading IaaS platforms are key competitive issues for SD-WAN for 2020,” said Lee Doyle, the principal analyst for Doyle Research.

By acting as the network hub, Transit Gateway reduces operational costs by simplifying network management, according to AWS. Before the service, companies had to create individual connections between each network outside AWS and each virtual network serving applications inside the cloud provider.
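The saving is the classic hub-and-spoke argument: a full mesh of n networks needs n(n-1)/2 pairwise connections to manage, while routing everything through one hub needs only n attachments. A quick illustration:

```python
def full_mesh_links(n_networks: int) -> int:
    """Pairwise VPN/peering connections when there is no central gateway."""
    return n_networks * (n_networks - 1) // 2

def hub_links(n_networks: int) -> int:
    """One attachment per network when a transit hub routes all traffic."""
    return n_networks

# A company with 2 VPCs, 3 branch offices and 1 data center (6 networks):
print(full_mesh_links(6), hub_links(6))  # 15 6
```

The gap widens quadratically, which is why the hub model matters most to organizations with many branches and VPCs.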

The potential benefits of Transit Gateway made connecting to it a must-have for SD-WAN suppliers. However, tech buyers should pay close attention to how each vendor configures its integration.

“SD-WAN vendors have different ways of doing things, and that leads to some solutions being better than others,” Doyle said.

What the 4 vendors are offering

Cisco said its integration would let IT teams use the company’s vManage SD-WAN controller to administer connectivity from branch offices to AWS. As a result, engineers will be able to apply network segmentation and data security policies universally through the Transit Gateway.

Aruba will let customers monitor and manage connectivity either through the Transit Gateway or Aruba Central. The latter is a cloud-based console used to control an Aruba-powered wireless LAN.

Silver Peak is providing integration between the Unity EdgeConnect SD-WAN platform and Transit Gateway. The link will make the latter the central control point for connectivity.

Finally, Citrix’s Transit Gateway integration would let its SD-WAN orchestration service connect branch offices and data centers to AWS. The connections will be particularly helpful to organizations running Citrix’s virtual desktops and associated apps on AWS.
