Tag Archives: change

For Sale – MacBook Pro 16” 2.6GHz 6-core i7, Radeon 5500, 32GB RAM, 512GB SSD

Hi Everyone, Looking to sell my iMac (if I can get the right offer) as I am contemplating a change to my setup, partly driven by what's going on at the moment and having to work from home (I work on Windows). It was bought new directly from Apple in January 2018 and has been very well looked…

Go to Original Article

For Sale – iMac (Retina 5K, 27-inch, 2017 – VESA mount edition)

Hi Everyone,

Looking to sell my iMac (if I can get the right offer) as I am contemplating a change to my setup, partly driven by what's going on at the moment and having to work from home (I work on Windows).

It was bought new directly from Apple in January 2018 and has been very well looked after. The specs are as follows:

5K 27-inch display
VESA mount edition (please note that this doesn’t come with the L-shaped stand – but will be compatible with VESA monitor arms, no adaptor required)
4.2GHz Quad i7
500GB SSD
24GB RAM (self-upgraded from 8GB factory install with 2 x 8GB DDR4)

Please note I won't be including the Windows licence on the Bootcamp partition shown in the pictures. I'm not selling the trackpad, but I'll include the Wireless Magic Keyboard 2 (UK with Numpad) for an additional £80 (not selling that on its own). The AppleCare has also expired now, but I've got the original box.

Thanks,
Ronko Busta

Go to Original Article

For Sale – ROG Rapture GT-AX11000 router

Selling this gaming router due to a change of circumstances. It was bought in July 2019 from Very. I opened it up over the weekend to set it up; I'd forgotten I had it, to be honest. The delay in setting it up was because my front room was due to be extended, but the loft is now being done first, so the item has never even been turned on. I took pics and put it back in the box. There was a small accident with one of the antennas: it must have been loose, a wire has come out and a little clip will need gluing. The price has been adjusted to reflect the antenna.

Go to Original Article

Google Kubernetes Engine price change sparks discontent

An upcoming price change to Google Kubernetes Engine isn’t sitting well with some users of the managed container service, but analysts said Google’s move is well within reason.

As of June 6, Google will charge customers a cluster management fee of $0.10 per hour, regardless of the cluster’s size or topology. Each customer billing account will receive one free zonal cluster, and the new management fee doesn’t apply to clusters run as part of Anthos, Google’s cross-platform container orchestration service.

Along with the management fee, however, Google is also introducing a service-level agreement (SLA) for Google Kubernetes Engine (GKE). It promises 99.95% availability for regional clusters and 99.5% on zonal clusters, assuming they use a version from Google’s stable release channel, according to the price change announcement.
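
For context on what those availability targets allow in practice, here is a quick back-of-the-envelope calculation (a Python sketch based only on the percentages above, not anything from Google's announcement) of the downtime each SLA permits in a 30-day month:

```python
# Allowed downtime per 30-day month implied by the announced SLA targets.
def downtime_minutes(availability: float, hours_per_month: float = 24 * 30) -> float:
    return (1 - availability) * hours_per_month * 60

print(downtime_minutes(0.9995))  # ~21.6 minutes/month for regional clusters
print(downtime_minutes(0.995))   # ~216 minutes/month for zonal clusters
```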

Google’s decision did not sit well with some users, who voiced complaints on social media.

Container service pricing could remain in flux

Others disagreed. The planned fee for Google Kubernetes Engine is reasonable, said Gary Chen, an analyst at IDC. “The fact is that Kubernetes control plane management is getting more complex and Google is constantly improving it, so there is a lot of value-add there,” he said. “Plus, as more critical workloads get deployed on containers, service levels become important and that will take some effort and investment, so it’s not unreasonable to ask for a fee for enterprise-level SLAs.”

A longer-term solution for Google could be to offer a lower-cost or free tier for those who don’t need all the features or the SLA, Chen added. “I think we’ll definitely see that in cloud container pricing in the future,” he said. “More tiers, feature add-ons, etc., to satisfy all segments of the market.”

Google previously had a management fee of $0.15 per hour for large clusters but dropped it in November 2017. The price addition coming in June will bring GKE into parity with Amazon Elastic Kubernetes Service; AWS cut the cluster management fee for EKS to $0.10 per hour in January, down from $0.20 per hour.

[Figure: Kubernetes ecosystem. Managed services such as GKE fit into a continuum of technologies for managing containers.]

Although measured in pennies per hour, the cluster management fees amount to about $72 a month per cluster, a sum that can add up fast in larger implementations. The question Google Cloud customers must weigh now is whether the fee is worth it compared to other options.
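
To make that arithmetic concrete, the sketch below (hypothetical helper, not a Google billing API) estimates the monthly fee for a billing account, applying the one free zonal cluster noted above; a single billable cluster works out to $0.10 x 720 hours, or about $72 per 30-day month:

```python
# Hypothetical cost estimate for GKE's announced cluster management fee.
# Not a Google billing API; just the arithmetic described in the article.

HOURLY_FEE = 0.10          # USD per cluster per hour
HOURS_PER_MONTH = 24 * 30  # ~720 hours in a 30-day month

def monthly_management_fee(num_clusters: int, free_zonal_clusters: int = 1) -> float:
    """Estimate the monthly GKE management fee for a billing account.

    Each billing account gets one free zonal cluster; Anthos clusters
    are exempt and are not modeled here.
    """
    billable = max(num_clusters - free_zonal_clusters, 0)
    return billable * HOURLY_FEE * HOURS_PER_MONTH

print(monthly_management_fee(1))   # 0.0   -- covered by the free zonal cluster
print(monthly_management_fee(10))  # 648.0 -- fees add up fast at scale
```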

Microsoft Azure Kubernetes Service is one, as it doesn’t currently carry any cluster management fees. But customers would have to do a close comparison of what Azure charges for the compute resources supporting managed containers, as opposed to Google, AWS and other providers.

Another alternative would be to self-manage clusters, but that would require analysis of whether doing so would be desirable in terms of staff time and training.

Above all, Google would undoubtedly like more adoption of Anthos, which became generally available in April 2019. The platform encompasses a software stack much broader than GKE and is priced accordingly. Anthos is the company’s primary focus in its bid to gain market share against Azure and AWS and represents Google Cloud CEO Thomas Kurian’s intent to get more large enterprise customers aboard.

“The cloud wars are intense and revenue matters,” said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif. The cluster management pricing could be viewed as some “gentle pressure” on customers to adopt Anthos, he added.

Go to Original Article

Q&A: SwiftStack object storage zones in on AI, ML, analytics

SwiftStack founder Joe Arnold said the company’s recent layoffs reflected a change in its sales focus but not in its core object storage technology.

San Francisco-based SwiftStack attributed the layoffs to a switch in use cases from classic backup and archiving to newer artificial intelligence, machine learning and analytics. Arnold said the staffing changes had no impact on the engineering and support team, and the core product will continue to focus on modern applications and complex workflows that need to store lots of data.

“I’ve always thought of object storage as a data as a service platform more than anything else,” said Arnold, SwiftStack’s original CEO and current president and chief product officer.

TechTarget caught up with Arnold to talk about customer trends and the ways SwiftStack is responding in an increasingly cloud-minded IT world. Arnold unveiled product news about SwiftStack adding Microsoft Azure as a target for its 1space technology, which facilitates a single namespace between object storage locations for cloud platform compatibility. The company already supported Amazon S3 and Google.

SwiftStack’s storage software, which is based on open source OpenStack Swift, runs on commodity hardware on premises, but the 1space technology can run in the public cloud to facilitate access to public and private cloud data. Nearly all of SwiftStack’s estimated 125 customers have some public cloud footprint, according to Arnold.
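
As a rough illustration of the single-namespace idea (the class and method names below are hypothetical, not the actual 1space API), a router can search several backing stores in policy order so the caller sees one namespace regardless of where an object physically lives:

```python
# Minimal sketch of the single-namespace concept behind 1space.
# Illustrative only: these classes are made up for the example.

class InMemoryStore:
    """Stand-in for an object store (Swift on premises, S3, Azure Blob)."""
    def __init__(self, name):
        self.name = name
        self.objects = {}

    def put(self, container, key, data):
        self.objects[(container, key)] = data

    def get(self, container, key):
        return self.objects.get((container, key))


class NamespaceRouter:
    """Present several stores as one namespace, searched in policy order."""
    def __init__(self, stores):
        self.stores = stores

    def get(self, container, key):
        for store in self.stores:
            obj = store.get(container, key)
            if obj is not None:
                return obj, store.name
        raise KeyError(f"{container}/{key} not found in any location")


on_prem = InMemoryStore("swift-onprem")
azure = InMemoryStore("azure-blob")
azure.put("scans", "img-001", b"...")
router = NamespaceRouter([on_prem, azure])
print(router.get("scans", "img-001"))  # served from azure-blob transparently
```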

Arnold also revealed a new distributed, multi-region erasure code option that can enable customers to reduce their storage footprint.

What caused SwiftStack to change its sales approach?

Joe Arnold: At SwiftStack, we’ve always been focused on applications that are in the data path and mission critical to our customers. Applications need to generate more value from the data. People are distributing data across multiple locations, between the public cloud and edge data locations. That’s what we’ve been really good at. So, the change of focus with the go-to-market path has been to double down on those efforts rather than what we had been doing.

How would you compare your vision of object storage with what you see as the conventional view of object storage?

Arnold: The conventional view of object storage is that it's something to put in the corner. It's only for cold data that I'm not going to access. But that's not the reality of how I was brought up through object storage. My first exposure to object storage was building platforms on Amazon Web Services when it introduced S3. We immediately began using that as the place to store data for applications that were directly in the data path.

Didn’t object storage tend to address backup and archive use cases because it wasn’t fast enough for primary workloads?

Arnold: I wouldn’t say that. Our customers are using their data for their applications. That’s usually a large data set that can’t be stored in traditional ways. Yes, we do have customers that use [SwiftStack] for purely cold archive and purely backup. In fact, we have features and capabilities to enhance some of the cold storage capabilities of the product. What we’ve changed is our go-to-market approach, not the core product.

So, for example, we're adding a distributed, multi-region erasure code storage policy that customers can use across three data centers for colder data. It allows entire segments of data (data bits and parity bits) to be distributed across multiple sites; to retrieve data, only two of the data centers need to be online.

How does the new erasure code option differ from what you’ve offered in the past?

Arnold: Before, we offered the ability to use erasure code where each site could fully reconstruct the data. A data center could be offline, and you could still reconstruct fully. Now, with this new approach, you can store data more economically, but it requires two of three data centers to be online. It’s just another level of efficiency in our storage tier. Customers can distribute data across more data centers without using as much raw storage footprint and still have high levels of durability and availability. Since we’re building out storage workflows that tier up and down across different storage tiers, they can utilize this one for their most cold data storage policies.
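
As a toy illustration of why only two of the three data centers need to be online, here is a simple XOR-parity layout in Python. SwiftStack's actual policies use configurable erasure codes, so treat this as the concept only: roughly 1.5x raw storage instead of the 3x cost of keeping a full copy at every site.

```python
# Toy 2-of-3 multi-region layout using XOR parity. Any single data
# center can be offline and the object is still fully recoverable.

def split_with_parity(data: bytes):
    half = (len(data) + 1) // 2
    a, b = data[:half], data[half:].ljust(half, b"\0")
    parity = bytes(x ^ y for x, y in zip(a, b))
    return {"dc1": a, "dc2": b, "dc3": parity}

def reconstruct(fragments, original_len):
    a, b, p = fragments.get("dc1"), fragments.get("dc2"), fragments.get("dc3")
    if a is None:           # rebuild the missing half from the parity bits
        a = bytes(x ^ y for x, y in zip(b, p))
    elif b is None:
        b = bytes(x ^ y for x, y in zip(a, p))
    return (a + b)[:original_len]

data = b"segment of object data"
frags = split_with_parity(data)
frags.pop("dc2")  # one data center offline
assert reconstruct(frags, len(data)) == data
```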

Does the new erasure coding target users who strictly do archiving, or will it also benefit those doing AI and analytics?

Arnold: They absolutely need it. Data goes back and forth between their core data center, the edge and the public cloud in workflows such as autonomous vehicles, personalized medicine, telco and connected city. People need to manage data between different tiers as they’re evolving from more traditional-based applications into more modern, cloud-native type applications. And they need this ultra-cold tier.

How similar is this cold tier to Amazon Glacier?

Arnold: From a cost point of view, it will be similar. From a performance point of view, it’s much better. From a data availability point of view, it’s much better. It costs a lot of money to egress data out of something like AWS Glacier.

How important is flash technology in getting performance out of object storage?

Arnold: If the applications care about concurrency and throughput, particularly when it comes to a large data set, then a disk-based solution is going to satisfy their needs. Because the SwiftStack product’s able to distribute requests across lots of disks at the same time, they’re able to sustain the concurrency and throughput. Sure, they could go deploy a flash solution, but that’s going to be extremely expensive to get the same amount of storage footprint. We’re able to get single storage systems that can deliver a hundred gigabytes a second aggregate read-write throughput rates. That’s nearly a terabit of throughput across the cluster. That’s all with disk-based storage.
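
The unit conversion behind that claim is straightforward, as a quick sanity check shows:

```python
# 100 GB/s expressed in terabits per second: 100 * 8 = 800 Gbit/s.
gb_per_second = 100
tbit_per_second = gb_per_second * 8 / 1000
print(tbit_per_second)  # 0.8 -- "nearly a terabit" of aggregate throughput
```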

What do you think of vendors such as Pure Storage offering flash-based options with cheaper quad-level cell (QLC) flash that compares more favorably price-wise to disk?

Arnold: QLC flash is great, too. We support that as well in our product. We’re not dogmatic about using or not using flash. We’re trying to solve large-footprint problems of our customers. We do have customers using flash with a SwiftStack environment today. But they’re using it because they want reduced latencies across a smaller storage footprint.

How do you see demand for AWS, Microsoft and Google based on customer feedback?

Arnold: People want options and flexibility. I think that’s the reason why Kubernetes has become popular, because that enables flexibility and choice between on premises and the public cloud, and then between public clouds. Our customers were asking for the same. We have a number of customers focused on Microsoft Azure for their public cloud usage. And they want to be able to manage SwiftStack data between their on-premises environments with SwiftStack and the public cloud. So, we added the 1space functionality to include Azure.

What tends to motivate your customers to use the public cloud?  

Arnold: Some use it because they want to have disaster recovery ready to go up in the public cloud. We will mirror a set of data and use that as a second data center if they don’t already have one. We have customers that collect data from partners or devices out in the field. The data lands in the public cloud, and they want to move it to their on-premises environment. The other example would be customers that want to use the public cloud for compute resources where they need access to their data, but they don’t want to necessarily have long-term data storage in the public clouds. They want the flexibility of which public cloud they’re going to use for their computation and application runtime, and we can provide them connections to the storage environment for those use cases.

Do you have customers who have second thoughts about their cloud decisions due to egress and other costs?

Arnold: Of course. That happens in all directions. Sometimes you’re helping people move more stuff into the public cloud. In some situations, you’re pulling down data, or maybe it’s going in between clouds. They may have had a storage footprint in the public cloud that was feeding to some end users or some computation process. The egress charges were getting too high. The footprint was getting too high. And that costs them a tremendous amount month over month. That’s where we have the conversation. But it still doesn’t mean that they need to evacuate entirely from the public cloud. In fact, many customers will keep the storage on premises and use the public cloud for what it’s good at — more burstable computation points.

What’s your take on public cloud providers coming out with various on-premises options, such as Amazon Outposts and Azure Stack?

Arnold: It’s the trend of ‘everything as a service.’ I think what customers want is a managed experience. The number of operators who are able to manage these big environments is becoming harder and harder to come across. So, it’s a natural for those companies to offer a managed on-premises product. We feel the same way. We think that managing large sets of infrastructure needs to be highly automated, and we’ve built our product to make that as simple as possible. And we offer a product to do storage as a service on premises for customers who want us to do remote operations of their SwiftStack environments.

How has Kubernetes- and container-based development affected the way you design your product?

Arnold: Hugely. It impacts how applications are being developed. Kubernetes gives an organization the flexibility to deploy an application in different environments, whether that’s core data centers, bursting out into the public cloud or crafting applications out to the edge. At SwiftStack, we need to make the data just as portable as the containerized application is. That’s why we developed 1space. A huge number of our customers are using Kubernetes. That just naturally lends itself to the use of something like 1space to give them the portability they need for access to their data.

What gaps do you need to fill to more fully address what customers want to do?

Arnold: One is further fleshing out 'everything as a service.' We just launched a service around that. As more customers adopt that, we're going to have more work to do, as the deployments become more diverse across not just core data centers, but also edge data centers.

I see the convergence of file and object workflows and furthering 1space with our edge-to-core-to-cloud workflows. Particularly in the world of high-performance data analytics, we’re seeing the need for object — but it’s a world that is dominated by file-based applications. Data gets pumped into the system by robots, and object storage is awesome for that because it’s easy and you get lots of concurrency and lots of parallelism. However, you see humans building out algorithms and doing research and development work. They’re using file systems to do much of their programming, particularly in this high performance data analytics world. So, managing the convergence between file and object is an important thing to do to solve those use cases.

Go to Original Article

AIOps meaning to expand throughout DevOps chain

It seems that every year there’s a new record for the pace of change in IT, from the move from mainframe to client/server computing, to embracing the web and interorganizational data movements. The current moves that affect organizations are fundamental, and IT operations had better pay attention.

Cloud providers are taking over ownership of the IT platform from organizations. Organizations are moving to a multi-cloud hybrid platform to gain flexibility and the ability to quickly respond to market needs. Applications have started to transition from monolithic entities to composite architectures built on the fly in real time from collections of functional services. DevOps has affected how IT organizations write, test and deliver code, with continuous development and delivery relatively mainstream approaches.

These fundamental changes mean that IT operations managers have to approach the application environment in a new way. Infrastructure health dashboards don't meet their needs. Without deep contextual knowledge of how the platform looks at any given instant, and what that means for performance, administrators will struggle to address the issues that arise.

Enter AIOps platforms

AIOps means IT teams use artificial intelligence to monitor the operational environment and rapidly and automatically remediate any problems that arise — and, more to the point, prevent any issues in the first place.

True AIOps-based management is not easy to accomplish. It’s nearly impossible to model an environment that continuously changes and then also plot all the dependencies between hardware, virtual systems, functional services and composite apps.

AIOps use cases

However, AIOps does meet a need. It is, as yet, a nascent approach. Many AIOps systems do not really use that much artificial intelligence; many instead rely on advanced rules and policy engines to automatically remediate commonly known and expected issues. AIOps vendors collect information on operations issues from across their respective customer bases to make the tools more useful.

Today's prospective AIOps buyers must beware of portfolio repackaging: an AIOps label on the product branding doesn't mean the product uses true artificial intelligence. Question the vendor carefully about how its system learns on the go, deals with unexpected changes and manages idempotency. 2020 might be the year of AIOps' rise, but it might also be littered with the corpses of AIOps vendors that get things wrong.

AIOps’ path for the future

As we move through 2020 and beyond, AIOps’ meaning will evolve. Tools will better adopt learning systems to model the whole environment and will start to use advanced methods to bring idempotency — the capability to define an end result and then ensure that it is achieved — to the fore. AIOps tools must be able to either take input from the operations team or from the platform itself and create the scripts, VMs, containers, provisioning templates and other details to meet the applications’ requirements. The system must monitor the end result from these hosting decisions and ensure that not only is it as-expected, but that it remains so, no matter how the underlying platform changes. Over time, AIOps tools should extend so that business stakeholders also have insights into the operations environment.
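
In code terms, the behavior described above is essentially a desired-state reconciliation loop. The sketch below is illustrative only (the names are made up, not any particular AIOps product): it observes actual state, compares it against the declared end result, and remediates only on drift, which is what makes it idempotent.

```python
# Illustrative desired-state reconciliation loop; names are hypothetical.
desired = {"web": 3, "worker": 5}  # declared end result: replica counts

def observe():
    """Stand-in for platform telemetry; a real system queries the platform."""
    return {"web": 3, "worker": 4}

def remediate(service, actual, target):
    print(f"scaling {service}: {actual} -> {target}")

def reconcile_once():
    actual = observe()
    for service, target in desired.items():
        if actual.get(service) != target:
            remediate(service, actual.get(service, 0), target)
        # When state already matches, nothing happens: running the loop
        # again changes nothing, which is the idempotency property.

reconcile_once()  # safe to invoke repeatedly, e.g. on a timer or on events
```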

Such capabilities will mean that AIOps platforms move from just operations environment tool kits to part and parcel of the overall BizDevOps workflows. AIOps will mean an overarching orchestration system for the application hosting environment, a platform that manages all updates and patches, and provides feedback loops through the upstream environment.

The new generation of AIOps tools and platforms will focus on how to avoid manual intervention in the operations environment. Indeed, manual interventions are likely to be where AIOps could fail. For example, an administrator who puts wrong information into the flow or works outside of the AIOps system to make any configuration changes could start a firestorm of problems. When the AIOps system tries to fix them, it will find that it does not have the required data available to effectively model the change the administrator has made.

2020 will see AIOps’ first baby steps to becoming a major tool for the systems administrator. Those who embrace the idea of AIOps must ensure that they have the right mindset: AIOps has to be the center of everything. Only in extreme circumstances should any action be taken outside of the AIOps environment.

The operations team must reach out to the development teams to see how their feeds can integrate into an AIOps platform. If DevOps tools vendors realize AIOps’ benefits, they might provide direct integrations for downstream workflows or include AIOps capabilities into their own platform. This trend could expand the meaning of AIOps to include business capabilities and security as well.

As organizations move to highly complex, highly dynamic platforms, any dependency on a person’s manual oversight dooms the deployment to failure. Simple automation will not be a workable way forward — artificial intelligence is a must.

Go to Original Article

For Sale – 2015 iMac 5K – 24GB RAM, i5, 1TB, Boxed.

Hi all,

Due to a job change, resulting in a work PC, my iMac 5K is surplus to requirements. It's in great condition, no issues with the screen, and will come wiped ready for the new user to set up (as new from Apple). I have the Magic Mouse and keyboard to go with it (the ones with a Lightning charger), along with the original box.
Specs are as follows:
3.2GHz Core i5,
24GB DDR3,
1TB hard drive,
Radeon R9 M380.

Pictures below show the rest – please note I was using an external Samsung T5 SSD to boot from, which I have subsequently removed. I could include this in the sale if requested.

Any questions please ask.

Collection only from Clapham Junction, London.

Go to Original Article
