Tag Archives: recent

For Sale – 2 x WD Red 2TB; 1 x Seagate 500GB

Hello,

Selling 3 x HDDs due to recent upgrade to bigger drives for NAS use.

1 x WD Red – 2 TB – Purchased Oct 2014 from Amazon. No Warranty, but no performance issues – £27

1 x WD Red – 2 TB – Purchased Oct 2014 from Amazon. No Warranty, but no performance issues – £27

1 x Seagate Barracuda 7200 – 500 GB – No warranty, but no performance issues – £12.50

Collection available from SE23 or will post at buyer’s cost.

Thanks,

Alex

[Edit 14/03 – price dropped. 17/03 – 1 drive sold]

Go to Original Article
Author:

Community and Connection to Drive Change

Reflections on International Women’s Day and Women’s History Month

In recent weeks, several individuals have shared with me their admiration for the amount of time I spend listening to, advocating for and simply being there for women. Of course I was humbled by what felt like a compliment, but hearing this gave me pause. Why did these individuals see my actions as deserving of admiration rather than as a core part of how we show up for each other in the workplace, the industry and our lives in general? What path led me to this way of being, how might I expand my impact and how might I encourage others to take a more active role?

This way of being has been part of who I am for my entire working life. When I joined Microsoft full time in 1998, my first manager was a role model for me. Laurie Litwack spent time getting to know me personally, understanding my passions and hopes and the unique perspective I brought. She thoughtfully created my first assignment to both leverage my skills and challenge me. Laurie showed me not only what it meant to bring your authentic self to work but also how it felt to be supported. Under her leadership I not only grew in the technical aspects of my role; she also nurtured my appreciation for people. Looking back, this experience was unique, especially for that era in engineering, when there were fewer women and even fewer women managers. It shaped my values as a leader and my view on how best to engage people and support their development. It showed me the importance of being present.

Early in my career, the VP of our engineering organization, Bill Vegthe, brought a group of women employees together to better understand our experiences in the organization. He genuinely wanted to learn from us what the organization could be doing better to support our growth and satisfaction. At the time, the number of women in the organization was low, and this forum was the first opportunity many of us had to meet and spend time with each other. The most valuable thing we learned from the experience was the personal support and enjoyment that came from simply making time for each other. The isolation we each felt melted away when we got to spend time with others like us: creating connections, sharing experiences, learning from each other. We grew more collectively than we ever would have individually, and I personally benefited from both the friendship and wisdom of many of the women in this community: Terrell Cox, Jimin Li, Anna Hester, Farzana Rahman, Deb MacFadden, Molly Brown, Linda Apsley, Betsy Speare. This was true many years ago when this community was created and holds true today, even as it has scaled from a handful of women to the thousands of women across our Cloud + AI Division who make up this Women’s Leadership Community (WLC), under sponsorship from leaders such as Bob Muglia, Bill Laing, Brad Anderson and currently Scott Guthrie.

As I grew in my career, the importance of intentionally building connections with other women only became clearer. In the early 2010s, as I joined the technical executive community, I looked around and felt something similar to my early career days. There were very few technical executives who were women, and we were spread across the organization, meaning we rarely had the opportunity to interact and in some cases had never met! It was out of a desire to bring the WLC experience to this group that our Life Without Lines Community of technical women executives across Microsoft grew, based on the founding work of Michele Freed, Lili Cheng, Roz Ho and Rebecca Norlander. This group represents cross-company leadership, and as the connections deepened, so did our impact on each other in terms of peer mentoring, career sponsorship, and engineering and product collaboration.

Together we are more powerful than we are individually, amplifying each other’s voices.       

Although the concept of community might seem simple and obvious in the ongoing conversations about inclusion, the key in my experience is how the connections in these communities were built. This isn’t just about networking for the sake of networking; we come together with a focus on being generous with our time and our experiences, challenging each other and our organization to address issues in a new way, and giving ourselves the space to be authentic within our own community without feeling that we need to be a monolith in our perspectives or priorities. We advocate for one another, we leverage our networks, we create space and we amplify the voices of others. This community names the challenges these women face, names the hopes they have for themselves and future women in our industry, and names what is most important to our enjoyment of our work. My job, and the job of other leaders, is then to listen to these voices, leveraging the insights to advocate for what is needed in the organization and to drive systemic changes that will create the best lived experience for all women at Microsoft and in the industry.

I have found that members of the community want to be heard if you are willing to be present, willing to bring your authentic self and willing to take action on what you learn. I’m reflecting on this, in particular, as I think about International Women’s Day (IWD). From its beginnings in the early 1900s through to the present day, IWD strives to recognize the need for active participation, equality and development of women and to acknowledge the contribution of women globally.

This year I am reflecting on the need to ensure that our communities of women accurately represent the diverse range of perspectives and experiences of employees and customers: making sure that even in a community about including others, we are not unintentionally excluding certain groups of women who may not have the same experiences, priorities or privileges as others. It is a chance to reflect on how I can expand my impact. I challenge all of us to take this time to recognize those who are role models for us and those whose voices may not be heard, and to determine what role each of us can play in achieving this goal for everyone.

Go to Original Article
Author: Microsoft News Center

NTFS vs. ReFS – How to Decide Which to Use

By now, you’ve likely heard of Microsoft’s relatively recent file system, ReFS. Introduced with Windows Server 2012, it seeks to exceed NTFS in stability and scalability. Since we typically store the VHDXs for multiple virtual machines on the same volume, Hyper-V seems like a natural pairing for ReFS. Unfortunately, it was not… in the beginning. Microsoft has continued to improve ReFS in the intervening years, and it has gained several features that distance it from NTFS. With its maturation, should you start using it for Hyper-V? You have much to consider before making that determination.

What is ReFS?

The moniker “ReFS” means “resilient file system”. It includes built-in features to aid against data corruption. Microsoft’s docs site provides a detailed explanation of ReFS and its features. A brief recap:

  • Integrity streams: ReFS uses checksums to check for file corruption.
  • Automatic repair: When ReFS detects problems in a file, it will automatically enact corrective action.
  • Performance improvements: In a few particular conditions, ReFS provides performance benefits over NTFS.
  • Very large volume and file support: ReFS’s upper limits exceed NTFS’s without incurring the same performance hits.
  • Mirror-accelerated parity: Mirror-accelerated parity uses a lot of raw storage space, but it’s very fast and very resilient.
  • Integration with Storage Spaces: Many of ReFS’s features only work to their fullest in conjunction with Storage Spaces.

Before you get excited about some of the earlier points, I need to emphasize one thing: except for capacity limits, ReFS requires Storage Spaces in order to do its best work.

ReFS Benefits for Hyper-V

ReFS has features that accelerate some virtual machine activities.

  • Block cloning: By my reading, block cloning is essentially a form of de-duplication. But, it doesn’t operate as a file system filter or scanner. It doesn’t passively wait for arbitrary data writes or periodically scan the file system for duplicates. Something must actively invoke it against a specific file. Microsoft specifically indicates that it can greatly speed checkpoint merges.
  • Sparse VDL (valid data length): All file systems record the amount of space allocated to a file. ReFS uses VDL to indicate how much of that file has data. So, when you instruct Hyper-V to create a new fixed VHDX on ReFS, it can create the entire file in about the same amount of time as creating a dynamically-expanding VHDX. It will similarly benefit expansion operations on dynamically-expanding VHDXs.

Take a little bit of time to go over these features. Think through their total applications.
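To make the sparse VDL point concrete, here is a rough timing sketch, assuming a Windows host with the Hyper-V PowerShell module available; the target path and size are hypothetical placeholders. Running the same call against a ReFS volume and then an NTFS volume should show the difference described above.

# Illustrative timing of fixed-VHDX creation; path and size are placeholders.
# Assumes Windows with the Hyper-V PowerShell module (New-VHD).
import subprocess
import time

def time_fixed_vhdx(path, size_bytes):
    """Create a fixed VHDX via New-VHD and return the elapsed seconds."""
    start = time.perf_counter()
    subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command",
         f"New-VHD -Path '{path}' -SizeBytes {size_bytes} -Fixed | Out-Null"],
        check=True,
    )
    return time.perf_counter() - start

if __name__ == "__main__":
    # On ReFS, sparse VDL should let a 100 GB fixed file appear almost instantly;
    # on NTFS, the same call has to write out the full allocation.
    elapsed = time_fixed_vhdx(r"D:\TestVolume\timing-test.vhdx", 100 * 1024**3)
    print(f"Fixed VHDX created in {elapsed:.1f} seconds")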

ReFS vs. NTFS for Hyper-V: Technical Comparison

With the general explanation out of the way, now you can make a better assessment of the differences. First, check the comparison tables on Microsoft’s ReFS overview page. For typical Hyper-V deployments, most of the differences mean very little. For instance, you probably don’t need quotas on your Hyper-V storage locations. Let’s make a table of our own, scoped more appropriately for Hyper-V:

  • ReFS wins: Really large storage locations and really large VHDXs
  • ReFS wins: Environments with excessively high incidences of created, checkpointed, or merged VHDXs
  • ReFS wins: Storage Space and Storage Spaces Direct deployments
  • NTFS wins: Single-volume deployments
  • NTFS wins (potentially): Mixed-purpose deployments

I think most of these things speak for themselves. The last two probably need a bit more explanation.

Single-Volume Deployments Require NTFS

In this context, I intend “single-volume deployment” to mean installations where you have Hyper-V (including its management operating system) and all VMs on the same volume. You cannot format a boot volume with ReFS, nor can you place a page file on ReFS. Such an installation also does not allow for Storage Spaces or Storage Spaces Direct, so it would miss out on most of ReFS’s capabilities anyway.

Mixed-Purpose Deployments Might Require NTFS

Some of us have the luck to deploy nothing but virtual machines on dedicated storage locations. Not everyone has that. If your Hyper-V storage volume also hosts files for other purposes, you might need to continue with NTFS. Go over the last table near the bottom of the overview page. It shows the properties that you can only find in NTFS. For standard file sharing scenarios, you lose quotas. You may have legacy applications that require NTFS’s extended properties, or short names. In these situations, only NTFS will do.

Note: If you have any alternative, do not use the same host to run non-Hyper-V roles alongside Hyper-V. Microsoft does not support mixing. Similarly, separate Hyper-V VMs onto volumes apart from volumes that hold other file types.

Unexpected ReFS Behavior

The official content goes to some lengths to describe the benefits of ReFS’s integrity streams. It uses checksums to detect file corruption. If it finds problems, it engages in corrective action. On a Storage Spaces volume that uses protective schemes, it has an opportunity to fix the problem. It does that with the volume online, providing a seamless experience. But, what happens when ReFS can’t correct the problem? That’s where you need to pay real attention.

On the overview page, the documentation uses exceptionally vague wording: “ReFS removes the corrupt data from the namespace”. The integrity streams page does worse: “If the attempt is unsuccessful, ReFS will return an error.” While researching this article, I was told of a more troubling activity: ReFS deletes files that it deems unfixable. The comment section at the bottom of that page includes a corroborating report. If you follow that comment thread through, you’ll find an entry from a Microsoft program manager that states:

ReFS deletes files in two scenarios:

  1. ReFS detects Metadata corruption AND there is no way to fix it. Meaning ReFS is not on a Storage Spaces redundant volume where it can fix the corrupted copy.
  2. ReFS detects data corruption AND Integrity Stream is enabled AND there is no way to fix it. Meaning if Integrity Stream is not enabled, the file will be accessible whether data is corrupted or not. If ReFS is running on a mirrored volume using Storage Spaces, the corrupted copy will be automatically fixed.

The upshot: If ReFS decides that a VHDX has sustained unrecoverable damage, it will delete it. It will not ask, nor will it give you any opportunity to try to salvage what you can. If ReFS isn’t backed by Storage Spaces’s redundancy, then it has no way to perform a repair. So, from one perspective, that makes ReFS on non-Storage Spaces look like a very high risk approach. But…

Mind Your Backups!

You should not overlook the severity of the previous section. However, you should not let it scare you away, either. I certainly understand that you might prefer a partially readable VHDX to a deleted one. To that end, you could simply disable integrity streams on your VMs’ files. I also have another suggestion.

Do not neglect your backups! If ReFS deletes a file, retrieve it from backup. If a VHDX goes corrupt on NTFS, retrieve it from backup. With ReFS, at least you know that you have a problem. With NTFS, problems can lurk much longer. No matter your configuration, the only thing you can depend on to protect your data is a solid backup solution.
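If you do decide to trade integrity-stream protection for the chance to salvage a partially damaged VHDX, you can turn the feature off per file. The sketch below shows one way to do that in bulk, assuming a Windows host where the Storage module’s Set-FileIntegrity cmdlet is available; the storage path is a hypothetical placeholder.

# Sketch: disable ReFS integrity streams on existing VHDX files under a folder.
# Assumes the Storage PowerShell module (Set-FileIntegrity); path is a placeholder.
import pathlib
import subprocess

def disable_integrity_streams(vm_storage_root):
    for vhdx in pathlib.Path(vm_storage_root).rglob("*.vhdx"):
        subprocess.run(
            ["powershell.exe", "-NoProfile", "-Command",
             f"Set-FileIntegrity -FileName '{vhdx}' -Enable $false"],
            check=True,
        )
        print(f"Integrity streams disabled: {vhdx}")

if __name__ == "__main__":
    disable_integrity_streams(r"D:\Hyper-V\Virtual Hard Disks")

Note that new files typically inherit the integrity setting from their parent folder or volume, so check newly created VHDXs as well.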

When to Choose NTFS for Hyper-V

You now have enough information to make an informed decision. These conditions indicate a good fit for NTFS:

  • Configurations that do not use Storage Spaces, such as single-disk or manufacturer RAID. This alone does not make an airtight point; please read the “Mind Your Backups!” section above.
  • Single-volume systems (your host only has a C: volume)
  • Mixed-purpose systems (please reconfigure to separate roles)
  • Storage on hosts running versions older than Windows Server 2016 — ReFS was not as mature in previous versions. This alone is not an airtight point.
  • Your backup application vendor does not support ReFS
  • If you’re uncertain about ReFS

As time goes on, NTFS will lose ground to ReFS in Hyper-V deployments. But that does not mean NTFS has reached its end. ReFS has staggeringly higher limits, but very few systems use more than a fraction of what NTFS can offer. ReFS does have impressive resilience features, but NTFS also has self-healing powers, and you have access to RAID technologies to defend against data corruption.

Microsoft will continue to develop ReFS. They may eventually position it as NTFS’s successor. As of today, they have not done so. It doesn’t look like they’ll do it tomorrow, either. Do not feel pressured to move to ReFS ahead of your comfort level.

When to Choose ReFS for Hyper-V

Some situations make ReFS the clear choice for storing Hyper-V data:

  • Storage Spaces (and Storage Spaces Direct) environments
  • Extremely large volumes
  • Extremely large VHDXs

You might make an additional performance-based argument for ReFS in an environment with a very high churn of VHDX files. However, do not overestimate the impact of those performance enhancements. The most striking difference appears when you create fixed VHDXs. For all other operations, you need to upgrade your hardware to achieve meaningful improvement.

However, I do not want to gloss over the benefit of ReFS for very large volumes. If you have a storage volume of a few terabytes and VHDXs of even a few hundred gigabytes, ReFS will rarely beat NTFS significantly. When you start thinking in terms of hundreds of terabytes, NTFS will likely show bottlenecks. If you need to push higher, ReFS becomes your only choice.

ReFS really shines when you combine it with Storage Spaces Direct. Its ability to automatically perform a non-disruptive online repair is truly impressive. On the one hand, the odds of disruptive data corruption on modern systems constitute a statistical anomaly. On the other, no one that has suffered through such an event really cares how unlikely it was.
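To pull the NTFS and ReFS lists together, here is a small decision sketch that codifies the criteria above; the size threshold is a placeholder rather than an official figure, so treat it as a starting point, not a rule.

# Illustrative decision helper based on the criteria in this article.
# The 100 TB threshold is a placeholder, not official guidance.
def choose_file_system(uses_storage_spaces,
                       single_volume_host,
                       mixed_purpose_volume,
                       volume_size_tb,
                       backup_vendor_supports_refs):
    if single_volume_host or mixed_purpose_volume:
        return "NTFS"   # boot/page file and NTFS-only features rule out ReFS
    if not backup_vendor_supports_refs:
        return "NTFS"   # a dependable backup story outweighs everything else
    if uses_storage_spaces or volume_size_tb >= 100:
        return "ReFS"   # where ReFS's resilience and scale features pay off
    return "NTFS"       # the safe default when in doubt

# Example: a Storage Spaces Direct cluster with a large volume
print(choose_file_system(uses_storage_spaces=True,
                         single_volume_host=False,
                         mixed_purpose_volume=False,
                         volume_size_tb=250,
                         backup_vendor_supports_refs=True))  # ReFS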

ReFS vs NTFS on Hyper-V Guest File Systems

All of the above deals only with Hyper-V’s storage of virtual machines. What about ReFS in guest operating systems?

To answer that question, we need to go back to ReFS’s strengths. So far, we’ve only thought about it in terms of Hyper-V. Guests have their own conditions and needs. Let’s start by reviewing Microsoft’s ReFS overview. Specifically the following:

“Microsoft has developed NTFS specifically for general-purpose use with a wide range of configurations and workloads, however for customers specially requiring the availability, resiliency, and/or scale that ReFS provides, Microsoft supports ReFS for use under the following configurations and scenarios…”

I added emphasis on the part that I want you to consider. The sentence itself makes you think that they’ll go on to list some usages, but they only list one: “backup target”. The other items on their list only talk about the storage configuration. So, we need to dig back into the sentence and pull out those three descriptors to help us decide: “availability”, “resiliency”, and “scale”. You can toss out the first two right away — you should not focus on storage availability and resiliency inside a VM. That leaves us with “scale”. So, really big volumes and really big files. Remember, that means hundreds of terabytes and up.

For a more accurate decision, read through the feature comparisons. If any application that you want to use inside a guest needs features only found on NTFS, use NTFS. Personally, I still use NTFS inside guests almost exclusively. ReFS needs Storage Spaces to do its best work, and Storage Spaces does its best work at the physical layer.

Combining ReFS with NTFS across Hyper-V Host and Guests

Keep in mind that the file system inside a guest has no bearing on the host’s file system, and vice versa. As far as Hyper-V knows, VHDXs attached to virtual machines are nothing other than a bundle of data blocks. You can use any combination that works.

Go to Original Article
Author: Eric Siron

For Sale – Parts Clear Out (Motherboards, Memory, CPUs, GPUs and Case) ***PRICE DROPS***

Due to a recent upgrade, and the need to clear some space in the garage, I’ve got the following up for sale.

ADDED:

Current build bundle – £220.00
Asus Z97 Pro Gamer
LGA 1150 – Intel Core i5 4690K

Motherboards:
MSI Z87 GD65 Used £65.00 £55.00
MSI Z170I ITX Used £95.00 £90.00

DDR3:
8GB Corsair Vengeance Pro – 2133MHz (2x4GB) Used £35.00
16GB HyperX Savage Red – 2400MHz (2x8GB) Used £50.00
SOLD to scott178

DDR4:
16GB Corsair Low Profile Black – 2400MHz (2x8GB) Used £45.00

Intel Processors:
LGA 1150 – Intel Core i5-4670K Used £65.00 £55.00
LGA 1151 – Intel Core i5-6600 Used £90.00 £85.00

AMD Graphics Cards:
XFX AMD R9 390 – 8GB Used £75.00 SOLD to Jeeva

Nvidia Graphics Cards:
MSI – GTX 660Ti 2GB Used £45.00 £35.00
MSI – GTX 570 2GB Used £35.00 £25.00
Pulled from Sale

Mice:
Razer Mamba Elite 2016 Wireless Used £60.00

Cases:
Phanteks Evolve ITX Used £40.00 £35.00 (Collection Only)

Coolers:
Corsair H50 Used £40.00 £35.00
Corsair H80i Used £50.00 £45.00

Most items will be boxed in their original retail or OEM packaging.

I will be updating this thread as I discover anything else that I no longer require.

Open to offers.

Price and currency: £845
Delivery: Delivery cost is not included
Payment method: BT/PPG
Location: Oxford
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Go to Original Article
Author:

Q&A: SwiftStack object storage zones in on AI, ML, analytics

SwiftStack founder Joe Arnold said the company’s recent layoffs reflected a change in its sales focus but not in its core object storage technology.

San Francisco-based SwiftStack attributed the layoffs to a switch in use cases from classic backup and archiving to newer artificial intelligence, machine learning and analytics. Arnold said the staffing changes had no impact on the engineering and support team, and the core product will continue to focus on modern applications and complex workflows that need to store lots of data.

“I’ve always thought of object storage as a data as a service platform more than anything else,” said Arnold, SwiftStack’s original CEO and current president and chief product officer.

TechTarget caught up with Arnold to talk about customer trends and the ways SwiftStack is responding in an increasingly cloud-minded IT world. Arnold unveiled product news about SwiftStack adding Microsoft Azure as a target for its 1space technology, which facilitates a single namespace between object storage locations for cloud platform compatibility. The company already supported Amazon S3 and Google.

SwiftStack’s storage software, which is based on open source OpenStack Swift, runs on commodity hardware on premises, but the 1space technology can run in the public cloud to facilitate access to public and private cloud data. Nearly all of SwiftStack’s estimated 125 customers have some public cloud footprint, according to Arnold.
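The “data as a service” framing is easiest to see from the application side. As a hypothetical illustration, an application written against the S3 API can be pointed at an on-premises, S3-compatible endpoint or at AWS just by changing the endpoint URL; the endpoint, credentials and bucket below are placeholders, not real values.

# Hypothetical illustration: the same S3-API client code can target an
# on-premises, S3-compatible object store or AWS by swapping the endpoint.
# Endpoint URL, credentials and bucket name are placeholders.
import boto3

def make_client(endpoint_url=None):
    # endpoint_url=None falls back to the default AWS endpoints
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id="PLACEHOLDER_KEY",
        aws_secret_access_key="PLACEHOLDER_SECRET",
    )

on_prem = make_client("https://objects.example.internal")   # placeholder endpoint
on_prem.put_object(Bucket="analytics-input", Key="frame-0001.bin", Body=b"...")

cloud = make_client()   # same application code, public cloud target
response = cloud.get_object(Bucket="analytics-input", Key="frame-0001.bin")
print(response["ContentLength"])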

Arnold also revealed a new distributed, multi-region erasure code option that can enable customers to reduce their storage footprint.

What caused SwiftStack to change its sales approach?

Joe Arnold, founder and president, SwiftStack

Joe Arnold: At SwiftStack, we’ve always been focused on applications that are in the data path and mission critical to our customers. Applications need to generate more value from the data. People are distributing data across multiple locations, between the public cloud and edge data locations. That’s what we’ve been really good at. So, the change of focus with the go-to-market path has been to double down on those efforts rather than what we had been doing.

How would you compare your vision of object storage with what you see as the conventional view of object storage?

Arnold: The conventional view of object storage is that it’s something to put in the corner. It’s only for cold data that I’m not going to access. But that’s not the reality of how I was brought up through object storage. My first exposure to object storage was building platforms on Amazon Web Services when it introduced S3. We immediately began using that as the place to store data for applications that were directly in the data path.

Didn’t object storage tend to address backup and archive use cases because it wasn’t fast enough for primary workloads?

Arnold: I wouldn’t say that. Our customers are using their data for their applications. That’s usually a large data set that can’t be stored in traditional ways. Yes, we do have customers that use [SwiftStack] for purely cold archive and purely backup. In fact, we have features and capabilities to enhance some of the cold storage capabilities of the product. What we’ve changed is our go-to-market approach, not the core product.

So, for example, we’re adding a distributed, multi-region erasure code storage policy that customers can use across three data centers for colder data. It allows the full set of data segments — data bits and parity bits — to be distributed across multiple sites, and to retrieve data, only two of the data centers need to be online.

How does the new erasure code option differ from what you’ve offered in the past?

Arnold: Before, we offered the ability to use erasure code where each site could fully reconstruct the data. A data center could be offline, and you could still reconstruct fully. Now, with this new approach, you can store data more economically, but it requires two of three data centers to be online. It’s just another level of efficiency in our storage tier. Customers can distribute data across more data centers without using as much raw storage footprint and still have high levels of durability and availability. Since we’re building out storage workflows that tier up and down across different storage tiers, they can utilize this one for their most cold data storage policies.
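A back-of-the-envelope comparison shows why the distributed layout is more economical. The fragment counts below are purely illustrative placeholders, not SwiftStack’s actual policies: one layout keeps an independently rebuildable erasure-coded copy at each of three sites, the other spreads a single erasure-coded set across all three so that any two sites can reconstruct the data.

# Purely illustrative arithmetic: raw storage needed for 100 TB of user data
# across three data centers. Fragment counts are placeholders, not real policies.
usable_tb = 100

# Layout A: each site holds an erasure-coded copy it can rebuild on its own,
# e.g. an 8+4 scheme (1.5x overhead) repeated at all three sites.
layout_a_raw = usable_tb * (12 / 8) * 3     # 450 TB of raw storage

# Layout B: one erasure-coded set of 15 fragments (10 data + 5 parity) spread
# 5 per site; losing any one site still leaves the 10 fragments needed to read.
layout_b_raw = usable_tb * (15 / 10)        # 150 TB of raw storage

print(f"Per-site erasure-coded copies: {layout_a_raw:.0f} TB raw")
print(f"Distributed erasure coding:    {layout_b_raw:.0f} TB raw")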

Does the new erasure coding target users who strictly do archiving, or will it also benefit those doing AI and analytics?

Arnold: They absolutely need it. Data goes back and forth between their core data center, the edge and the public cloud in workflows such as autonomous vehicles, personalized medicine, telco and connected city. People need to manage data between different tiers as they’re evolving from more traditional-based applications into more modern, cloud-native type applications. And they need this ultra-cold tier.

How similar is this cold tier to Amazon Glacier?

Arnold: From a cost point of view, it will be similar. From a performance point of view, it’s much better. From a data availability point of view, it’s much better. It costs a lot of money to egress data out of something like AWS Glacier.

How important is flash technology in getting performance out of object storage?

Arnold: If the applications care about concurrency and throughput, particularly when it comes to a large data set, then a disk-based solution is going to satisfy their needs. Because the SwiftStack product’s able to distribute requests across lots of disks at the same time, they’re able to sustain the concurrency and throughput. Sure, they could go deploy a flash solution, but that’s going to be extremely expensive to get the same amount of storage footprint. We’re able to get single storage systems that can deliver a hundred gigabytes a second aggregate read-write throughput rates. That’s nearly a terabit of throughput across the cluster. That’s all with disk-based storage.

What do you think of vendors such as Pure Storage offering flash-based options with cheaper quad-level cell (QLC) flash that compares more favorably price-wise to disk?

Arnold: QLC flash is great, too. We support that as well in our product. We’re not dogmatic about using or not using flash. We’re trying to solve large-footprint problems of our customers. We do have customers using flash with a SwiftStack environment today. But they’re using it because they want reduced latencies across a smaller storage footprint.

How do you see demand for AWS, Microsoft and Google based on customer feedback?

Arnold: People want options and flexibility. I think that’s the reason why Kubernetes has become popular, because that enables flexibility and choice between on premises and the public cloud, and then between public clouds. Our customers were asking for the same. We have a number of customers focused on Microsoft Azure for their public cloud usage. And they want to be able to manage SwiftStack data between their on-premises environments with SwiftStack and the public cloud. So, we added the 1space functionality to include Azure.

What tends to motivate your customers to use the public cloud?  

Arnold: Some use it because they want to have disaster recovery ready to go up in the public cloud. We will mirror a set of data and use that as a second data center if they don’t already have one. We have customers that collect data from partners or devices out in the field. The data lands in the public cloud, and they want to move it to their on-premises environment. The other example would be customers that want to use the public cloud for compute resources where they need access to their data, but they don’t want to necessarily have long-term data storage in the public clouds. They want the flexibility of which public cloud they’re going to use for their computation and application runtime, and we can provide them connections to the storage environment for those use cases.

Do you have customers who have second thoughts about their cloud decisions due to egress and other costs?

Arnold: Of course. That happens in all directions. Sometimes you’re helping people move more stuff into the public cloud. In some situations, you’re pulling down data, or maybe it’s going in between clouds. They may have had a storage footprint in the public cloud that was feeding to some end users or some computation process. The egress charges were getting too high. The footprint was getting too high. And that costs them a tremendous amount month over month. That’s where we have the conversation. But it still doesn’t mean that they need to evacuate entirely from the public cloud. In fact, many customers will keep the storage on premises and use the public cloud for what it’s good at — more burstable computation points.

What’s your take on public cloud providers coming out with various on-premises options, such as Amazon Outposts and Azure Stack?

Arnold: It’s the trend of ‘everything as a service.’ I think what customers want is a managed experience. Operators who are able to manage these big environments are becoming harder and harder to come across. So, it’s natural for those companies to offer a managed on-premises product. We feel the same way. We think that managing large sets of infrastructure needs to be highly automated, and we’ve built our product to make that as simple as possible. And we offer a product to do storage as a service on premises for customers who want us to do remote operations of their SwiftStack environments.

How has Kubernetes- and container-based development affected the way you design your product?

Arnold: Hugely. It impacts how applications are being developed. Kubernetes gives an organization the flexibility to deploy an application in different environments, whether that’s core data centers, bursting out into the public cloud or crafting applications out to the edge. At SwiftStack, we need to make the data just as portable as the containerized application is. That’s why we developed 1space. A huge number of our customers are using Kubernetes. That just naturally lends itself to the use of something like 1space to give them the portability they need for access to their data.

What gaps do you need to fill to more fully address what customers want to do?

Arnold: One is further fleshing out ‘everything as a service.’ We just launched a service around that. As more customers adopt it, we’re going to have more work to do, as the deployments become more diverse across not just core data centers, but also edge data centers.

I see the convergence of file and object workflows and furthering 1space with our edge-to-core-to-cloud workflows. Particularly in the world of high-performance data analytics, we’re seeing the need for object — but it’s a world that is dominated by file-based applications. Data gets pumped into the system by robots, and object storage is awesome for that because it’s easy and you get lots of concurrency and lots of parallelism. However, you see humans building out algorithms and doing research and development work. They’re using file systems to do much of their programming, particularly in this high performance data analytics world. So, managing the convergence between file and object is an important thing to do to solve those use cases.

Go to Original Article
Author:

Manual mainframe testing persists in the age of automation

A recent study indicates that although most IT organizations recognize that software test automation benefits their app development lifecycle, the majority of mainframe testing is done manually, which creates bottlenecks in the implementation of modern digital services.

The bottom line is that mainframe shops that want to add new, modern apps need to adopt test automation and they need to do it quickly or get left behind in a world of potential backlogs and buggy code.

However, while it’s true that mainframe shops have been slow to implement automated testing, it’s mostly been because they haven’t really had to; most mainframe shops are in maintenance mode, said Thomas Murphy, an analyst at Gartner.

“There is a need to clean up crusty old code, but that is less automated ‘testing’ and more automated analysis like CAST,” he said. “In an API/service world, I think there is a decent footprint for service virtualization and API testing and services around this. There are a lot of boutique consulting firms that also do various pieces of test automation.”

Yet, Detroit-based mainframe software maker Compuware, which commissioned the study conducted by Vanson Bourne, a market research firm, found that as many as 86% of respondents to its survey said they find it difficult to automate the testing of mainframe code. Only 7% of respondents said they automate the execution of test cases on mainframe code and 75% of respondents said they do not have automated processes that test code at every stage of development.

The survey polled 400 senior IT leaders responsible for application development in organizations with a mainframe and more than 1,000 employees.

Overall, mainframe app developers — as opposed to those working in distributed environments — have been slow to automate mainframe testing of code, but demand for new, more complex applications continues to grow to the point where 92% of respondents said their organization’s mainframe teams spend much more time testing code than was required in the past. On average, mainframe app development teams spend 51% of their time on testing new mainframe applications, features or functionality, according to the survey.

Shift left

To remedy this, mainframe shops need to “shift left” and bring automated testing, particularly automated unit testing, into the application lifecycle earlier to avoid security risks and improve the quality of their software. But only 24% of organizations reported that they perform both unit and functional mainframe testing on code before it is released into production. Moreover, automation and the shift to Agile and DevOps practices are “crucial” to the effort to both cut the time required to build and improve the quality of mainframe software, said Chris O’Malley, CEO of Compuware.
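The shape of the testing being pushed left is familiar from other platforms. Real mainframe unit tests would exercise COBOL or PL/I programs with mainframe-aware tooling, but as a rough, hypothetical illustration of an automated check that runs on every commit, a unit test looks like this (the interest routine and its expected values are invented for the example):

# Hypothetical illustration of an automated unit test; the interest routine
# and its expected values are invented for this example.
import unittest

def monthly_interest(balance_cents, annual_rate_bps):
    """Toy stand-in for a batch routine: one month of simple interest."""
    return balance_cents * annual_rate_bps // 10_000 // 12

class MonthlyInterestTest(unittest.TestCase):
    def test_typical_balance(self):
        # 120,000.00 at 3% (300 basis points) -> 300.00 per month
        self.assertEqual(monthly_interest(12_000_000, 300), 30_000)

    def test_zero_balance(self):
        self.assertEqual(monthly_interest(0, 300), 0)

if __name__ == "__main__":
    unittest.main()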

Yet, 53% of mainframe application development managers said the time required to conduct thorough testing is the biggest barrier to integrating the mainframe into Agile and DevOps.

Image caption: IBM z13 mainframe. Mainframes continue to be viewed as the gold standard for data privacy, security and resiliency, though IT pros say there is not enough automated software testing for systems like the IBM System z.

Eighty-five percent of respondents said they feel pressure to cut corners in testing that could result in compromised code quality and bugs in production code. Fifty percent said they fear cutting corners could lead to potential security flaws, 38% said they are concerned about disrupting operations and 28% said they are most concerned about the potential negative impact on revenue.

In addition, 82% of respondents said that the paucity of automated test cases could lead to poor customer experiences, and 90% said that automating more test cases could be the single most important factor in their success, with 87% noting that it will help organizations overcome the shortage of skilled mainframe app developers.

Automated mainframe testing tools in short supply

Truth be told, there are fewer tools available to automate the testing of mainframe software than there are for distributed platforms, and there is not much to be found in the open source market.

And though IBM — and its financial results after every new mainframe introduction — might beg to differ, many industry observers, like Gartner’s Murphy, view the mainframe as dead.

“The mainframe isn’t where our headspace is at,” Murphy said. “We use that new mainframe — the cloud — now. There isn’t sufficient business pressure or mandate. If there were a bunch of recurring issues, if the mainframe was holding us back, then people would address the problem. Probably by shooting the mainframe and moving elsewhere.”

Outside of the mainframe industry, companies such as Parasoft, SmartBear and others regularly innovate and deliver new automated testing functionality for developers in distributed, web and mobile environments. For instance, Parasoft earlier this fall introduced Selenic, its AI-powered automated testing tool for Selenium. Selenium is an automated testing suite for web apps that has become a de facto standard for testing user interfaces. Parasoft’s Selenic integrates into existing CI/CD pipelines to ease the way for organizations that employ DevOps practices. Selenic’s AI capabilities provide recommendations that automate the “self-healing” of any broken Selenium scripts and provide deep code analysis to users.
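For readers who have not seen what such a UI script looks like, here is a bare-bones, generic Selenium example; the URL and element IDs are placeholders, and it is exactly these hard-coded locators that break when a page changes, which is what “self-healing” tooling tries to absorb.

# Bare-bones, generic Selenium script; URL and element IDs are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://intranet.example.com/login")            # placeholder URL
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()
    # Hard-coded locators like By.ID "login-button" are what break when the
    # page changes -- the brittleness that self-healing tools try to repair.
    assert "Dashboard" in driver.title
finally:
    driver.quit()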

For its part, Gartner named SmartBear, another prominent test automation provider, as a leader in the 2019 Gartner Magic Quadrant for Software Test Automation. Among the highlights of what the company has done for developers in 2019: it expanded into CI/CD pipeline integration for native mobile test automation with the acquisition of Bitbar, added new tools for behavior-driven development and introduced testing support for GraphQL.

Go to Original Article
Author:
