
AIOps meaning to expand throughout DevOps chain

It seems that every year sets a new record for the pace of change in IT, from the move from mainframe to client/server computing to embracing the web and interorganizational data exchange. The changes now affecting organizations are fundamental, and IT operations had better pay attention.

Cloud providers are taking over ownership of the IT platform from organizations. Organizations are moving to a multi-cloud hybrid platform to gain flexibility and the ability to quickly respond to market needs. Applications have started to transition from monolithic entities to composite architectures built on the fly in real time from collections of functional services. DevOps has affected how IT organizations write, test and deliver code, with continuous development and delivery relatively mainstream approaches.

These fundamental changes mean that IT operations managers have to approach the application environment in a new way. Infrastructure health dashboards no longer meet their needs. Without deep contextual knowledge of what the platform looks like at any given instant, and what that means for performance, administrators will struggle to address the issues that arise.

Enter AIOps platforms

AIOps means IT teams use artificial intelligence to monitor the operational environment and rapidly and automatically remediate any problems that arise — and, more to the point, prevent any issues in the first place.

True AIOps-based management is not easy to accomplish. It’s nearly impossible to model an environment that continuously changes and then also plot all the dependencies between hardware, virtual systems, functional services and composite apps.

AIOps use cases

However, AIOps does meet a need. It is, as yet, a nascent approach. Many AIOps systems do not really use that much artificial intelligence; many instead rely on advanced rules and policy engines to automatically remediate commonly known and expected issues. AIOps vendors collect information on operations issues from across their respective customer bases to make the tools more useful.
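To make that distinction concrete, the sketch below shows roughly how a rules-and-policy engine of this kind automates remediation: known symptoms are matched against thresholds and mapped to canned fixes. Every metric name, threshold and action here is a hypothetical example for illustration, not any vendor's actual policy format.

```python
# Minimal sketch of a rules/policy-based remediation engine, the kind of
# automation many "AIOps" tools rely on today. All metric names, thresholds
# and actions below are hypothetical examples, not any vendor's API.

from dataclasses import dataclass
from typing import Callable, Dict, List

Metrics = Dict[str, float]

@dataclass
class Rule:
    name: str
    condition: Callable[[Metrics], bool]   # when the rule fires
    action: Callable[[Metrics], None]      # how it remediates

def restart_app_pool(metrics: Metrics) -> None:
    print("remediation: restarting application pool")

def scale_out_web_tier(metrics: Metrics) -> None:
    print("remediation: adding one web server instance")

RULES: List[Rule] = [
    Rule("memory-leak-suspected",
         lambda m: m.get("app_memory_mb", 0) > 4096,
         restart_app_pool),
    Rule("cpu-saturation",
         lambda m: m.get("cpu_pct", 0) > 90 and m.get("queue_depth", 0) > 100,
         scale_out_web_tier),
]

def evaluate(metrics: Metrics) -> None:
    """Run every rule against the latest metric sample and remediate on match."""
    for rule in RULES:
        if rule.condition(metrics):
            print(f"rule fired: {rule.name}")
            rule.action(metrics)

if __name__ == "__main__":
    evaluate({"app_memory_mb": 5120, "cpu_pct": 45, "queue_depth": 12})
```

A learning-based AIOps system would go further, deriving the conditions and actions from historical operations data rather than relying on hand-written rules like these.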

Today’s prospective AIOps buyers must beware of portfolio repackaging — an AIOps label on the product branding doesn’t mean the tool uses true artificial intelligence. Question the vendor carefully about how its system learns on the go, deals with unexpected changes and manages idempotency. 2020 might be the year of AIOps’ rise, but it might also be littered with the corpses of AIOps vendors that get things wrong.

AIOps’ path for the future

As we move through 2020 and beyond, AIOps’ meaning will evolve. Tools will better adopt learning systems to model the whole environment and will start to use advanced methods to bring idempotency — the capability to define an end result and then ensure that it is achieved — to the fore. AIOps tools must be able to either take input from the operations team or from the platform itself and create the scripts, VMs, containers, provisioning templates and other details to meet the applications’ requirements. The system must monitor the end result from these hosting decisions and ensure that not only is it as-expected, but that it remains so, no matter how the underlying platform changes. Over time, AIOps tools should extend so that business stakeholders also have insights into the operations environment.
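A minimal sketch of that idempotent, desired-state approach is shown below, assuming an invented state format and stubbed provisioning calls; real AIOps platforms would observe the platform continuously and generate far richer artifacts (scripts, VMs, containers, templates) than this illustrates.

```python
# Simplified sketch of an idempotent reconcile loop: compare desired state to
# observed state and only act on the difference, so re-running it is harmless.
# The state format and provisioning calls are hypothetical.

from typing import Dict

DesiredState = Dict[str, int]   # e.g. {"web": 3, "worker": 2} instances per role

def observe_current_state() -> DesiredState:
    # In a real system this would query the platform (VMs, containers, etc.).
    return {"web": 2, "worker": 2}

def provision(role: str, count: int) -> None:
    print(f"provisioning {count} extra '{role}' instance(s)")

def decommission(role: str, count: int) -> None:
    print(f"removing {count} surplus '{role}' instance(s)")

def reconcile(desired: DesiredState) -> None:
    """Drive the platform toward the desired state; a no-op when already there."""
    current = observe_current_state()
    for role, want in desired.items():
        have = current.get(role, 0)
        if want > have:
            provision(role, want - have)
        elif want < have:
            decommission(role, have - want)
        # want == have: nothing to do, which is what makes the loop idempotent

if __name__ == "__main__":
    reconcile({"web": 3, "worker": 2})
```

Because the loop only acts on the difference between desired and observed state, running it again after the platform changes underneath simply re-converges the environment, which is the end-result guarantee described above.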

Such capabilities will mean that AIOps platforms move from just operations environment tool kits to part and parcel of the overall BizDevOps workflows. AIOps will mean an overarching orchestration system for the application hosting environment, a platform that manages all updates and patches, and provides feedback loops through the upstream environment.

The new generation of AIOps tools and platforms will focus on how to avoid manual intervention in the operations environment. Indeed, manual interventions are likely to be where AIOps could fail. For example, an administrator who puts wrong information into the flow or works outside of the AIOps system to make any configuration changes could start a firestorm of problems. When the AIOps system tries to fix them, it will find that it does not have the required data available to effectively model the change the administrator has made.

2020 will see AIOps’ first baby steps to becoming a major tool for the systems administrator. Those who embrace the idea of AIOps must ensure that they have the right mindset: AIOps has to be the center of everything. Only in extreme circumstances should any action be taken outside of the AIOps environment.

The operations team must reach out to the development teams to see how their feeds can integrate into an AIOps platform. If DevOps tools vendors realize AIOps’ benefits, they might provide direct integrations for downstream workflows or include AIOps capabilities into their own platform. This trend could expand the meaning of AIOps to include business capabilities and security as well.

As organizations move to highly complex, highly dynamic platforms, any dependency on a person’s manual oversight dooms the deployment to failure. Simple automation will not be a workable way forward — artificial intelligence is a must.


Bringing together deep bioscience and AI to help patients worldwide: Novartis and Microsoft work to reinvent treatment discovery and development   – The Official Microsoft Blog

In the world of commercial research and science, there’s probably no undertaking more daunting – or more expensive – than the process of bringing a new medicine to market. For a new compound to make it from initial discovery through development, testing and clinical trials to finally earn regulatory approval can take a decade or more. Nine out of 10 promising drug candidates fail somewhere along the way. As a result, on average, it costs life sciences companies $2.6 billion to introduce a single new prescription drug.

This is much more than just a challenge for life sciences companies. Streamlining drug development is an urgent issue for human health more broadly. From uncovering new ways to treat age-old sicknesses such as malaria, which still kills hundreds of thousands of people every year, to finding new cancer treatments and developing new vaccines to prevent highly contagious diseases from turning into global pandemics, the impact in lives saved worldwide would be enormous if we could make inventing new medicines faster.

As announced today, this is why Novartis and Microsoft are collaborating to explore how to take advantage of advanced Microsoft AI technology combined with Novartis’ deep life sciences expertise to find new ways to address the challenges underlying every phase of drug development – including research, clinical trials, manufacturing, operations and finance. In a recent interview, Novartis CEO Vas Narasimhan spoke about the potential for this alliance to unlock the power of AI to help Novartis accelerate research into new treatments for many of the thousands of diseases for which there is, as yet, no known cure.

In the biotech industry, there have been amazing scientific advances in recent years that have the potential to revolutionize the discovery of new, life-saving drugs. Because many of these advances are based on the ability to analyze huge amounts of data in new ways, developing new drugs has become as much an AI and data science problem as it is a biology and chemistry problem. This means companies like Novartis need to become data science companies to an extent never seen before. Central to our work together is a focus on empowering Novartis associates at each step of drug development to use AI to unlock the insights hidden in vast amounts of data, even if they aren’t data scientists. That’s because while the exponential increase in digital health information in recent years offers new opportunities to improve human health, making sense of all the data is a huge challenge.

The issue isn’t just one of overwhelming volume. Much of the information exists as unstructured data, such as research lab notes, medical journal articles and clinical trial results, typically stored in disconnected systems, which makes bringing all that data together extremely difficult.

Our two companies have a dream. We want all Novartis associates – even those without special expertise in data science – to be able to use Microsoft AI solutions every day to analyze large amounts of information and discover new correlations and patterns critical to finding new medicines. The goal of this strategic collaboration is to make this dream a reality. It offers the potential to empower everyone from researchers exploring the potential of new compounds and scientists figuring out dosage levels, to clinical trial experts measuring results, operations managers seeking to run supply chains more efficiently, and even business teams looking to make more effective decisions. And as associates work on new problems and develop new AI models, they will continually build on each other’s work, creating a virtuous cycle of exploration and discovery. The result? Pervasive intelligence that spans the company and reaches across the entire drug discovery process, improving Novartis’ ability to find answers to some of the world’s most pressing health challenges.

As part of our work with Novartis, data scientists from Microsoft Research and research teams from Novartis will also work together to investigate how AI can help unlock transformational new approaches in three specific areas. The first is about personalized treatment for macular degeneration – a leading cause of irreversible blindness. The second will involve exploring ways to use AI to make manufacturing new gene and cell therapies more efficient, with an initial focus on acute lymphoblastic leukemia. And the third area will focus on using AI to shorten the time required to design new medicines, using pioneering neural networks developed by Microsoft to automatically generate, screen and select promising molecules. As our work together moves forward, we expect that the scope of our joint research will grow.

At Microsoft, we’re excited about the potential for this collaboration to transform R&D in life sciences. As Microsoft CEO Satya Nadella explained, putting the power of AI in the hands of Novartis employees will give the company unprecedented opportunities to explore new frontiers of medicine that will yield new life-saving treatments for patients around the world.

While we’re just at the beginning of a long process of exploration and discovery, this strategic alliance marks the start of an important collaborative effort that promises to have a profound impact on how breakthrough medicines and treatments are developed and delivered. With the depth and breadth of knowledge that Novartis offers in bioscience and Microsoft’s unmatched expertise in computer science and AI, we have a unique opportunity to reinvent the way new medicines are created. Through this process, we believe we can help lead the way forward toward a world where high-quality treatment and care is significantly more personal, more effective, more affordable and more accessible.


Author: Steve Clarke

Chief transformation officer takes digital one step further

There’s a new player on the block when it comes to the team leading digital efforts within a healthcare organization.

Peter Fleischut, M.D., has spent the last two years leading telemedicine, robotics, robotic process automation and artificial intelligence efforts at New York-Presbyterian as its chief transformation officer, a relatively new title that is beginning to take form right alongside the chief digital officer.

Fleischut works as part of the organization’s innovation team under New York-Presbyterian CIO Daniel Barchi. Formerly the chief innovation officer for New York-Presbyterian, Fleischut described his role as improving care delivery and providing a better digital experience.

“I feel like we’re past the age of innovating. Now it’s really about transforming our care model,” he said.

What is a chief transformation officer?

The chief transformation officer is “larger than a technology or digital role alone,” according to Barchi.

Indeed, Laura Craft, analyst at Gartner, said she’s seeing healthcare organizations use the title more frequently to indicate a wider scope than, say, the chief digital officer.

The chief digital officer, a title that emerged more than five years ago, is often described as taking an organization from analog to digital. The digital officer role is still making inroads in healthcare today. Kaiser Permanente recently named Prat Vemana as its first chief digital officer for the Kaiser Foundation Health Plan and Hospitals. In the newly created role, Vemana is tasked with leading Kaiser Permanente’s digital strategy in collaboration with internal health plan and hospital teams, according to a news release.

A chief transformation officer, however, often focuses not just on digital but also emerging tech, such as AI, to reimagine how an organization does business.

“It has a real imperative to change the way [healthcare] is operating and doing business, and healthcare organizations are struggling with that,” Craft said. 

Barchi, who has been CIO at New York-Presbyterian for four years, said the role of chief transformation officer was developed by the nonprofit academic medical center to “take technology to the next level” and scale some of the digital programs it had started. The organization sought to improve not only back office functions but to advance the way it operates digitally when it comes to the patient experience, from hospital check-in to check-out.

I feel like we’re past the age of innovating. Now it’s really about transforming our care model.
Peter Fleischut, M.D., chief transformation officer, New York-Presbyterian

Fleischut was selected for the role due to his background as a clinician and his prior experience as the organization’s chief innovation officer. He has been in the role for two years and is charged with further developing and scaling New York-Presbyterian’s AI, robotics and telemedicine programs.

The organization, which has four major divisions and comprises 10 hospitals, invested deeply in its telemedicine efforts and built a suite of services about four years ago. In 2016, it completed roughly 1,000 synchronous video visits between providers and patients. Now, the organization expects to complete between 500,000 and 1,000,000 video visits by the end of 2019, Fleischut said during his talk at the recent mHealth & Telehealth World Summit in Boston.

One of the areas where New York-Presbyterian expanded its telemedicine services under Fleischut’s lead was in emergency rooms, offering low-acuity patients the option of seeing a doctor virtually instead of in-person, which shortened patient discharge times from an average baseline of two and a half hours to 31 minutes.

The healthcare organization has also expanded its telemedicine services to kiosks set up in local Walgreens, and has a mobile stroke unit operating out of three ambulances. Stroke victims are treated in the ambulance virtually by an on-call neurologist.  

“At the end of the day with innovation and transformation, it’s all about speed, it’s all about time, and that’s what this is about,” Fleischut said. “How to leverage telemedicine to provide faster, quicker, better care to our patients.”

Transforming care delivery, hospital operations  

Telemedicine is one example of how New York-Presbyterian is transforming the way it interacts with patients. Indeed, that’s one of Fleischut’s main goals — to streamline the patient experience digitally through tools like telemedicine, Barchi said.

“The way you reach patients is using technology to be part of their lives,” Barchi said. “So Pete, in his role, is really important because we wanted someone focused on that patient experience and using things like telemedicine to make the patient journey seamless.” 

But for Fleischut to build a better patient experience, he also has to transform the way the hospital operates digitally, another one of his major goals.

As an academic medical center, Barchi said the organization invests significantly in advanced, innovative technology, including robotics. Barchi said he works with one large budget to fund innovation, information security and electronic medical records.

One hospital operation Fleischut worked to automate using robotics was food delivery. Instead of having hospital employees deliver meals to patients, New York-Presbyterian now uses large robots loaded with food trays that are programmed to deliver patient meals.

Fleischut’s work, Barchi said, will continue to focus on innovative technologies transforming the way New York-Presbyterian operates and delivers care.

“Pete’s skills from being a physician with years of experience, as well as his knowledge of technology, allow him to be truly transformative,” Barchi said.

In his role as chief transformation officer, Fleischut said he considers people and processes the most important part of the transformation journey. Without having the right processes in place for changing care delivery and without provider buy-in, the effort will not be a success, he said.

“Focusing on the people and the process leads to greater adoption of technologies that, frankly, have been beneficial in other industries,” he said.


Enterprises that use data will thrive; those that don’t, won’t

There’s a growing chasm between enterprises that use data, and those that don’t.

Wayne Eckerson, founder and principal consultant of Eckerson Group, calls it the data divide, and according to Eckerson, the companies that will thrive in the future are the ones that are already embracing business intelligence no matter the industry. They’re taking human bias out of the equation and replacing it with automated decision-making based on data and analytics.

Those that are data laggards, meanwhile, are already in a troublesome spot, and those that have not embraced analytics as part of their business model at all are simply outdated.

Eckerson has more than 25 years of experience in the BI industry and is the author of two books — Secrets of Analytical Leaders: Insights from Information Insiders and Performance Dashboards: Measuring, Monitoring, and Managing Your Business.  

In the first part of a two-part Q&A, Eckerson discusses the divide between enterprises that use data and those that don’t, as well as the importance of DataOps and data strategies and how they play into the data divide. In the second part, he talks about self-service analytics, the driving force behind the recent merger and acquisition deals, and what intrigues him about the future of BI.

How stark is the data divide, the gap between enterprises that use data and those that don’t?

Wayne Eckerson: It’s pretty stark. You’ve got data laggards on one side of that divide, and that’s most of the companies out there today, and then you have the data elite, the companies [that] were born on data, they live on data, they test everything they do, they automate decisions using data and analytics — those are the companies [that] are going to take the future. Those are the companies like Google and Amazon, but also companies like Netflix and its spinoffs like Stitch Fix. They’re heavily using algorithms in their business. Humans are littered with cognitive biases that distort our perception of what’s going on out there and make it hard for us to make objective, rational, smart decisions. This data divide is a really interesting thing I’m starting to see happening that’s separating out the companies [that] are going to be competitive in the future. I think companies are really racing, spending money on data technologies, data management, data analytics, AI.

How does a DataOps strategy play into the data divide?

Wayne Eckerson, founder and principal consultant of Eckerson Group

Eckerson: That’s really going to be the key to the future for a lot of these data laggards who are continually spending huge amounts of resources putting out data fires — trying to fix data defects, broken jobs, these bottlenecks in development that often come from issues like uncoordinated infrastructure for data, for security. There are so many things that prevent BI teams from moving quickly and building things effectively for the business, and a lot of it is because we’re still handcrafting applications rather than industrializing them with very disciplined routines and practices. DataOps is what these companies need — first and foremost it’s looking at all the areas that are holding the flow of data back, prioritizing those and attacking those points.

What can a sound DataOps strategy do to help laggards catch up?

Eckerson: It’s improving data quality, not just at the first go-around when you build something but continuous testing to make sure that nothing is broken and users are using clean, validated data. And after that, once you’ve fixed the quality of data and the business becomes more confident that you can deliver things that make sense to them, then you can use DataOps to accelerate cycle times and build more things faster. This whole DataOps thing is a set of development practices and testing practices and deployment and operational practices all rolled into a mindset of continuous improvement that the team as a whole has to buy into and work on. There’s not a lot of companies doing it yet, but it has a lot of promise.
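As an illustration of that continuous testing, a DataOps pipeline typically runs automated checks against every new batch of data before it is promoted to users. The checks below are generic, hand-rolled examples; in practice teams often use dedicated data testing frameworks, and the column names and thresholds here are assumptions for illustration only.

```python
# Illustrative data quality checks of the kind a DataOps pipeline runs on every
# load, so defects are caught before users see the data. The rules shown
# (non-null keys, value ranges, freshness) are generic examples.

from datetime import datetime, timedelta
from typing import Dict, List

Row = Dict[str, object]

def check_not_null(rows: List[Row], column: str) -> List[str]:
    return [f"null {column} in row {i}" for i, r in enumerate(rows) if r.get(column) is None]

def check_range(rows: List[Row], column: str, lo: float, hi: float) -> List[str]:
    return [f"{column}={r[column]} out of range in row {i}"
            for i, r in enumerate(rows)
            if r.get(column) is not None and not (lo <= float(r[column]) <= hi)]

def check_freshness(latest_load: datetime, max_age: timedelta) -> List[str]:
    age = datetime.utcnow() - latest_load
    return [f"data is stale by {age}"] if age > max_age else []

def run_checks(rows: List[Row], latest_load: datetime) -> None:
    failures = (check_not_null(rows, "customer_id")
                + check_range(rows, "order_total", 0, 1_000_000)
                + check_freshness(latest_load, timedelta(hours=24)))
    if failures:
        # In a pipeline this would block promotion of the batch and alert the team.
        raise ValueError("data quality checks failed: " + "; ".join(failures))

if __name__ == "__main__":
    sample = [{"customer_id": 42, "order_total": 129.99}]
    run_checks(sample, latest_load=datetime.utcnow())
    print("all checks passed")
```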

Data strategy differs for each company given its individual needs, but as BI evolves and becomes more widespread, more intuitive, more necessary no matter the size of the organization and no matter the industry, what will be some of the chief tenets of data strategy going forward?

Eckerson: Today, companies are racing to implement data strategies because they realize they’re … data laggard[s]. In order to not be disrupted in this whole data transformation era, they need a strategy. They need a roadmap and a blueprint for how to build a more robust infrastructure for leveraging data, for internal use, for use with customers and suppliers, and also to embed data and analytics into the products that they build and deliver. The data strategy is a desire to catch up and avoid being disrupted, and also as a way to modernize because there’s been a big leap in the technologies that have been deployed in this area — the web, the cloud, big data, big data in the cloud, and now AI and the ability to move from reactive reporting to proactive predictions and to be able to make recommendations to users and customers on the spot. This is a huge transformation that companies have to go through, and so many of them are starting at zero.

So it’s all about the architecture?

Eckerson: A fundamental part of the data strategy is the data architecture, and that’s what a lot of companies focus on. In fact, for some companies the data strategy is synonymous with the data architecture, but that’s a little shortsighted because there are lots of other elements to a data strategy that are equally important. Those include the organization — the people and how they work together to deliver data capabilities and analytic capabilities — and the culture, because you can build an elegant architecture, you can buy and deploy the most sophisticated tools. But if you don’t have a culture of analytics, if people don’t have a mindset of using data to make decisions, to weigh options to optimize processes, then it’s all for naught. It’s the people, it’s the processes, it’s the organization, it’s the culture, and then, yes, it’s the technology and the architecture too.

Editors’ note: This interview has been edited for clarity and conciseness.


How to deal with the on-premises vs. cloud challenge

For some administrators, the cloud is not a novelty. It’s critical to their organization. Then, there’s you, the lone on-premises holdout.

With all the hype about cloud and Microsoft’s strong push to get IT to use Azure for services and workloads, it might seem like you are the only one in favor of remaining in the data center in the great on-premises vs. cloud debate. The truth is the cloud isn’t meant for everything. While it’s difficult to find a workload not supported by the cloud, that doesn’t mean everything needs to move there.

Few people like change, and a move to the cloud is a big adjustment. You can’t stop your primary vendors from switching their allegiance to the cloud, so you will need to be flexible to face this new reality. Take a look around at your options as more vendors turn their focus away from the data center and on-premises management.

Is the cloud a good fit for your organization?

The question is not whether it can be done, but whether it should be. All too often, it comes down to money. For example, it’s possible to take a large-capacity file server in the hundreds of terabytes and place it in Azure. Microsoft’s cloud can easily support this workload, but can your wallet?

Once you get over the sticker shock, think about it. If you’re storing frequently used data, it might make business sense to put that file server in Azure. However, if this is a traditional file server with mostly stale data, then is it really worth the price tag as opposed to using on-premises hardware?

When you run the numbers on what it takes to put a file server in Azure, the costs can add up.
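A rough back-of-the-envelope model shows why the numbers add up quickly. The unit prices and hardware figures below are placeholders, not actual Azure rates or quotes; substitute values from the Azure pricing calculator and your own procurement costs before drawing any conclusions.

```python
# Rough back-of-the-envelope comparison of cloud vs. on-premises file storage.
# All prices are illustrative placeholders, NOT actual Azure rates; substitute
# numbers from the vendor's pricing calculator before drawing conclusions.

def cloud_monthly_cost(capacity_tb: float,
                       price_per_gb_month: float = 0.05,   # assumed storage rate
                       egress_gb: float = 500,
                       price_per_egress_gb: float = 0.08) -> float:
    storage = capacity_tb * 1024 * price_per_gb_month
    egress = egress_gb * price_per_egress_gb
    return storage + egress

def on_prem_monthly_cost(hardware_cost: float = 60_000,   # assumed server + disks
                         lifespan_months: int = 60,
                         power_and_space: float = 300,
                         admin_overhead: float = 500) -> float:
    return hardware_cost / lifespan_months + power_and_space + admin_overhead

if __name__ == "__main__":
    capacity_tb = 200  # a file server in the hundreds of terabytes
    print(f"cloud:   ~${cloud_monthly_cost(capacity_tb):,.0f}/month")
    print(f"on-prem: ~${on_prem_monthly_cost():,.0f}/month")
```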

Part of the on-premises vs. cloud dilemma is that you have to weigh the financial costs as well as the tangible benefits and drawbacks. Another part of the calculation, determining what makes sense as an operational expense rather than a capital expense, is the people factor. Too often, admins find themselves in a situation where management sees only one side of this formula and wants to make the cloud leap, while the admins must look at the reality and explain both the pros and cons — the latter of which no one wants to hear.


The cloud question also goes deeper than the Capex vs. Opex argument for the admins. With so much focus on the cloud, what happens to those environments that simply don’t or can’t move? It’s not only a question of what this means today, but also what’s in store for them tomorrow.

As vendors move on, the walls close in

With the focus for most software vendors on cloud and cloud-related technology, the move away from the data center should be a warning sign for admins that can’t move to the cloud. The applications and tools you use will change to focus on the organizations working in the cloud with less development on features that would benefit the on-premises data center.

One of the most critical aspects of this shift will be your monitoring tools. As the cloud gains prominence, it will get harder to find tools that continue to support local Windows Server installations over cloud-based ones. We already see this trend with log aggregation tools that used to be available as on-site installs but are now almost all SaaS-based offerings. This is just the start.

If a tool moves from on premises to the cloud but retains the ability to monitor data center resources, that is an important distinction to remember. That means you might have a workable option to keep production workloads on the ground and work with the cloud as needed or as your tools make that transition.

As time goes on, an evaluation process might be in order. If your familiar tools are moving to the cloud without support for on-premises workloads, the options might be limited. Should you pick up new tools and then invest the time to install and train the staff how to use them? It can be done, but do you really want to?

While not ideal, another viable option is to take no action; the install you have works, and as long as you don’t upgrade, everything will be fine. The problem with remaining static is getting left behind. The base OSes will change, and the applications will get updated. But, if your tools can no longer monitor them, what good are they? You also introduce a significant security risk when you don’t update software. Staying put isn’t a good long-term strategy.

With the cloud migration will come other choices

The same challenges you face with your tools also apply to your traditional on-premises applications. Longtime stalwarts, such as Exchange Server, still offer a local installation, but it’s clear that Microsoft’s focus for messaging and collaboration is its Office 365 suite.

The harsh reality is more software vendors will continue on the cloud path, which they see as the new profit centers. Offerings for on-premises applications will continue to dwindle. However, there is some hope. As the larger vendors move to the cloud, it opens up an opportunity in the market for third-party tools and applications that might not have been on your radar until now. These products might not be as feature-rich as an offering from the larger vendors, but they might tick most of the checkboxes for your requirements.


New enterprise 14 TB HDD options cater to cloud-scale users

The latest wave of helium-based 14 TB hard disk drives shows there’s still innovative life in the storage media, mainly for cloud-scale users in need of economical, high-density drives.

Seagate Technology this week launched its new 14 TB helium hard drive portfolio. It includes the 12 Gbps SAS- and 6 Gbps SATA-based Exos X14 designed for hyperscale data centers and SATA-based IronWolf for NAS, SkyHawk for surveillance systems and BarraCuda Pro for desktop workstations and DAS.

Toshiba recently disclosed that it’s sampling 14 TB and 12 TB helium-sealed 7,200 rpm SAS HDDs with 12 Gbps data transfer rates. They join Toshiba’s previous helium-based 14 TB HDD that uses the 6 Gbps SATA interface.

Earlier this year, Western Digital began shipping helium-based 12 Gbps SAS- and 6 Gbps SATA-based 14 TB HDD models to select hyperscale cloud customers. China-based internet service provider Tencent, a customer of Western Digital’s 12 TB HelioSeal HDDs, confirmed it was qualifying the latest 14 TB Ultrastar DC HC530 drives. The 14 TB HDD models use helium instead of air to reduce aerodynamic drag and turbulence inside the drives. Helium-based HDDs require less power and enable manufacturers to use thinner disks and squeeze in more tracks to store more data per HDD.

Shingled magnetic recording

In June, Western Digital disclosed Dropbox as a customer for its 3.5-inch helium-based 14 TB Ultrastar DC HC620 HDD. The drive — formerly called the Ultrastar Hs14 — uses host-managed shingled magnetic recording (SMR) technology, in contrast to the two-dimensional magnetic recording the Ultrastar DC HC530 uses.

Dropbox qualified and deployed Western Digital’s SMR-based HC620 HDD for its custom-built Magic Pocket storage infrastructure. A Dropbox blog noted the SMR HDDs offer higher bit density and lower cost per GB than HDDs that use conventional perpendicular magnetic recording technology, which aligns bits vertically rather than horizontally.

Conventional magnetic recording enables random data writes across the entire disk, whereas SMR is best suited to writing data in sequential mode. Dropbox chose the host-managed SMR HDDs for the Magic Pocket workload because its storage architecture is geared for sequential writes.

But the Dropbox scenario is not the norm at this point. John Rydning, an IDC analyst who focuses on HDDs, said demand is low for SMR technology because customers would need to tweak their systems in order to use drives designed for sequential rather than random writes.

“With SMR, tracks on the disk are partially overwritten intentionally — like shingles on a roof — to increase the number of tracks on the disk and increase the storage capacity per disk,” Rydning wrote in an email. “Yet, it’s difficult to change or modify data once it’s written to the disk.

Seagate’s IronWolf 14 TB HDD is designed for network-attached storage.

“Think of what it’s like to change a shingle in the middle of a roof. You have to peel off a few rows to change one shingle. Similar for SMR HDDs. To change data already written to the disk, you have to read several tracks to cache, modify the data in cache, then rewrite to the disk.”
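Rydning’s shingle analogy corresponds to a read-modify-write cycle at the zone level. The toy simulation below illustrates the idea in software only; real SMR drives and host-managed zone interfaces are considerably more involved.

```python
# Toy illustration of why rewriting data on an SMR drive is expensive: tracks in
# a shingled zone overlap, so changing one block means reading the rest of the
# zone into cache, modifying it, and rewriting the zone sequentially.
# This is a conceptual simulation, not how real drive firmware works.

from typing import List

ZONE_SIZE = 8  # blocks per shingled zone (illustrative)

class ShingledZone:
    def __init__(self) -> None:
        self.blocks: List[bytes] = [b"\x00"] * ZONE_SIZE

    def sequential_write(self, data: List[bytes]) -> None:
        """Cheap path: SMR zones are written front to back."""
        self.blocks = list(data) + [b"\x00"] * (ZONE_SIZE - len(data))

    def modify_block(self, index: int, value: bytes) -> int:
        """Expensive path: read-modify-write the whole zone; returns blocks moved."""
        cache = list(self.blocks)        # read the zone into cache
        cache[index] = value             # modify the one block in cache
        self.sequential_write(cache)     # rewrite the zone sequentially
        return len(cache)                # every block was rewritten for one change

if __name__ == "__main__":
    zone = ShingledZone()
    zone.sequential_write([bytes([i]) for i in range(ZONE_SIZE)])
    moved = zone.modify_block(3, b"\xff")
    print(f"changed 1 block, rewrote {moved} blocks")
```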

Rydning said the competition for 14 TB HDD market share in the near term will center on drives that use conventional magnetic recording rather than SMR technology.

Additional high-density HDD technologies in the works include heat-assisted magnetic recording and microwave-assisted magnetic recording (MAMR), but those options have yet to emerge in commercially shipping products. Western Digital said last year that it expects to begin shipping ultra-high capacity MAMR HDDs in 2019 and claimed that MAMR has the potential to enable HDDs of 40 TB and beyond by 2025.

Toshiba’s nine-platter 14 TB HDD

In the conventional magnetic recording space, Toshiba’s new MG07SCA 14 TB HDD uses a nine-platter helium-sealed mechanical design, unlike the eight-disk 14 TB options from rivals Seagate and Western Digital. Toshiba uses eight platters with its helium-based 12 TB MG07 model, after using seven platters with its traditional air-based MG06 10 TB generation, said Scott Wright, the company’s director of HDD marketing.

Toshiba lagged competitors on previous product generations with 8 TB, 10 TB and 12 TB HDDs, IDC’s Rydning said. The company is now time-to-market competitive with Western Digital and Seagate with 14 TB HDDs for cloud-scale customers and positions itself well for higher capacities in the future, he said.

But Toshiba’s nine-platter 14 TB HDDs carry a higher manufacturing cost per drive for the extra disk and two heads than the eight-platter 14 TB HDDs from Western Digital and Seagate, Rydning said. “On the other hand, given that Western Digital and Seagate are using the higher areal density heads and disk, they may encounter production yield challenges in the first one or two quarters of production,” he said.

Toshiba didn’t disclose retail pricing for the MG07SCA 14 TB HDDs.

Enterprise HDD market leaders

Seagate and Western Digital lead the market in nearline enterprise-class 7,200 rpm SAS and SATA HDDs, according to Trendfocus vice president John Chen. Seagate has the edge in units shipped, and Western Digital leads in exabytes shipped, Chen said. SATA is the dominant interface for nearline HDDs, with SAS at only about 20% unit share, he said.

SATA HDDs are single ported, but Toshiba’s Wright said there’s still demand for dual-ported SAS HDDs among users who want to map redundant paths to each storage device for high availability. He said some customers want to be able to support SSDs and HDDs in a common architecture, with the same 12 Gbps data transfer rate.

Wright doesn’t expect the new wave of upcoming 3D NAND flash-based SSDs that store four bits of data per cell — known as quadruple-level cell (QLC) — to eat into the nearline enterprise HDD market that is “all about cost of capacity.”

“Where we see solid state challenging HDD is in very low capacities, at the lower end of the notebook PC market, and then also, of course, on performance at the high end of the enterprise market where some SSDs are being used in place of the traditional 15,000 rpm types of high-performance [hard disk] drives,” Wright said.

For Trade – 2017 27” iMac 3.4GHz i5 5K Retina – 16GB RAM / 1TB Fusion Drive

Posting this on behalf of my brother. Feel free to communicate with me and if there’s serious interest, I can send you his details via PM

For trade or possible straight sale: 2017 27” iMac bought March 2018 with email receipt to show outright purchase

I’m seriously considering either selling this or swapping it out for a decent spec’d Mac mini (must have a min of 16GB RAM and either a 256GB SSD or above) OR a 15.4″ MBP with decent RAM and an SSD

Really would need them to have warranty (either normal Apple or AppleCare etc) ideally and they need to be in superb condition

We’ve just put pen to paper, so to speak, to get an extension done, as we feel it’d be the right time now, before (if we’re lucky enough) we start a family…! So my office needs to be small and fluid rather than a huge office desk, 27″ iMac and hifi stuff as it is currently, because that room is going to get knocked down, and I may end up working from the living room…!

iMac spec: 27-inch iMac with Retina 5K display

Bought March 2018, so well within warranty and within the AppleCare addition period. In absolute pristine condition/as new, fully boxed with power cable (no mouse or keyboard though – these can be picked up easily enough, and I feel they’re a personal preference anyway; I use a trackpad and a fully wired, full-size keyboard).

I paid full price for it and would be after £1450 with no offers

As for the swap: it’d be the Mac mini plus cash my way, or the same with the MBP, with cash on top to match the value of my iMac

I’d consider an MBP with the value of £1k maximum, ideally less, or a Mac mini with the value of £500 or less. But feel free to drop me a line with what you’ve got if you’re interested in this

I’m in Preston, Lancs and would rather this be sorted in person, but I’m happy to send the iMac at a 50/50 split of the cost. If the Mac mini/MBP seller isn’t local, I’d need to have that sent to me first so I can shift everything across correctly and then fully wipe/reinstall the iMac; it should take a day at most

Let me know if this could be of interest, or feel free to drop me a PM

Price and currency: 1450
Delivery: Goods must be exchanged in person
Payment method: Cash ideally, but happy with a bank transfer, but no PayPal
Location: Higher Walton, just outside of Preston
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected


Mac OS X 10.5 Leopard – retail version DVD never installed

I’ve sold a copy of Leopard before, and I was surprised there’s still a market for older OS DVDs.

Found this at the bottom of my drawer so it’s no use to me anymore

It’s never been installed, so it’s in very good condition.


Price and currency: £30.00
Delivery: Delivery cost is included within my country
Payment method: PPG
Location: Retford
Advertised elsewhere?: Advertised elsewhere
Prefer goods…


Faulty T300 Chi 8GB 128GB 12.5 Core M

T300 Chi for sale, has a crack – see pics below. Touch still works; however, there’s an area near the crack at the bottom where the screen doesn’t respond to touch. It’s not noticeable when the screen is on and docked to the keyboard, but nevertheless it’s there.

Another issue is the tablet keeps randomly going to sleep.

So asking for 75.

Price and currency: 75
Delivery: Delivery cost is included within my country
Payment method: bank transfer
Location: dorset…


Macbook Pro Early 2011, 13″, 2.7GHz – 8GB ram – 256SSD + 500HDD

Hello guys,

So, I have this Macbook and I’m posting just to see if there’s any interest.

It’s this one: MacBook Pro (13-inch, Early 2011) – Technical Specifications with 2.7GHz processor.

I’ve been using it as a Plex server for quite a while now, but I’ve upgraded and it’s unnecessary now. It’s been running solid without any problems for years, first as a personal computer and then as a server. Champ. Don’t want to let it go, but…
