
Data growth spawns enterprise data management system challenges

Organizations are creating and consuming more data than ever before, spawning enterprise data management system challenges and opportunities.

A key challenge is volume: as enterprises create more data, they must manage and store more of it. Organizations are also increasingly relying on the cloud for their storage needs because of the cloud’s scalability and low cost.

IDC’s Global DataSphere Forecast currently estimates that in 2020, enterprises will create and capture 6.4 zettabytes of new data. Among the types of new data being created, productivity data — operational, customer and sales data, as well as embedded data — is the fastest-growing category, according to IDC.

“Productivity data encompasses most of the data we create on our PCs, in enterprise servers or on scientific computers,” said John Rydning, research vice president for IDC’s Global DataSphere.

Productivity data also includes data captured by sensors embedded in industrial devices and endpoints, which can be leveraged by an organization to reduce costs or increase revenue.

Rydning also noted that IDC is seeing growth in productivity-related metadata, which provides additional data about the captured or created data that can be used to enable deeper analysis.

Ranking organizations by data maturity, an Enterprise Strategy Group survey sponsored by Splunk found that few organizations are data innovators; most enterprises have low data maturity.

Enterprise data management system challenges in a world of data growth

Looking ahead, Rydning sees challenges for enterprise data management. 

Perhaps the biggest is dealing with the growing volume of archived data. Organizations will need to decide whether that data is best kept on relatively accessible storage systems for artificial intelligence analysis, or whether it is more economical to move it to lower-cost media such as tape, which is less readily available for analysis.
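The disk-versus-tape decision ultimately comes down to cost arithmetic. Here is a back-of-the-envelope sketch of that trade-off in Python; every price in it is a hypothetical placeholder, not a vendor quote:

```python
# Back-of-the-envelope comparison of keeping archive data on disk vs. tape.
# All figures are hypothetical placeholders, not vendor pricing.

DISK_COST_PER_TB_MONTH = 10.00   # accessible storage, ready for AI analysis
TAPE_COST_PER_TB_MONTH = 1.00    # cold tape tier
RESTORE_COST_PER_TB = 5.00       # cost to recall data from tape for analysis

def cheaper_tier(tb, months, expected_restores):
    """Compare total cost of disk vs. tape for an archive of `tb` terabytes
    held for `months`, with `expected_restores` full recalls from tape."""
    disk = tb * months * DISK_COST_PER_TB_MONTH
    tape = (tb * months * TAPE_COST_PER_TB_MONTH
            + expected_restores * tb * RESTORE_COST_PER_TB)
    return ("disk" if disk < tape else "tape"), disk, tape

# A rarely recalled archive favors tape; heavy re-analysis can favor disk.
for restores in (1, 100):
    tier, disk, tape = cheaper_tier(tb=500, months=36, expected_restores=restores)
    print(f"{restores:>3} restores: disk ${disk:,.0f} vs tape ${tape:,.0f} -> {tier}")
```

The point of the exercise is not the specific prices but the shape of the decision: the more often archived data is recalled for analysis, the weaker the case for tape becomes.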

Another challenge is handling data from the edge of the network, which is expected to grow in the coming years. There too the question will be where organizations should store reference data for rapid analysis.

“Organizations will increasingly need to be prepared to keep up with the growth of data being generated across a wider variety of endpoint devices feeding workflows and business processes,” Rydning said.

The data management challenge in the cloud

In 2019, 34% of enterprise data was stored in the cloud. By 2024, IDC expects that 51% of enterprise data will be stored in the cloud.

While the cloud offers organizations a more scalable and often easier way to store data than on-premises approaches, not all that data has the same value.


“Companies are continuing to dump data into storage without thinking about the applications that need to consume it,” said Monte Zweben, co-founder and CEO of Splice Machine. “They just substituted cheap cloud storage, and they continue to not curate it or transform it to be useful. It is now a cloud data swamp.”

The San Francisco-based vendor develops a distributed SQL relational database management system with integrated machine learning capabilities. While simply dumping data into the cloud isn’t a good idea, that doesn’t mean Zweben is opposed to the idea of cloud storage.

Indeed, Zweben suggested that organizations use the cloud, since cloud storage is relatively cheap. The key is to make sure that instead of just dumping data, enterprises find ways to use that data effectively.

“You may later realize you need to train ML [machine learning] models on data that you previously did not think was useful,” Zweben said.

Enterprise data management system lessons from data innovators

“Without a doubt, some companies are storing a lot of low-value data in the cloud,” said Andi Mann, chief technology advocate at Splunk, a security information and event management vendor. “But it is tough to say any specific dataset is unnecessary for any given business.”

In his view, the problem isn’t necessarily storing data that isn’t needed, but rather storing data that isn’t being used effectively.

Splunk sponsored a March 2019 study conducted by Enterprise Strategy Group (ESG) about the value of data. The report, based on responses from 1,350 business and IT decision-makers, segments users by data maturity levels, with “data innovators” being the top category.

“While many organizations do have vast amounts of data — and that might put them in the data innovator category — the real difference between data innovators and the rest is not how much data they have, but how well they enable their business to access and use it,” Mann said.

Among the findings in the report is that 88% of data innovators employ highly skilled data investigators. However, even skilled people are not enough, so 85% of these innovative enterprises use best-of-breed analytics tools, and make sure to provide easy access to them.

“Instead of considering any data unnecessary, look at how to store even low-value data in a way that is cost-effective, while allowing you to surface important insights if or when you need to,” Mann suggested. “The key is to treat data according to its potential value, while always being ready to reevaluate that value.”


Accessibility tools support Hamlin Robinson students learning from home | Microsoft EDU

More than ever, educators are relying on technology to create inclusive learning environments that support all learners. As we recognize Global Accessibility Awareness Day, we’re pleased to mark the occasion with a spotlight on an innovative school that is committed to digital access and success for all.

Seattle-based Hamlin Robinson School, an independent school serving students with dyslexia and other language-based learning differences, didn’t set a specific approach to delivering instruction immediately after transitioning to remote learning. “Our thought was to send home packets of schoolwork and support the students in learning, and we quickly realized that was not going to work,” Stacy Turner, Head of School, explained in a recent discussion with the Microsoft Education Team.

About a week into distance learning, the school moved to more robust online instruction. The school serves grades 1-8, and students in fourth grade and up use Office 365 Education tools, including Microsoft Teams, so leveraging those same resources for distance learning was natural.

Built-in accessibility features

Stacy said the school was drawn to Microsoft resources for schoolwide use because of built-in accessibility features, such as dictation (speech-to-text), and the Immersive Reader, which relies on evidence-based techniques to help students improve at reading and writing.

“What first drew us to Office 365 and OneNote were some of the assistive technologies in the toolbar,” Stacy said. Learning and accessibility tools are embedded in Office 365 and can support students with visual impairments, hearing loss, cognitive disabilities, and more.

Josh Phillips, Head of Middle School, says for students at Hamlin Robinson, finding the right tools to support their learning is vital. “When we graduate our students, knowing that they have these specific language-processing needs, we want them to have fundamental skills within themselves and strategies that they know how to use. But we also want them to know what tools are available to them that they can bring in,” he said.

For example, for students who have trouble typing, a popular tool is the Dictate, or speech-to-text, function of Office 365. Josh said that a former student took advantage of this function to write a graduation speech at the end of eighth grade. “He dictated it through Teams, and then he was able to use the skills we were practicing in class to edit it,” Josh said. “You just see so many amazing ideas get unlocked and be able to be expressed when the right tools come along.”

Supporting teachers and students

Providing teachers with expertise around tech tools also is a focus at Hamlin Robinson. Charlotte Gjedsted, Technology Director, said the school introduced its teachers to Teams last year after searching for a platform that could serve as a digital hub for teaching and learning. “We started with a couple of teachers being the experts and helping out their teams, and then when we shifted into this remote learning scenario, we expanded that use,” Charlotte said.

“Teams seems to be the easiest platform for our students to use in terms of the way it’s organized and its user interface,” added Josh.

He said it was clear in the first days of distance learning that using Teams would be far better than relying on packets of schoolwork and the use of email or other tools. “The fact that a student could have an assignment issued to them, could use the accessibility tools, complete the assignment, and then return the assignment all within Teams is what made it clear that this was going to be the right app for our students,” he said. 

A student’s view

Will Lavine, a seventh-grade student at the school, says he appreciates the stepped-up emphasis on Teams and tech tools during remote learning, which is helping meet his learning needs. “I don’t have to write that much on paper. I can use technology, which I’m way faster at,” he said.

“Will has been using the ease of typing to his benefit,” added Will’s tutor, Elisa Huntley. “Normally, when he is faced with a handwritten assignment, he would spend quite a bit of time refining his work using only a pencil and eraser. But when he interfaces with Microsoft Teams, Will doesn’t feel the same pressure to do it right the first time. It’s much easier for him to re-type something. His ideas are flowing in ways that I have never seen before.”

Will added that he misses in-person school, but likes the collaborative nature of Teams, particularly the ability to chat with teachers and friends.

With the technology sorted out, Josh said educators have been very focused on ensuring students are progressing as expected. He says that teachers are closely monitoring whether students are joining online classes, engaging in discussions, accessing and completing assignments, and communicating with their teachers.

Connect, explore our tools

We love hearing from our educator community and students and families. If you’re using accessibility tools to create more inclusive learning environments and help all learners thrive, we want to hear from you! One great way to stay in touch is through Twitter by tagging @MicrosoftEDU.

And if you want to check out some of the resources Hamlin Robinson uses, remember that students and educators at eligible institutions can sign up for Office 365 Education for free, including Word, Excel, PowerPoint, OneNote, and Microsoft Teams.

In honor of Global Accessibility Awareness Day, Microsoft is sharing some exciting updates from across the company.


Long-delayed Dell EMC PowerStore midrange array makes debut

Enterprises can get a look at Dell EMC’s next-generation midrange storage, more than a year later than the array’s planned debut.

The Dell EMC PowerStore system that launched today marks the vendor’s first internally developed storage product since Dell bought EMC in 2015. Integration of Dell EMC-owned VMware is a key element, with an onboard ESXi hypervisor and capability to run applications on certain array models.

The base PowerStore is a 2U two-node enclosure for active-active failover and high availability. The chassis takes 25 NVMe SSDs, with support for Intel Optane persistent memory chips. Three 25-drive SAS expansion shelves can be added per chassis. Support for NVMe-oF architecture is on Dell EMC’s roadmap.

The PowerStore midrange storage has been a strategic priority for several years. More than 1,000 engineers across Dell EMC storage and the wider Dell Technologies organization worked on the system, said Caitlin Gordon, senior vice president of Dell EMC storage marketing.

“Data has never been more diverse or more valuable, but customers have had to choose between prioritizing service levels for performance and simplifying their operations. We know not every application can be virtualized, and we engineered PowerStore so you can consolidate all workloads on a single platform,” Gordon said.

What’s next for Dell EMC midrange?

Dell EMC first scheduled the new midrange system to launch in 2019, but a series of delays pushed it back to now. The all-flash PowerStore adds to Dell EMC’s overlapping midrange storage, although the vendor said the new system would help streamline the portfolio. Dell EMC is the market leader in storage, with midrange platforms that include the Unity flagship all-flash and hybrid arrays that EMC brought to market. Other midrange systems include the SC Series and PS Series. Dell acquired Compellent and EqualLogic arrays years ago and renamed both products. Compellent is now known as SC Series and still sold and supported by Dell. The EqualLogic arrays were renamed PS Series, which Dell maintains but no longer sells. Dell EMC executives said the other systems will be phased out slowly with PowerStore’s arrival.

Dell EMC PowerStore midrange array

The PowerStoreOS operating system incorporates a Kubernetes framework to serve storage management from containers and includes a machine learning engine to automate rebalancing and other administrative tasks. Based on internal testing, Dell EMC claims PowerStore delivers seven times the performance of the Unity XT array, with three times lower latency.

The ground-up PowerStore design eventually will emerge as the dominant Dell EMC midrange storage, said Scott Sinclair, a storage analyst with Enterprise Strategy Group.

“This is a completely new architecture that’s based on a container framework. It’s designed to address a bunch of different workload needs on one array. That’s not the type of hard work you put into a product just to add another midrange storage array,” Sinclair said.

A software capability called AppsOn allows data-intensive applications to access storage on PowerStore and use VMware vMotion to migrate them between core and cloud environments.

“The idea is that you can be within a VMware environment — let’s say VMware Cloud Foundation, or vSphere — and have different ways to move applications to various targets. AppsOn is a novel approach that gives you more flexibility to deploy apps, based on your resource needs,” Sinclair said.

Beta customer tried to ‘blow up’ PowerStore

Dell EMC guarantees data reduction of 4-to-1 with always-on inline deduplication. Dell claims the inline data reduction does not degrade performance. Based on the ratio, a single Dell EMC PowerStore with three expansion enclosures is rated to provide 2.8 PB of usable storage per appliance. Effective capacity scales to 11.3 PB in a maximum eight-node cluster.
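The capacity arithmetic behind those figures is easy to reproduce. The sketch below simply applies the quoted 4-to-1 reduction ratio and the two-nodes-per-appliance packaging; treat it as illustrative arithmetic, not a sizing tool:

```python
# Illustrative arithmetic using the figures quoted above: 4:1 data reduction,
# 2.8 PB effective capacity per appliance, two nodes per appliance and a
# maximum cluster size of eight nodes.

DATA_REDUCTION = 4.0
EFFECTIVE_PB_PER_APPLIANCE = 2.8
NODES_PER_APPLIANCE = 2
MAX_NODES = 8

raw_pb_per_appliance = EFFECTIVE_PB_PER_APPLIANCE / DATA_REDUCTION
appliances = MAX_NODES // NODES_PER_APPLIANCE
cluster_effective_pb = EFFECTIVE_PB_PER_APPLIANCE * appliances

print(f"Raw usable per appliance: {raw_pb_per_appliance:.1f} PB")  # 0.7 PB
print(f"Max cluster effective:    {cluster_effective_pb:.1f} PB")  # 11.2 PB,
# in line with the roughly 11.3 PB quoted for an eight-node cluster
```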

Five capacity models are available: PowerStore 1000 (384 TB), PowerStore 3000 (768 TB), PowerStore 5000 (1,152 TB), PowerStore 7000 (1,536 TB) and PowerStore 9000 (2,560 TB). PowerStore X models come with the VMware hypervisor and AppsOn; PowerStore T models do not include those features.

“I actually like the PowerStore X a lot more than I ever thought I would,” said Alan Hunt, the director of network operations for Detroit-based law firm Dickinson Wright. Hunt is running a PowerStore X and PowerStore T in beta to simulate live production. He said PowerStore will help Dickinson Wright to incorporate new storage with existing SC Series and retire PS Series arrays.

“We did a lot of testing and migrating of live workloads with the AppsOn feature, and that was excellent. We’re running simulated workloads and don’t have anything in production [on PowerStore], but I want to jump on it immediately. I take systems and try to blow them up, and this was definitely the most stable beta test I’ve ever done,” Hunt said.

Dell EMC initially said it would converge features of its multiple midrange arrays in 2019. The product launch was slated for Dell Tech World in Las Vegas in May, but that event was cancelled due to the coronavirus. Dell said it will have a virtual show later this year but has not specified dates.

Gordon said PowerStore systems started shipping in April.


Avoid common pain points when migrating to Exchange Online

A migration from on-premises Exchange to Office 365 is more than just a matter of putting mailboxes into Microsoft’s cloud. There are several factors that can slow this type of project, and some issues won’t surface until after you think the project is done.

There are quite a few organizations still running an Exchange Server platform, but many of them are looking at migrating to Exchange Online and handing over some of the administrative burden to Microsoft. In my experience, there are four common problems that organizations can avoid. With a little preparation, you can sidestep these stumbling blocks and make the experience a positive one for both IT and the end user.

Update on-premises software

Near the top of the list of common issues is not having the current versions of software running on premises.

Active Directory, on-premises Exchange, Outlook, Windows clients and servers all need to be up to date to give your organization the best possible migration experience. At one time, Microsoft’s posture was more forgiving and would support older software, but today, the company wants all software that touches Exchange to be on the latest version. Some of the older Office suites will still work, but only with basic functionality, and end users will miss out on newer features, such as Focused Inbox.

That many enterprises struggle with keeping their software current isn’t a surprise, because it’s difficult to patch and deploy updates in a timely fashion. In some cases, organizations depend on third-party software that is rarely updated and may have compatibility issues with a frequent update schedule. There is no easy solution for these problems. But as IT pros, we need to sort through the updates and find a way to get all that software on the latest release.

Understand mail flow scenarios

The next area that hinders a lot of organizations migrating to Exchange Online is not understanding the different ways to set up mail flow into and out of Microsoft’s hosted email platform.


Microsoft designed Office 365 and Exchange Online to be very flexible with regard to supporting different mail flow scenarios. Email can go to on-premises Exchange first, then into Exchange Online. Mail can also go to Exchange Online first, then flow to the on-premises Exchange servers.

During a hybrid migration, the most common scenario is to leave the mail flow configuration to reach the on-premises Exchange Server first, then use the hybrid configuration to forward email to mailboxes in the Microsoft cloud via the hybrid routing address. This hybrid routing address, which looks something like user@contoso.mail.onmicrosoft.com, is an attribute of the on-premises Active Directory account.

When you set up an Exchange hybrid deployment and move mailboxes properly, that address is automatically added to the user’s account. This mail flow arrangement tends to work very well, but if that address is not added to the user’s account, mail flow won’t work for that user.
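One way to catch this before mail flow breaks is to audit the proxyAddresses attribute in on-premises Active Directory. Below is a minimal sketch using the Python ldap3 library; the server, credentials, base DN and routing domain are all placeholders for your environment:

```python
# Sketch: flag AD users whose proxyAddresses lack the hybrid routing address.
# Requires the ldap3 package; every name below is a placeholder assumption.
from ldap3 import Server, Connection, SUBTREE

ROUTING_DOMAIN = "contoso.mail.onmicrosoft.com"  # your tenant's routing domain

server = Server("dc01.contoso.local")
conn = Connection(server, user="CONTOSO\\svc-audit", password="...", auto_bind=True)

conn.search("dc=contoso,dc=local",
            "(&(objectClass=user)(mail=*))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "proxyAddresses"])

for entry in conn.entries:
    attrs = entry.entry_attributes_as_dict
    addresses = [a.lower() for a in attrs.get("proxyAddresses", [])]
    if not any(a.endswith("@" + ROUTING_DOMAIN) for a in addresses):
        print("Missing hybrid routing address:", attrs.get("sAMAccountName"))
```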

Another popular option is to route email through Office 365 first, then to your on-premises mailboxes. This option puts Exchange Online Protection as the gatekeeper in front of all your organization’s mailboxes.

Ultimately, your decision comes down to what other services your organization has in that mail flow path. Some organizations use third-party antivirus products, some use a vendor’s encryption services, while others depend on a particular discovery application. Any of those third-party services may be cloud-based or installed on premises. Some of the services need to be placed before your end-user mailboxes in the transport flow, while others need to be at the end of the transport flow. There is no one-size-fits-all configuration. Only when you fully understand all the pieces in your organization’s transport stack can you set up a mail flow that meets your needs.
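Whichever direction you choose, it is worth confirming where your MX record actually points before and after any change. Here is a quick sketch using the Python dnspython package, with a placeholder domain:

```python
# Sketch: check where a domain's MX records point, e.g. to confirm whether
# inbound mail enters via Exchange Online Protection or an on-premises
# gateway. Requires the dnspython package; the domain is a placeholder.
import dns.resolver

DOMAIN = "contoso.com"

for record in sorted(dns.resolver.resolve(DOMAIN, "MX"), key=lambda r: r.preference):
    host = str(record.exchange).rstrip(".")
    entry_point = ("Exchange Online Protection"
                   if host.endswith("mail.protection.outlook.com")
                   else "on-premises or third-party gateway")
    print(f"{record.preference:>3}  {host}  ->  {entry_point}")
```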

Understand authentication

A move to the cloud means added complexity to your end-user authentication process. Microsoft provides a wide range of authentication options for Office 365 and Exchange Online, but that flexibility also means there are many choices to make during your migration.

Active Directory Federation Services, password hash sync and pass-through authentication are where the authentication options start, but any of those options can be deployed with multifactor authentication, conditional access and a whole load of Azure Information Protection options. Add in some encryption and the migration process gets complicated quickly.

All these choices and security add-ons help protect the business, but it’s a complex undertaking. It takes some effort not only to settle on a particular authentication method but to implement it properly and do thorough testing to avoid an avalanche of help desk calls.

Understand accepted domains

Over time, many on-premises Exchange organizations tend to collect multiple accepted domains. Accepted domains are the domains (the part of the email address after the @ symbol) for which the Exchange organization accepts mail.

I see many customers have issues when they move mailboxes to the cloud because they forgot to verify all the accepted domains used on those mailboxes. This problem is simple to avoid: Review the accepted domains in your on-premises Exchange organization and make sure they are verified in your Office 365 tenant before migrating the mailboxes.
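The check itself is a simple set comparison. Here is a minimal sketch, assuming you have already exported the on-premises accepted domains and the tenant’s verified domains into two collections (the sample values are placeholders):

```python
# Sketch: find on-premises accepted domains not yet verified in the tenant.
# Both sets are placeholders; populate them from your on-premises Exchange
# organization and your Office 365 tenant before relying on the output.

onprem_accepted = {"contoso.com", "contoso.co.uk", "fabrikam.com"}
tenant_verified = {"contoso.com", "contoso.onmicrosoft.com"}

missing = sorted(onprem_accepted - tenant_verified)
if missing:
    print("Verify these domains in the tenant before migrating mailboxes:")
    for domain in missing:
        print(" -", domain)
else:
    print("All accepted domains are verified in the tenant.")
```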


For Sale – *NEW* ASUS P6X58D-E mobo, i7 960 (3.2GHz) CPU & 24GB Memory (6x4GB 1333)

Wow, this sold far quicker than I thought. Something tells me I undervalued this lot.

Yes of course it includes the Intel CPU heatsink & fan along with the rear I/O shield.

It’s been a while since I sold anything here. Can I take tmknight’s offer for the asking price, or am I breaking any forum rules by doing this?

Thinking about this logically, I should be asking ‘gamesaregood’ if he wants to match ‘tmknight’s’ offer; he did make an offer first.

Of course, I’m happy to receive any higher offers, but I am hedging my bets here.

Thanks.


How to fortify your virtualized Active Directory design

Active Directory is much more than a simple server role. It has become the single sign-on source for most, if not all, of your data center applications and services. This access control covers workstation logins and extends to clouds and cloud services.

Since AD is such a key part of many organizations, it is critical that it is always available and has the resiliency and durability to match business needs. Microsoft had enough foresight to set up AD as a distributed platform that can continue to function, with little or in some cases no interruption in services, even if parts of the system go offline. This was helpful when AD nodes were still physical servers that were often spread across multiple racks or data centers to avoid downtime. So, the question now becomes: what’s the right way to virtualize Active Directory design?

Don’t defeat the native AD distributed abilities

Active Directory is already a distributed platform, and virtualizing it carelessly can hinder that native distributed functionality. AD nodes can be placed on different hosts, and failover software will restart VMs if a host crashes, but what if your primary storage goes down? It’s one scenario you should not discount.

When you undertake the Active Directory design process for a virtualization platform, you must go beyond just a host failure and look at common infrastructure outages that can take out critical systems. One of the advantages of separate physical servers was the level of resiliency the arrangement provided. While we don’t want to abandon virtual servers, we must understand the limits and concerns associated with them and consider additional areas such as management clusters.

Management clusters are often slightly lower-tier platforms — normally still virtualized — that only contain management servers, applications and infrastructure. This is where you would want to place a few AD nodes, so they are outside of the production environment they manage. The caveat with a virtualized management cluster is that it can’t be placed on the same physical storage as production; doing so defeats the purpose of separating duties. You can use more cost-effective storage platforms such as a virtual storage area network for shared storage or even local storage.

Remember, this is infrastructure and not core production, so IOPS should not be as much of an issue because the goal is resiliency, not performance. This means local drives and RAID groups should be able to provide the IOPS required.

How to keep AD running like clockwork

One of the issues with AD controllers in a virtualized environment is time drift.

All computers have clocks and proper timekeeping is critical to both the performance and security of the entire network. Most servers and workstations get their time from AD, which helps to keep everything in sync and avoids Kerberos security login errors.

Physical AD servers usually get their time from an external time source; virtualized AD servers can get it from their hosts. Between checks, the AD servers keep time synchronized with the internal clock of the computer, which is based on CPU cycles.

When you virtualize a server, it no longer has a set number of CPU cycles to base its time on. That means time can drift until the server reaches out for an external time check to reset itself, and even that check can be off, which compounds the issue. Time drift can become stuck in a nasty loop because the virtualization hosts often get their time from Active Directory.

Your environment needs an external time source that is not dependent on virtualization to keep things grounded. While internet time sources are tempting, having the infrastructure reach out for time checks might not be ideal. A core switch or other key piece of networking gear can offer a dependable time source that is unlikely to be affected by drift due to its hardware nature. You can then use this time source as the sync source for both the virtualization hosts and AD, so all systems are on the same time that comes from the same source.
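To see how far a given machine has drifted from the agreed source, you can query that source directly and compare it with the local clock. Here is a minimal sketch using the Python ntplib package; the server address and alert threshold are placeholder assumptions:

```python
# Sketch: measure local clock offset against the environment's agreed time
# source. Requires the ntplib package; the address below is a placeholder
# for your core switch or other designated NTP source.
import ntplib

TIME_SOURCE = "10.0.0.1"
ALERT_THRESHOLD_SECONDS = 2.0  # Kerberos tolerates 5 minutes by default

response = ntplib.NTPClient().request(TIME_SOURCE, version=3)
print(f"Local clock offset: {response.offset:+.3f} seconds")

if abs(response.offset) > ALERT_THRESHOLD_SECONDS:
    print("Clock drift exceeds threshold; investigate time sync on this host.")
```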

Some people will insist on a single physical server in a virtualized data center for this reason. That’s an option, but one that is not usually needed. Virtualization isn’t something to avoid in Active Directory design, but it needs to be done with thought and planning to ensure the infrastructure can support the AD configuration. Management clusters are key to the separation of AD nodes and roles.

This does not mean that high availability (HA) rules for Hyper-V or VMware environments are not required. Both production and management environments should have HA rules to prevent AD servers from running on the same hosts.

Rules should be in place to ensure these servers restart first and have reserved resources for proper operations. Smart HA rules are easy to overlook as more AD controllers are added and the rules configuration is forgotten.
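Even a simple inventory check can catch forgotten rules. The sketch below flags hosts running more than one domain controller; the host-to-VM mapping is a hypothetical inventory export, not a live hypervisor API call:

```python
# Sketch: flag virtualization hosts running more than one AD domain
# controller. The inventory below is a hypothetical export; in practice you
# would pull it from your hypervisor's management tooling.

inventory = {
    "host-01": ["dc01", "app01", "web03"],
    "host-02": ["dc02", "dc03", "sql01"],  # two DCs on one host
    "host-03": ["file01"],
}
domain_controllers = {"dc01", "dc02", "dc03", "dc04"}

for host, vms in sorted(inventory.items()):
    dcs_here = sorted(set(vms) & domain_controllers)
    if len(dcs_here) > 1:
        print(f"{host}: multiple DCs on one host -> {', '.join(dcs_here)}")
```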

The goal is not to prevent outages from happening — that’s not possible. It is to have enough replicas and roles of AD in the right places so users won’t notice. You might scramble a little behind the scenes if a disruption happens, but that’s part of the job. The key is to keep customers moving along without them knowing about any of the issues happening in the background.


For Sale – Custom loop water-cooled PC – i9 9900K, 2080 Ti, 32GB 3200MHz RAM, 2TB NVMe

Selling as it only seems to be used as my work machine rather than for playing games and creating content as intended.

Built by myself in November 2019; the machine is only a few months old.

Only the best components were chosen when this was built.

Machine runs at 5GHz on all cores and the GPU never sees above 50C.

Motherboard – Asus Maximus Code

CPU – Intel Core i9-9900K with EK water block

GPU – MSI Ventus OC 2080 Ti with EK water block and nickel backplate

RAM – 32GB G.Skill Royal Silver 3200MHz

NVMe – 1TB WD Black

NVMe – 1TB Sabrent

PSU – Corsair 750W modular

EK nickel fittings

EK D5 standalone pump

Phanteks reservoir

6 Thermaltake Riing Plus fans with controllers

2 x 360mm x 45mm Alphacool radiators

Thermaltake acrylic tubes and liquid

Custom cables

I am based in Tadworth Surrey and the machine can be seen and inspected in person.


For Sale – Phanteks ITX Case & Thermaltake 730W Semi-Mod PSU – £59 delivered / 2 x Toshiba 1TB 7200 3.5″ SATA Drives – £25 delivered

The CPU is worth more than £70 IMHO. That puts it at less than the Ryzen 3 3200G, which is about £80 new, and the 2400G is a much better processor all-round.

I have seen some go for that on eBay, but you take your chances on there and there is no warranty, whereas mine is covered until May 2020.

In the interest of striking up a deal I am willing to drop to £225 delivered, but delivery will cost me at least £15 insured, so that’s really all the wiggle room I have with it on this occasion.

Just to confirm, this is the same set of components I have been using for the last 8 months with no problems (I am currently using it to type this reply on), so there is no issue with compatibility.

