Games are a source of joy, inspiration, and social connection. They have the power to bring us together, create empathy, and strengthen our social fabric. As we prepare for the next generation, our efforts to make gaming more inclusive, more immersive, more connected, and more social are as relevant and important as ever.
At Xbox, we listen to what you – players, game developers, and content creators – tell us you want from the future of gaming. Based on your feedback, we’re building a future where you and your friends can play the deepest, most immersive and interactive games ever created across your Xbox console, PC, and mobile devices.
At the dawn of the next generation, it’s important to be clear about what you can expect from the future of Xbox.
Our vision has one hero at the heart of it all: You.
And today, I want to share our commitments to you:
You will always be welcome. We are building Xbox for you—players from all walks of life, everywhere in the world. We want to make your Xbox community safe, accessible, and welcoming – a place where you can have fun. As we say in our community standards, harassment and hate take many forms, but none have a home on Xbox. Should you feel others are behaving in ways that violate the standards, our safety team will investigate your report and support you 24/7/365 around the globe. And we continue to accelerate new technology to reduce hate speech and toxicity, giving you the tools to create the safe gaming community you want to play in.
In addition to tools, we commit to bringing more diverse stories to Xbox for you to enjoy. We are empowering creators of diverse backgrounds to develop new stories, advocating for an authentic and respectful representation in games, and championing accessibility so that all can play. Additionally, more than 300,000 Xbox Ambassadors give their time and passion to making Xbox the best place to play and we invite all players to join us on that mission. We still have so much more work to do and will not stop until everyone who plays feels welcome, heard, and valued.
Your games will look and play best on Xbox Series X. Xbox Series X is designed to deliver a new level of fidelity, feel, performance and precision never seen before in console gaming. All games will look and play best on Xbox Series X – whether they come from our 15 Xbox Game Studios, like Halo Infinite, or from our world-class publisher and developer partners. Packing over 12 teraflops of GPU power, including new technologies like hardware-accelerated DirectX raytracing and variable rate shading, and with four times the processing power of an Xbox One X, Xbox Series X enables developers to provide you with transformative gaming experiences through richer, more dynamic living worlds, more realistic AI and animations, and support for higher frame rates of up to 120 FPS.
Xbox Series X also enables you to spend less time waiting and more time playing, as it virtually eliminates load times with the 40x boost in I/O throughput from last generation. With our custom next-generation SSD and Xbox Velocity Architecture, nearly every aspect of playing games is improved. Game worlds are larger, more dynamic and load in a flash, and fast travel is just that – fast. The Xbox Velocity Architecture also powers new platform capabilities like Quick Resume, which enables you to seamlessly switch between multiple titles and resume instantly from where you last left off without waiting through long loading screens. Right now, Xbox Series X is in the hands of our 15 Xbox Game Studios teams and thousands of third-party developers, empowering them to create a new generation of blockbuster games for you to enjoy.
You play new games day one with Xbox Game Pass. All Xbox Game Studios titles launch into Xbox Game Pass the same day as their global release, so you decide whether to purchase each game separately or play them all with your Xbox Game Pass membership. Xbox Game Studios franchises that will launch into Game Pass day one of release include Halo, Forza, Age of Empires, Gears of War, Minecraft, Hellblade, The Outer Worlds, Psychonauts, Microsoft Flight Simulator, State of Decay, Wasteland, Minecraft Dungeons and Sea of Thieves—and more new franchises in early development. So, when Halo Infinite launches, you and your friends can decide whether to purchase the game or play it with Xbox Game Pass.
You won’t be forced into the next generation. We want every Xbox player to play all the new games from Xbox Game Studios. That’s why Xbox Game Studios titles we release in the next couple of years—like Halo Infinite—will be available and play great on Xbox Series X and Xbox One. We won’t force you to upgrade to Xbox Series X at launch to play Xbox exclusives.
Your games will not be left behind, thanks to backward compatibility. You will be able to play four generations of games on Xbox Series X on day one. That makes it the largest launch lineup for any new console ever, with thousands of games to play. Our backward compatibility engineers have spent years devising innovative ways for modern, next-gen technology to make the games library you’re building today even better, at no additional cost and with no work from developers. It’s our intent for all Xbox One games that do not require Kinect to play on Xbox Series X at the launch of the console. And because of the unprecedented power of Xbox Series X, most of your favorite games will load faster and look and perform many times better on the new console.
Your Xbox One gaming accessories come into the future with you, too. The Xbox Elite Controller and Xbox Adaptive Controller both work on Xbox Series X, so you don’t have to purchase new controllers. We believe that your investments in gaming should move with you into the next generation.
You can buy games once at no added cost. With our new Smart Delivery technology, you don’t need to buy the same game twice – once for the current console generation and once for the next generation. You always have the best available version of supported games on whatever Xbox console you are playing on, at no additional cost. If you own a title that supports Smart Delivery like Destiny 2, Gears 5 and Halo Infinite, you automatically have access to the version that plays best on your Xbox console. Highly anticipated games from the world’s biggest developers, like Assassin’s Creed Valhalla, Cyberpunk 2077, Marvel’s Avengers and more have already committed to supporting Smart Delivery and more will be announced soon.
Xbox Play Anywhere digital titles also enable you to buy once and play on both Xbox consoles and Windows 10 PCs.
You choose how to jump into the next generation of gaming. We hear from you that you prefer choice and value. With Xbox All Access, you can get Xbox Series X and Xbox Game Pass Ultimate for a low monthly price with no up-front costs, no finance charges and no hidden fees. You get to enjoy an instant library of over 100 high-quality games, join friends with online multiplayer, and experience new Xbox Game Studios titles the day they release, including Halo Infinite, on the fastest, most powerful Xbox ever.
You are in control of a healthy and balanced gaming lifestyle. If you are a parent, guardian or caregiver, the new Xbox Family Settings app (Preview) for iOS and Android provides a simple and convenient way to create child accounts, customize family settings, and ensure that your kids have access to gaming that you feel is appropriate.
You will get more from your Xbox Game Pass Ultimate membership. Finally, today we’re announcing that this September, in supported countries, we’re bringing Xbox Game Pass and Project xCloud together at no additional cost for Xbox Game Pass Ultimate members. With cloud gaming in Game Pass Ultimate, you will be able to play over 100 Xbox Game Pass titles on your phone or tablet. And because Xbox Live connects across devices, you can play along with the nearly 100 million Xbox Live players around the world. So when Halo Infinite launches, you and your friends can play together and immerse yourselves in the Halo universe as Master Chief—anywhere you go and across devices.
Cloud gaming in Xbox Game Pass Ultimate means your games are no longer locked to the living room. You can connect more than ever with friends and family through gaming. And just like you do with your movie and music streaming services, when cloud gaming launches into Xbox Game Pass Ultimate, you can continue your game wherever you left off on any of your devices.
When you add it all up, Xbox Series X is the only next generation console that lets you play new blockbuster games at the highest console fidelity, enjoy the latest blockbuster games the day they launch for one monthly price, play four generations of your games at higher fidelity than ever before, and play with friends wherever you want across your TV, PC, and mobile device. And with Xbox All Access, you can jump into the next generation for one low monthly payment and no up-front costs.
The future of gaming has never been more exciting and limitless. It’s a future you’ll explore on your terms, not constrained by restrictive policies. Where your gaming legacy will not be left behind and where you will not be locked out of new exclusive Xbox Game Studios games even if you choose to stay with your current console for a while.
It’s a future where you and your friends play the most immersive, responsive, and vivid games together on every screen in your life, and where games reach across the world and bring you stories you’ve never experienced before. It’s a future in which you get more value from your games. And where everyone is welcome.
We hope you’ll join us next Thursday, July 23rd for the Xbox Games Showcase for the first look at the Halo Infinite Campaign and more.
Systems integrators can cut staffing costs and boost project success rates if they take a cue from the gig economy.
A Constellation Research study published this week reported gig economy IT projects used 30% fewer FTEs over time compared with projects staffed the traditional way. In addition, gig economy projects experienced a 9% failure rate versus the IT industry average of 70% to 81%, according to the market research firm. Large IT initiatives such as digital transformation projects are especially prone to encountering obstacles.
Gig economy staffing “lowers risk substantially by boosting IT success rates dramatically,” according to Constellation Research’s report, “How the Gig Economy Is Reshaping Tech Careers and IT Itself.” The report’s findings are based on Constellation Research’s analysis of project data from Gigster, a gig economy platform that focuses on IT staffing.
The research firm said it compared data from 190 Gigster projects with typical industry projects. Gigster helped fund the research.
From outsourcing to crowdsourcing
Gigster CEO Chris Keene said the gig approach — crowdsourcing IT personnel on demand — will “change the way systems integrators think about their talent.” The Constellation Research report, he added, suggests the application of gig economy best practices “could be as big for systems integrators as outsourcing was years ago.”
Outsourcing’s labor arbitrage made its mark in the systems integration field beginning in the 1990s. Today, crowdsourcing and what Keene refers to as “elastic staffing” also aim to reduce personnel expenses on IT projects. He said Gigster’s Innovation Management Platform, a SaaS offering, lets organizations assess talent based on the quality of the work an IT staffer performed on previous projects. The tool provides sentiment analysis, polling an IT staffer’s teammates and customers to gauge their satisfaction. The tool’s elastic staffing capabilities, meanwhile, are used to identify on-demand peer experts who review a project’s key deliverables. As an integrator reaches a project milestone, it can use the Gigster platform to conduct sentiment analysis and an expert review before moving on to the next phase.
The increase in staffing efficiency stems from elastic versus static approaches. The Constellation Research report noted personnel assigned to a traditional project tend to remain on the team, even when their skills are not in high demand during particular project phases.
“Activity shifts among project team members over time, resulting in a relatively inefficient model because underutilized people continue to add to the project budget and overhead,” the report observed. For example, architects may play their biggest role at the onset of a project, while demand for developers and QA personnel grows over the course of a project.
Keene said the gig economy approach avoids locking people into specialized projects for long periods of time, making the staffing process more agile. He said when organizations discuss agile approaches, they are typically referring to their delivery processes.
“Most agile projects are only agile in the way they drive their processes,” Keene said. “They are not agile in the way they resource those projects.”
Keene, meanwhile, attributed the increase in project success rates to the peer review function. He said bringing in an on-demand expert to review a project milestone helps avoid the “groupthink” that can derail an IT initiative.
Gigster is based in San Francisco.
SAP elevates partner-developed apps
SAP has revamped SAP App Center, an online marketplace where customers can purchase SAP-based apps developed by partners.
The SAP App Center now features an updated user experience to make it easier for customers to search for offerings based on the underlying SAP product used, certification, publisher and solution type, the vendor said. SAP unveiled the updated SAP App Center alongside several other new partner initiatives at the company’s Global Partner Summit, held online on June 3.
“The [SAP App Center] should be the place to go for all the customers we have. … And it should be the place to go for all of our account executives. We are going to make it easy for our customers to go there, and we are going to run campaigns to enable our customers to find what they need in the app center,” SAP chief partner officer Karl Fahrbach said during the Global Partner Summit event.
The marketplace currently features more than 1,500 partner-created offerings, according to SAP.
On the partner-facing side of the SAP App Center, the company added tools to publish SAP-based offerings and manage and track sales. “We are working very hard on reducing the time that it takes to publish an app on the app center,” Fahrbach said.
SAP said a new initiative, SAP Endorsed Apps, aims to bolster SAP partners’ software businesses by spotlighting partner apps and matching them with potential customers. SAP Endorsed Apps is an invitation-only initiative.
In addition to updating the SAP App Center, the company said it is focused on improving how partners approach SAP implementation projects. To that end, SAP introduced a set of standard processes, tools and reporting aids designed to facilitate implementations. Benefits include grants for educating partners’ consultants, incentives for partners that invest in customer success, and increased investments in partner learning and enablement, SAP said.
Fahrbach also said SAP is opening its pre-sales software demonstration environment to qualifying partners for free. Additionally, on July 1, SAP will offer partners one year of free access to SAP S/4HANA Cloud and Business ByDesign.
Channel partners find allies in backup and DR market
Several channel companies this week disclosed partnerships and distribution deals in the backup and disaster recovery (DR) market.
OffsiteDataSync, a J2 Global company offering DR and backup as a service, rolled out an expanded partnership with Zerto. The Zerto relationship lets OffsiteDataSync provide DRaaS options to a “broad spectrum of businesses,” according to OffsiteDataSync, which also partners with Veeam.
In another move, Otava, a cloud services company based in Ann Arbor, Mich., launched Otava Cloud Backup for Microsoft 365, partnering with Veeam. The Microsoft 365 SaaS offering, available for channel partners, follows the November 2019 launch of Veeam-based backup offerings such as Otava Cloud Connect, Otava-Managed Cloud Backup and Self-Managed Cloud Backup.
Meanwhile, Pax8, a cloud distributor based in Denver, added Acronis Cyber Protect to its roster of offerings in North America. The Acronis Cyber Protect service includes backup, DR, antimalware, cybersecurity and endpoint management tools.
A survey of IT professionals found 24% of businesses adapted to the COVID-19 pandemic without downtime, with 56% reporting two or fewer weeks of downtime. The study from Insight Enterprises, an integrator based in Tempe, Ariz., noted 40% of respondents said they had to develop or retool business resiliency plans in response to COVID-19. Insight also found IT departments are planning to invest in a range of health-related technologies, including smart personal hygiene devices (58%), contactless sensors (36%), infrared thermometers (35%) and thermal cameras (25%). A third of the respondents are looking into an IoT ecosystem that would let them pull together and analyze data gathered from those devices.
Research from Advanced, an application modernization services provider, revealed that about three-quarters of organizations have launched a legacy system modernization project but failed to complete the task. The company pointed to “a disconnect of priorities between technical and leadership teams” as an obstacle to getting projects over the finish line. Advanced’s 2020 mainframe modernization report also identified a broad push to the cloud: 98% of respondents cited plans to move legacy applications to the cloud this year.
IBM and solutions provider Persistent Systems are partnering to deploy IBM Cloud Pak offerings within the enterprise segment. Persistent Systems also launched a new IBM Cloud Pak deployment practice for migrating and modernizing IBM workloads within cloud environments.
Distributor Ingram Micro Cloud rolled out the Illuminate program for AWS Partner Network resellers. The Illuminate program provides partner enablement in the forms of coaching, marketing, sales and technical resources.
US Signal, a data center services provider based in Grand Rapids, Mich., said it will expand its cloud and data protection capabilities to include data centers in Oak Brook, Ill., and Indianapolis. The company already offers cloud and data protection in its Grand Rapids, Mich.; Southfield, Mich.; and Detroit data centers. The expanded services are scheduled for availability in July at the Oak Brook data center and in September at the Indianapolis facility.
ActivTrak Inc., a workforce productivity and analytics software company based in Austin, Texas, unveiled its Managed Service Provider Partner Program. The initial group of more than 25 partners, which spans North America, South America, Europe and Asia, includes Advanced Technology Group, Cloud Synergistics, Cyber Secure, EMD, NST, Nukke, Wahaya and Zinia. The three-tier program offers access to a single-pane-of-glass management console, MSP Command Center. The console lets partners log into customer accounts through single sign-on, investigate and address alerts, configure the application and troubleshoot issues within individual accounts, according to the company.
Tanium, a unified endpoint management and security firm based in Emeryville, Calif., formally rolled out its Tanium Partner Advantage program. The program launch follows partnership announcements with NTT, Cloudflare, Okta and vArmour.
Nerdio, a Chicago-based company that provides deployment and management offerings for MSPs, expanded its EMEA presence. The company launched a partnership with Sepago, an IT management consultancy based in Germany. Nerdio also appointed Bas van Kaam as its field CTO for Europe, the Middle East and Africa.
DLT and its parent company, distributor Tech Data, launched an online forum, GovDevSecOpsHub, which focuses on cybersecurity and the application development process in the public sector.
Kimble Applications, a Boston-based professional services automation company, appointed Steve Sharp as its chief operations and finance officer.
High Wire Networks, a cybersecurity service provider based in Batavia, Ill., named Travis Ray as its director of channel sales. Ray will look to build alliances with MSPs around delivering High Wire’s Overwatch Managed Security Platform as a Service offering, the company said.
Market Share is a news roundup published every Friday.
Nearly every system administrator has to deal with scheduled tasks. They are incredibly helpful for automating work based on various triggers, but they require a lot of manual effort to configure properly.
The benefit of scheduled tasks is you can build one with a deep level of sophistication with trigger options and various security contexts. But where complexity reigns, configuration errors can arise. When you’re developing these automation scripts, you can create a scheduled task with PowerShell to ease the process. Using PowerShell helps standardize the management and setup work involved with intricate scheduled tasks, which has the added benefit of avoiding the usual errors that stem from manual entry.
Build a scheduled task action
At a minimum, a scheduled task has an action, a trigger and a group of associated settings. Once you create the task, you also need to register it on the system. You need to perform each of these steps separately to create a single scheduled task.
To create the action, use the New-ScheduledTaskAction cmdlet, which specifies the command to run. Let’s create an action that gets a little meta and invokes a PowerShell script.
The command below gives an example of invoking the PowerShell engine and passing a script to it using all of the appropriate command line switches to make the script run noninteractively. The script file resides on the machine the scheduled task will be running on.
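A sketch of that action might look like the following; the script path is a placeholder, so substitute your own:

```powershell
# Launch the PowerShell engine noninteractively and hand it a script file.
# The path below is a placeholder for a script on the target machine.
$Action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -NonInteractive -ExecutionPolicy Bypass -File "C:\Scripts\MyScript.ps1"'
```

The -NoProfile and -NonInteractive switches keep the engine from loading profiles or waiting for input, which is what you want for an unattended task.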
Next, you need a trigger. You have several values available, but this task will use a specific time — 3 a.m. — to execute this script once. For a full list of options, check out the New-ScheduledTaskTrigger cmdlet help page.
$Trigger = New-ScheduledTaskTrigger -Once -At 3am
Next, create the scheduled task using the New-ScheduledTask command. This command requires a value for the Settings parameter, even if you’re not using any special settings. This is why you run New-ScheduledTaskSettingsSet to create an object to pass in here.
$Settings = New-ScheduledTaskSettingsSet
Create the scheduled task
After assigning all the objects as variables, pass each of these variables to the New-ScheduledTask command to create a scheduled task object.
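Assuming the $Action, $Trigger and $Settings variables created in the previous steps, the assembly looks like this:

```powershell
# Combine the action, trigger and settings into an in-memory task object.
$Task = New-ScheduledTask -Action $Action -Trigger $Trigger -Settings $Settings
```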
At this point, you have created a scheduled task object in memory. To add the scheduled task on the computer, you must register the scheduled task using the Register-ScheduledTask cmdlet.
The example below registers a scheduled task to run under a particular username. To run the task under a certain user’s context, you have to provide the password. It’s helpful to look at the documentation for the Register-ScheduledTask command to see all the options to use with this cmdlet.
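A sketch of that registration, with a placeholder task name and placeholder credentials:

```powershell
# Register the in-memory task object on this computer under a specific account.
# Task name, user and password here are placeholders -- use your own values.
Register-ScheduledTask -TaskName 'RunMyScript' -InputObject $Task `
    -User 'CONTOSO\svc-tasks' -Password 'P@ssw0rd!'
```

Once registered, the task appears in Task Scheduler like any manually created task.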
I have the following 8TB drives for sale, warranty is up on all of them but all working as they should. Reason for sale is I upgraded to 14TB drives.
Seagate ST8000DM002 No Warranty (ended August 2018) Power On Hours
Seagate ST8000VN002 No Warranty (ended April 2019) Power On Hours 18579
Seagate ST8000VN002 No Warranty (ended April 2019) Power On Hours 18584
£125.00 each shipped RMSD mainland UK
Have a 6TB WD Red potentially to list once I get around to it.
Sold
I have a 14TB Western Digital Red drive for sale. It came out of an external duo drive, but it’s a red label with 3 years warranty (will get exact warranty date). Reason for sale is I upgraded all of my NAS drives and this one was left over/not needed. Opened but unused.
£280.00 shipped RMSD mainland UK
Sold to alitech £440
Seagate ST8000DM002 No Warranty (ended August 2018) Power On Hours 926
Seagate ST8000DM002 No Warranty (ended August 2018) Power On Hours 939
Western Digital WD80EFZX No Warranty (ended July 2019) Power On Hours 973
Western Digital WD80EFZX No Warranty (ended July 2019) Power On Hours 953
Enterprises are watching the development of the Kubernetes Cluster API project, which they hope will evolve into a declarative multi-cloud deployment standard for container infrastructure.
With a declarative API, developers can describe the desired outcome and the system handles the rest. Kubernetes today requires users to deploy a series of such APIs separately for each cloud provider and on-premises IT environment. This makes it difficult to take a cohesive, consistent approach to spinning up multiple clusters, especially in multi-cloud environments. Existing Kubernetes deployment procedures may also offer so many configuration options that it’s easy for end users to overcomplicate installations.
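As a rough sketch of what that declarative style looks like, a Cluster API manifest describes the cluster you want rather than the steps to build it. The names below are illustrative, and the v1alpha3 schema shown here was still evolving during the project's alpha phase:

```yaml
# Declarative cluster definition: you state the "what", the controllers handle the "how".
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  # Provider-specific details live in a separate, referenced object,
  # so the same top-level shape works across clouds.
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: demo-cluster
```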
Enterprises that have taken a declarative, also known as immutable, approach to other layers of the IT infrastructure as they adopt DevOps want to enforce the same kind of simple, repeatable standards for Kubernetes clusters through a standard declarative API. Some IT shops have struggled and failed to implement their own APIs for those purposes, and say the community effort around Kubernetes Cluster API has better potential to achieve those goals than their individual projects.
One such company, German IT services provider Giant Swarm, created its own Kubernetes deployment API in 2017 to automate operations for more than 200 container clusters it manages for customers in multiple public clouds. It used a central Kubernetes management cluster fronted by the RESTful API to connect to Kubernetes Operators within each workload cluster. Eventually, though, Giant Swarm found that system too difficult to maintain as Kubernetes and cloud infrastructures continually changed.
“Managing an additional REST API is cumbersome, especially since users have to learn a new [interface],” said Marcel Müller, platform engineer at Giant Swarm, in an online presentation at a virtual IT conference held by API platform vendor Kong last month. “We had to restructure our API quite often, and sometimes we didn’t have the resources or knowledge to make the right long-term [architectural] decisions.”
Switching between cloud providers proved especially confusing and painful for users, since tooling is not transferable between them, Müller said.
“The conclusion we got to by early 2019 was that community collaboration would be really nice here,” he said. “A Kubernetes [special interest group] would take care of leading this development and ensuring it’s going in the correct direction — thankfully, this had already happened because others faced similar issues and had come to the same conclusion.”
That special interest group (SIG), SIG-Cluster-Lifecycle, was formed in late 2017, and created Cluster API as a means to standardize Kubernetes deployments in multiple infrastructures. That project issued its first alpha release in March 2019, as Müller and his team grew frustrated with their internal project, and Giant Swarm began to track its progress as a potential replacement.
Cluster API installs Kubernetes across clouds using MachineSets, which are similar to the Kubernetes ReplicaSets Giant Swarm already uses. Users can also manage Cluster API through the familiar kubectl command line interface, rather than learning to use a separate RESTful API.
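An abridged sketch of that model: a MachineDeployment (which manages MachineSets, much as a Deployment manages ReplicaSets) declares a pool of worker machines, and you apply and scale it with ordinary kubectl commands. The names and version fields are illustrative, and a real manifest also references bootstrap and infrastructure templates:

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: demo-cluster-workers
spec:
  clusterName: demo-cluster
  replicas: 3           # scale workers with: kubectl scale machinedeployment ...
  template:
    spec:
      clusterName: demo-cluster
      version: v1.18.0  # desired Kubernetes version for these machines
```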
Still, the project is in an early alpha phase, according to its GitHub page, and therefore changing rapidly; as an experimental project, it isn’t necessarily suited for production use yet. Giant Swarm will also need to transition gradually to Cluster API to ensure the stability of its Kubernetes environment, Müller said.
Cluster API bridges Kubernetes multi-cloud gap
Cluster API is an open source alternative to the centralized Kubernetes control planes offered by several IT vendors, such as Red Hat OpenShift, Rancher and VMware Tanzu. Some enterprises may prefer to let a vendor tackle the API integration problem and leave support to them as well. In either case, the underlying problem at hand is the same — as enterprise deployments expand and mature, they need to control and automate multiple Kubernetes clusters in multi-cloud environments.
For some users, multiple clusters are necessary to keep workloads portable across multiple infrastructure providers; others prefer to manage multiple clusters rather than deal with challenges that can emerge in Kubernetes networking and multi-tenant security at large scale. The core Kubernetes framework does not address this.
“[Users] need a ‘meta control plane’ because one doesn’t just run a single Kubernetes cluster,” said John Mitchell, an independent digital transformation consultant in San Francisco. “You end up needing to run multiple [clusters] for various reasons, so you need to be able to control and automate that.”
Before vendor products and Cluster API emerged, many early container adopters created their own tools similar to Giant Swarm’s internal API. In Mitchell’s previous role at SAP Ariba, the company created a project called Cobalt to build, deploy and operate application code on bare metal, AWS, Google Cloud and Kubernetes.
Mitchell isn’t yet convinced that Cluster API will be the winning approach for the rest of the industry, but it’s at least in the running.
“Somebody in the Kubernetes ecosystem will muddle their way to something that mostly works,” he said. “It might be Cluster API.”
SAP’s Concur Technologies subsidiary, meanwhile, created Scipian to watch for changes in Kubernetes custom resource definitions (CRDs) made as apps are updated. Scipian then launches Terraform jobs to automatically create, update and destroy Kubernetes infrastructure in response to those changes, so that Concur ops staff don’t have to manage those tasks manually. Scipian’s Terraform modules work well, but Cluster API might be a simpler mechanism once it’s integrated into the tool, said Dale Ragan, principal software design engineer at the expense management SaaS provider based in Bellevue, Wash.
“Terraform is very amenable to whatever you need it to do,” Ragan said. “But it can be almost too flexible for somebody without in-depth knowledge around infrastructure — you can create a network, for example, but did you create it in a secure way?”
With Cluster API, Ragan’s team may be able to enforce Kubernetes deployment standards more easily, without requiring users to have a background in the underlying toolset.
“We created a Terraform controller so we can run existing modules using kubectl [with Cluster API],” Ragan said. “As we progress further, we’re going to use CRDs to replace those modules … as a way to create infrastructure in ‘T-shirt sizes’ instead of talking about [technical details].”
As medical researchers around the world race to find answers to the COVID-19 pandemic, they need to gather as much clinical data as possible for analysis.
A key challenge many researchers face with clinical data is privacy and the mandate to protect confidential patient information. One way to overcome that privacy challenge is by using synthetic data, an approach that creates data that is not linked to personally identifiable information. Rather than encrypting or attempting to anonymize data to protect privacy, synthetic data represents a different approach that can be useful for medical researchers.
With synthetic data there are no real people; rather, the data is a synthetic copy that is statistically comparable but entirely composed of fictional patients, explained Ziv Ofek, founder and CEO of health IT vendor MDClone, based in Beersheba, Israel.
Other popular methods of protecting patient privacy, such as anonymization and encryption, aim to balance patient privacy and data utility. However, a privacy risk still remains because embedded within the data, even after diligent attempts to protect privacy, are real people, Ofek argued.
“There are no real people embedded within the synthetic data,” Ofek said. “Instead, the data is a statistical representation of the original and the risk of reidentification is no longer relevant, even though it may appear as real people and can be analyzed as if it were and yielding the same conclusions.”
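The principle Ofek describes can be shown with a deliberately simple sketch: fit summary statistics on real records, then sample entirely fictional patients from those statistics. (MDClone's actual engine is far more sophisticated; this only illustrates the idea that no real row is ever copied.)

```python
import random
import statistics

# Toy "real" records; in practice these would come from a clinical system.
real_patients = [
    {"age": 71, "days_in_icu": 9},
    {"age": 64, "days_in_icu": 4},
    {"age": 58, "days_in_icu": 2},
    {"age": 80, "days_in_icu": 14},
]

def synthesize(rows, n, seed=0):
    """Sample n fictional rows from per-column statistics of the real rows."""
    rng = random.Random(seed)
    cols = rows[0].keys()
    stats = {c: (statistics.mean(r[c] for r in rows),
                 statistics.stdev(r[c] for r in rows)) for c in cols}
    # Each synthetic row is drawn from the fitted distribution,
    # so it corresponds to no real person.
    return [{c: max(0, round(rng.gauss(*stats[c]))) for c in cols}
            for _ in range(n)]

synthetic = synthesize(real_patients, n=100)
print(len(synthetic), sorted(synthetic[0]))
```

Statistically, the synthetic sample's column means track the originals, which is what lets researchers analyze it "as if it were" real data while the risk of reidentifying any individual patient disappears.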
Synthetic data in the real world
MDClone’s synthetic data technology is being used by Sheba Medical Center in Tel Aviv as part of its COVID-19 research.
The MDClone system is critical to his organization’s data efforts to gain more insights into COVID-19, the disease caused by the novel coronavirus, said Eyal Zimlichman, M.D., deputy director general, chief medical officer and chief innovation officer at Sheba Medical.
By regulation, synthetic data is not considered patient data and therefore is not subject to the institutional review board (IRB) process. Unlike real patient data, synthetic data can be accessed freely by researchers, Ofek noted, so long as the institution agrees to provide access.
“Synthetic data provides an opportunity to get quick answers to data-related questions without the need for an IRB approval,” Zimlichman said. “It also allows users to work on the data in their own environment, something we do not allow with real data.”
Zimlichman added that data science groups both within and outside the hospital are using the MDClone system to help predict COVID-19 patient outcomes, as well as to aid in determining a course of action for therapy.
Synthetic data accelerates time to insight
The MDClone platform includes a data engine for collecting and organizing patient data, the discovery studio for analysis and the Synthetic Data Engine for creating data. The vendor on April 14 released the MDClone Pandemic Response Package, which includes a predefined set of visualizations and analyses that are COVID-19-specific. The engine enables clients and networks to ask questions of COVID-19-related data and generate meaningful analysis, including cohort and population-level insights.
In the event a client wants to use their data to share, compare and collaborate with others, they can convert their original data into a synthetic copy for shared review and insight development.
“A synthetic collaboration model allows for that conversation to take place with data flows and analysis performed across both systems without patient privacy and security risks,” Ofek said.
Ofek added that the synthetic model and platform access capability enables clients to invite research and collaboration partners into their data environment rather than simply sharing files on demand. With MDClone, the client’s research and collaboration partners are able to log in to the MDClone data lake and then get access to the data and exploration tools with synthetic output.
“In the context of the pandemic, organizations leveraging the platform can offer partners unfettered synthetic access to accelerate exploration into new avenues for treatment,” Ofek said. “Idea generation and data reviews that enable real-world analysis is our pathway to finding and broadcasting the best healthcare professionals can offer as we combat the disease.”
When the CEO realizes they deleted a vital email thread three weeks ago, email recovery suddenly becomes an urgent task. Sure, you can look in the Deleted Items folder in Outlook, but beyond that, how can you recover what has undergone “permanent” deletion? In this article, we review how you can save the day by bringing supposedly unrecoverable email back from the great beyond.
Before we continue, we know that for all Microsoft 365 admins security is a priority. And in the current climate of COVID-19, it’s well documented how hackers are working around the clock to exploit vulnerabilities. As such, we assembled two Microsoft experts to discuss the critical security features in Microsoft 365 you should be using right now in a free webinar on May 27. Don’t miss out on this must-attend event – save your seat now!
Now onto saving your emails!
Deleted Email Recovery in Microsoft and Office 365
Email recovery for Outlook in Exchange Online through Microsoft and Office 365 can be as simple as dragging and dropping the wayward email from the Deleted Items folder to your Inbox. But what do you do when you can’t find the email you want to recover?
First, let’s look at how email recovery is structured in Microsoft 365. There are a few more layers here than you might think! In Microsoft 365, deleted email can be in one of three states: Deleted, Soft-Deleted, or Hard-Deleted. The way you recover email and how long you have to do so depends on the email’s delete status and the applicable retention policy.
Let’s walk through the following graphic and talk about how email gets from one state to another, the default policies, how to recover deleted email in each state, and a few tips along the way.
Items vs. Email
Outlook is all about email yet also has tasks, contacts, calendar events, and other types of information. For example, you can delete calendar entries and may be called on to recover them, just like email. For this reason, the folder for deleted content is called “Deleted Items.” Also, when discussing deletions and recovery, it is common to refer to “items” rather than limiting the discussion to just email.
Various rules control the retention period for items in the different states of deletion. A policy is an automatically applied action that enforces a rule related to services. Microsoft 365 has hundreds of policies you can tweak to suit your requirements. See Overview of Retention policies for more information.
‘Deleted Items’ Email
When you press the Delete key on an email in Outlook, it’s moved to the Deleted Items folder. That email is now in the “Deleted” state, which simply means it moved to the Deleted Items folder. How long does Outlook retain deleted email? By default – forever! You can recover your deleted mail with just a drag and drop to your Inbox. Done!
If you can’t locate the email in the Deleted Items folder, double-check that you have the Deleted Items folder selected, then scroll to the bottom of the email list. Look for the following message:
If you see the above message, your cache settings may be keeping only part of the content in Outlook, with the rest in the cloud. The cache helps to keep mailbox sizes lower on your hard drive, which in turn speeds up search and load times. Click on the link to download the missing messages.
But I Didn’t Delete It!
If you find content in the Deleted Items folder and are sure you did not delete it, you may be right! Administrators can set a Microsoft 365 policy to delete old Inbox content automatically.
Mail can ‘disappear’ another way. Some companies enable a personal archive mailbox for users. When enabled, by default, any mail two years or older will “disappear” from your Inbox and the Deleted Items folder. However, there is no need to worry. While apparently missing, the email has simply moved to the Archives Inbox. A personal Archives Inbox shows up as a stand-alone mailbox in Outlook, as shown below.
As a result, it’s a good idea to search the Archives Inbox, if present, when looking for older messages.
Another setting to check is one that deletes email when Outlook is closed. Access this setting in Outlook by clicking “File,” then “Options,” and finally “Advanced” to display this window:
If enabled, Outlook empties the Deleted Items folder when it closes. The deleted email then moves to the ‘soft-delete’ state, which is covered next. Keep in mind that with this setting, all deleted emails will be permanently deleted after 28 days.
The next stage in the process is Soft-Deleted. Soft-deleted email is no longer in the Deleted Items folder but is still easily recovered. At a technical level, the mail is deleted locally from Outlook and placed in the Exchange Online folder named Deletions, a subfolder of Recoverable Items. Any content in the Recoverable Items folder in Exchange Online is, by definition, considered soft-deleted.
There are three ways to soft-delete mail or other Outlook items.
Delete an item already in the Deleted Items folder. When you manually delete something that is already in the Deleted Items folder, the item is soft-deleted. Any process, manual or otherwise, that deletes content from this folder results in a soft-delete.
Pressing Shift + Delete on an email in your Outlook Inbox brings up a dialog box asking if you wish to “permanently” delete the email. Clicking Yes removes the email from the Deleted Items folder but only performs a soft-delete. You can still recover the item if you do so within the 14-day retention period.
The final way items can be soft-deleted is by using Outlook policies or rules. By default, there are no policies that will automatically remove mail from the Deleted Items folder in Outlook. However, users can create rules that ‘permanently’ delete (that is, soft-delete) email. If you’re troubleshooting missing email, have the user check for such rules: click Rules on the Home menu and examine any created rules in the Rules Wizard shown below.
Note that the caution is a bit misleading as the rule’s action will soft-delete the email, which, as already stated, is not an immediate permanent deletion.
Recovering Soft-Deleted Mail
You can recover soft-deleted mail directly in Outlook. Be sure the Deleted Items folder is selected, then look for “Recover items recently removed from this folder” at the top of the mail column, or the “Recover Deleted Items from Server” action on the Home menu bar.
Clicking on the recover items link opens the Recover Deleted Items window.
Click on the items you want to recover or Select All, and click OK.
NOTE: The recovered email returns to your Deleted Items folder. Be sure to move it into your Inbox.
If the email you’re looking for is not listed, it could have moved to the next stage: ‘Hard-Deleted.’
While users can recover soft-deleted email, administrators can also recover soft-deleted email on their behalf using the ‘Hard-Deleted’ email recovery process described next (which works for both hard and soft deletions). Also, for those who would rather script these tasks, Microsoft provides two PowerShell cmdlets that are very useful in this process: you can use Get-RecoverableItems and Restore-RecoverableItems to search for and restore soft-deleted email.
The next stage for deletion is ‘Hard-Delete.’ Technically, items are hard-deleted when they move from the Recoverable Items folder to the Purges folder in Exchange Online. Administrators can still recover items in that folder within the recovery period set by policy, which ranges from 14 days (the default) to 30 days (the maximum). You can extend retention beyond 30 days by placing a legal or litigation hold on the item or mailbox.
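The stages and retention windows described in this article can be summarized in a small model. The numbers mirror the defaults stated above (14 days for soft-deleted items, a 14- to 30-day policy window for hard-deleted items, holds that preserve content); this is a simplification for illustration, not Exchange Online's exact bookkeeping.

```python
# Toy model of the deletion pipeline: Deleted -> Soft-Deleted ->
# Hard-Deleted -> purged, with who can still recover the item at each point.

def who_can_recover(state, age_days, hard_window=14, on_hold=False):
    """Return who can still recover an item, age_days since first deletion."""
    if on_hold:
        return "preserved"              # legal/litigation hold extends retention
    if state == "deleted":
        return "user"                   # sits in Deleted Items indefinitely
    if state == "soft-deleted" and age_days <= 14:
        return "user"                   # restorable via Recover Deleted Items
    if state in ("soft-deleted", "hard-deleted") and age_days <= 14 + hard_window:
        return "admin"                  # aged into the hard-deleted stage
    if state in ("soft-deleted", "hard-deleted"):
        return "support-ticket"         # purged; Microsoft support is the last resort
    raise ValueError(f"unknown state {state!r}")

print(who_can_recover("soft-deleted", 10))   # user
print(who_can_recover("hard-deleted", 40))   # support-ticket
```

The takeaway matches the article's flow: the later the stage and the older the item, the fewer people can help, and only a hold stops the clock.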
How items become Hard-Deleted
There are two ways content becomes hard-deleted.
By policy, soft-deleted email is moved to the hard-deleted stage when the retention period expires.
Users can hard-delete mail manually by selecting the Purge option in the Recover Deleted Items window shown above. (Again, choosing to ‘permanently delete’ mail with Shift + Del results in a soft-delete, not a hard-delete.)
Recovering Hard-Deleted Mail
Once email enters the hard-delete stage, users can no longer recover the content. Only service administrators with the proper privileges can initiate recovery, and no administrators have those privileges by default, not even the global admin. The global admin does have the right to assign privileges so that they can give themselves (or others) the necessary rights. Privacy is a concern here since administrators with these privileges can search and export a user’s email.
Microsoft’s online documentation Recover deleted items in a user’s mailbox details the step-by-step instructions for recovering hard-deleted content. The process is a bit messy compared to other administrative tasks. As an overview, the administrator will:
Assign the required permissions.
Search the Inbox for the missing email.
Copy the results to a Discovery mailbox, where you can view mail in the Purged folder (optional).
Export the results to a PST file.
Import the PST to Outlook on the user’s system and locate the missing email in the Purged folder.
Last Chance Recovery
Once hard-deleted items are purged, they are no longer discoverable by any method by users or administrators. You should consider the recovery of such content as unlikely. That said, if the email you are looking for is not recoverable by any of the above methods, you can open a ticket with Microsoft 365 Support. In some circumstances, they may be able to find the email that has been purged but not yet overwritten. They may or may not be willing to look for the email, but it can’t hurt to ask, and it has happened.
What about using Outlook to back up email?
Outlook does allow a user to export email to a PST file. To do this, click “File” in the Outlook main menu, then “Import & Export” as shown below.
You can specify what you want to export and even protect the file with a password.
While useful from time to time, a backup plan that depends on users manually exporting content to a local file doesn’t scale and isn’t reliable. Consequently, don’t rely on this as a possible backup and recovery solution.
After reading this, you may be thinking, “isn’t there an easier way?” A service like Altaro Office 365 Backup allows you to recover from point-in-time snapshots of an inbox or other Microsoft 365 content. Having a service like this when you get that urgent call to recover a mail from a month ago can be a lifesaver.
Users can recover most deleted email without administrator intervention. Often, deleted email simply sits in the Deleted Items folder until manually cleared. When that occurs, email enters the ‘soft-deleted’ stage and is easily restored by a user within 14 days. After this period, the item enters the ‘hard-deleted’ state. A service administrator can recover hard-deleted items within the recovery window. After the hard-deleted state, email should be considered unrecoverable. Policies can be applied to extend the retention times of deleted mail in any state. While administrators can go far with the web-based administration tools, the entire recovery process can be scripted with PowerShell to customize and scale larger projects or provide granular discovery. It is always a great idea to use a backup solution designed for Microsoft 365, such as Altaro Office 365 Backup.
Finally, if you haven’t done so already, remember to save your seat on our upcoming must-attend webinar for all Microsoft 365 admins:
Is Your Office 365 Data Secure?
Did you know Microsoft does not back up Office 365 data? Most people assume their emails, contacts and calendar events are saved somewhere but they’re not. Secure your Office 365 data today using Altaro Office 365 Backup – the reliable and cost-effective mailbox backup, recovery and backup storage solution for companies and MSPs.
Data centers have become an important part of our data-driven world. They act as a repository for servers, storage systems, routers and all manner of IT equipment and can stretch as large as an entire building, especially in an age of AI that requires advanced computing.
Establishing how much power these data centers utilize and the environmental impact they have can be difficult, but according to a recent paper in Science Magazine, the entire data center industry in 2018 utilized an estimated 205 TWh. This roughly translates to 1% of global electricity consumption.
Enterprises that utilize large data centers can use AI, advancements in storage capacity and more efficient servers to mitigate the power required for the necessary expansion of data centers.
The rise of the data center
Collecting and storing data is fundamental to business operation, and while having your own infrastructure can be costly and challenging, having unlimited access to this information is crucial to advancements.
Because of their massive size, the data centers of tech giants like Google and Amazon provoke the most coverage, often requiring the same amount of energy as small towns. But there is more behind these numbers, according to Eric Masanet, associate professor of Mechanical Engineering and Chemical and Biological Engineering at Northwestern University and coauthor of the aforementioned article.
The last detailed estimates of global data center energy use appeared in 2011, Masanet said.
Since that time, Masanet said, there have been many claims that the world’s data centers were requiring more and more energy. This has given policymakers and the public the impression that data centers’ energy use and related carbon emissions have become a problem.
Counter to this, Masanet and his colleagues’ studies on the evolution of storage, server and network technology found that efficiency gains have significantly mitigated the growth in energy usage in this area. From 2010 to 2018, compute instances went up by 550%, while energy usage increased just 6% in the same time frame. While data center energy usage is on the rise, it has been curbed dramatically through the development of different strategies.
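A quick back-of-the-envelope check shows how dramatic that efficiency gain is: if compute instances grew 550% (6.5x) while total energy use grew only 6% (1.06x), energy per compute instance fell by roughly 84%.

```python
# Implied per-instance efficiency gain from the 2010-2018 figures above.
compute_growth = 1 + 550 / 100   # 6.5x as many compute instances
energy_growth = 1 + 6 / 100      # 1.06x total energy use

energy_per_instance = energy_growth / compute_growth
print(f"energy per instance vs 2010: {energy_per_instance:.2f}x "
      f"({1 - energy_per_instance:.0%} reduction)")
```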
Getting a step ahead of the data center footprint
The factors behind these moderated energy increases are all tied to advancements in technology. Servers have become more efficient, and the partitioning of servers through virtualization has curbed the energy required for the rapid growth of compute instances.
A similar trend is noticeable in the storage of data. While demand has significantly increased, the combination of storage-drive efficiencies and densities has limited the total increase of global storage energy usage to just threefold. To further curb the rising desire for more data, and therefore the rising energy costs and environmental impact, companies are integrating AI when designing their data centers.
“You certainly could leverage AI to analyze utility consumption data and optimize cost,” said Scott Laliberte, a managing director with Protiviti and leader of the firm’s Emerging Technologies practice.
“The key for that would be having the right data available and developing and training the model to optimize the cost.”
By having AI collect data on their data centers and optimize energy usage, these companies can help mitigate power costs, especially for cooling, one of the most costly and energy-intensive processes within data centers.
“The strategy changed a little bit — like trying to build data centers below ground or trying to be near water resources,” said Juan José López Murphy, Technical Director and Data Science Practice Lead at Globant, a digitally native services company.
But cooling these data centers has been such a large part of their energy usage that companies have had to be creative. Companies like AWS and GCP are trying new locations like the middle of the desert or underground and trying to develop cooling systems that are based on water and not just air, Murphy said.
Google utilizes an algorithm at some of its data centers that learns from gathered data and limits energy consumption by adjusting cooling configurations.
For the time being, both the demand for data centers and their efficiency have grown. The advancement of servers and storage drives, as well as the implementation of AI in the building process, has almost matched the growing energy demand. This may not continue, however.
“Historical efficiency gains may not be able to outpace rapidly rising demand for data center services in the not-too-distant future,” Masanet said. “Clearly greater attention to data center energy use is warranted.”
The increased efficiencies have done well to stem the tide of demand, but the future of data centers’ energy requirements remains uncertain.
Databases have long been used for transactional and analytics use cases, but they also have practical utility to help enable machine learning capabilities. After all, machine learning is all about deriving insights from data, which is often stored inside a database.
San Francisco-based database vendor Splice Machine is taking an integrated approach to enabling machine learning with its eponymous database. Splice Machine is a distributed SQL relational database management system that includes machine learning capabilities as part of the overall platform.
Splice Machine 3.0 became generally available on March 3, bringing with it updated machine learning capabilities. It also has a new Kubernetes-based cloud-native model for cloud deployment and enhanced replication features.
In this Q&A, Monte Zweben, co-founder and CEO of Splice Machine, discusses the intersection of machine learning and databases and provides insight into the big changes that have occurred in the data landscape in recent years.
How do you integrate machine learning capabilities with a database?
Monte Zweben: The data platform itself has tables, rows and schema. The machine learning manager that we have native to the database has notebooks for developing models, Python for manipulating the data, algorithms that allow you to model, and model workflow management that allows you to track the metadata on models as they go through their experimentation process. And finally, we have in-database deployment.
So as an example, imagine a data scientist working in Splice Machine in the insurance industry. They have an application for claims processing, and they are building out models inside Splice Machine to predict claims fraud. There’s a function in Splice Machine called deploy, and what it will do is take a table and a model to generate database code. The deploy function builds a trigger on the database table that tells the table to call a stored procedure that has the model in it for every new record that comes into the table.
So what does this mean in plain English? Let’s say in the claims table, every time new claims would come in, the system would automatically trigger, grab those claims, run the model that predicts claims fraud and output those predictions into another table. And now, all of a sudden, you have real-time, in-the-moment machine learning that is detecting claims fraud on first notice of loss.
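The pattern Zweben describes (a trigger that scores each new row in-database) can be sketched with SQLite standing in for Splice Machine and a stub standing in for a trained model. This is an illustration of the general technique, not Splice Machine's deploy function itself.

```python
import sqlite3

def fraud_model(amount):
    # Stand-in for a trained model: flag implausibly large claims.
    return 1 if amount > 10_000 else 0

conn = sqlite3.connect(":memory:")
# Register the "model" as a SQL function the trigger can call in-database.
conn.create_function("predict_fraud", 1, fraud_model)
conn.executescript("""
    CREATE TABLE claims (id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE predictions (claim_id INTEGER, fraud INTEGER);
    CREATE TRIGGER score_claim AFTER INSERT ON claims
    BEGIN
        INSERT INTO predictions VALUES (NEW.id, predict_fraud(NEW.amount));
    END;
""")

conn.execute("INSERT INTO claims (amount) VALUES (250.0)")
conn.execute("INSERT INTO claims (amount) VALUES (50000.0)")
print(conn.execute("SELECT fraud FROM predictions ORDER BY claim_id").fetchall())
# [(0,), (1,)]
```

Each insert into claims is scored the moment it lands, with no application code in the loop, which is what makes the pattern "real-time, in-the-moment" scoring.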
What does distributed SQL mean to you?
Zweben: So at its heart, it’s about sharing data across multiple nodes. That provides you the ability to parallelize computation and gain elastic scalability. That is the most important distributed attribute of Splice Machine.
In our new 3.0 release, we just added distributed replication. It’s another element of distribution where you have secondary Splice Machine instances in geo-replicated areas, to handle failover for disaster recovery.
What’s new in Splice Machine 3.0?
Zweben: We moved our cloud stack for Splice Machine from an old Mesos architecture to Kubernetes. Now our container-based architecture is all Kubernetes, and that has given us the opportunity to enable the separation of storage and compute. You literally can pause Splice Machine clusters and turn them back on. This is a great utility for consumption-based usage of databases.
Along with our upgrade to Kubernetes, we also upgraded our machine learning manager from an older notebook technology called Zeppelin to a newer notebook technology that has really gained momentum in the marketplace, as much as Kubernetes has in the DevOps world. Jupyter notebooks have taken off in the data science space.
We’ve also enhanced our workflow management tool called MLflow, which is an open source tool that originated with Databricks, and we’re part of that community. MLflow allows data scientists to track their experiments and keeps that record of metadata available for governance.
What’s your view on open source and the risk of a big cloud vendor cannibalizing open source database technology?
Zweben: We do compose many different open source projects into a seamless and highly performant integration. Our secret sauce is how we put these things together at a very low level, with transactional integrity, to enable a single integrated system. This composition that we put together is open source, so that all of the pieces of our data platform are available in our open source repository, and people can see the source code right now.
I’m intensely worried about cloud cannibalization. I switched to anAGPLlicense specifically to protect against cannibalization by cloud vendors.
On the other hand, we believe we’re moving up the stack. If you look at our machine learning package, and how it’s so inextricably linked with the database, and the reference applications that we have in different segments, we’re going to be delivering more and more higher-level application functionality.
What are some of the biggest changes you’ve seen in the data landscape over the seven years you’ve been running Splice Machine?
Zweben: With the first generation of big data, it was all about data lakes, and let’s just get all the data the company has into one repository. Unfortunately, that has proven time and time again, at company after company, to just be data swamps.
Data repositories work and they’re scalable, but no one uses the data. This was a mistake for several reasons.
Instead of thinking about storing the data, companies should think about how to use the data. Start with the application and how you are going to make the application leverage new data sources.
The second reason this was a mistake was organizational: the data scientists who know AI were all centralized in one data science group, away from the application. They are not the subject matter experts for the application.
When you focus on the application and retrofit the application to make it smart and inject AI, you can get a multidisciplinary team. You have app developers, architects, subject-matter experts, data engineers and data scientists, all working together on one purpose. That is a radically more effective and productive organizational structure for modernizing applications with AI.