Tag Archives: about

For Sale – Budget Gaming PC (Collection, Nottingham)

I’m selling a PC I built in 2015. It has been wiped back to a factory state. I’m not sure about the operating system licence: after the wipe it still booted into Windows, so you may or may not need a product key (one can be found online for a few pounds).

The cable management is very poor, but it doesn’t affect performance.

As components can easily be damaged in transit, and because I don’t have suitable packaging to keep it safe, this will be collection only.

Specs:
Asus H81M-Plus motherboard (micro ATX)
Intel Core i5-4460 (quad core)
Asus Strix Nvidia GTX 960 GPU
Cooler Master CM Storm Enforcer case (front glows red), also has a 200mm red LED top fan
Samsung 840 Pro SSD 240GB
Seagate 2TB hard drive
Novatech 700W power supply
8GB RAM (single channel), I believe it’s HyperX black
LG Blu-ray drive

Comes with power cable

Go to Original Article
Author:

Gartner Names Microsoft a Leader in the 2019 Enterprise Information Archiving (EIA) Magic Quadrant – Microsoft Security

We often hear from customers about the explosion of data, and the challenge this presents for organizations in remaining compliant and protecting their information. We’ve invested in capabilities across the landscape of information protection and information governance, including archiving, retention, eDiscovery and communications supervision. In Gartner’s annual Magic Quadrant for Enterprise Information Archiving (EIA), Microsoft was named a Leader again in 2019.

According to Gartner, “Leaders have the highest combined measures of Ability to Execute and Completeness of Vision. They may have the most comprehensive and scalable products. In terms of vision, they are perceived to be thought leaders, with well-articulated plans for ease of use, product breadth and how to address scalability.” We believe this recognition represents our ability to provide best-in-class protection and deliver on innovations that keep pace with today’s compliance needs.

This recognition comes at a great point in our product journey. We are continuing to invest in solutions that are integrated into Office 365 and address customers’ information protection and information governance needs. Earlier this month, at our Ignite 2019 conference, we announced updates to our compliance portfolio, including new data connectors; machine learning-powered governance, retention, discovery and supervision; and innovative capabilities such as threading Microsoft Teams or Yammer messages into conversations, allowing you to efficiently review and export complete dialogues with context, not just individual messages. In our conversations with customers, many say these are the types of advancements that help them meet their compliance requirements more efficiently, without impacting end-user productivity.

Learn more

Read the complimentary report for the analysis behind Microsoft’s position as a Leader.

For more information about our Information Archiving solution, visit our website and stay up to date with our blog.

Gartner Magic Quadrant for Enterprise Information Archiving, Julian Tirsu, Michael Hoeck, 20 November 2019.

*This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft.

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally, and is used herein with permission. All rights reserved.

Go to Original Article
Author: Steve Clarke

Kasten backup aims for secure Kubernetes protection

People often talk about Kubernetes “Day 1,” when you get the platform up and running. Now Kasten wants to help with “Day 2.”

Kasten’s K10 is a data management and backup platform for Kubernetes. The latest release, K10 2.0, focuses on security and simplicity.

K10 2.0 includes support for Kubernetes authentication, role-based access control, OpenID Connect, AWS Identity and Access Management roles, customer-managed keys, and integrated encryption of artifacts at rest and in flight.

“Once you put data into storage, the Day 2 operations are critical,” said Krishnan Subramanian, chief research advisor at Rishidot Research. “Day 2 is as critical as Day 1.”

Day 2 — which includes data protection, mobility, backup and restore, and disaster recovery — is becoming a pain point for Kubernetes users, Kasten CEO Niraj Tolia said.

“In 2.0, we are focused on making Kubernetes backup easy and secure,” Tolia said.

Other features of the new Kasten backup software, which became generally available earlier in November, include a Kubernetes-native API, auto-discovery of the application environment, policy-driven operations, multi-tenancy support, and advanced logging and monitoring. Kasten backup enables IT teams to operate their environments while supporting developers’ ability to use the tools of their choice, according to the vendor.

Kasten K10 dashboard: K10 provides data management and backup for Kubernetes.

Kasten backup eyes market opportunity

Kasten, which launched its original product in December 2017, generally releases an update to its customers every two weeks. An update that’s not as major as 2.0 typically includes bug fixes, new features and increased depth in existing features. Tolia said there were 55 releases between 1.0 and 2.0.


Backup for container storage has become a hot trend in data protection. Kubernetes specifically is an open source system used to manage containers across private, public and hybrid cloud environments. Kubernetes can be used to manage microservice architectures and is deployable on most cloud providers.

“Everyone’s waking up to the fact that this is going to be the next VMware,” as in, the next infrastructure of choice, Tolia said.

Kubernetes backup products are popping up, but it looks like Kasten is a bit ahead of its time, Rishidot’s Subramanian said. He said he is seeing more enterprises using Kubernetes in production, for example, in moving legacy workloads to the platform, and that makes backup a critical element.

“Kubernetes is just starting to take off,” Subramanian said.

Kubernetes backup “has really taken off in the last two or three quarters,” Tolia said.

Subramanian said he is starting to see legacy vendors such as Dell EMC and NetApp tackling Kubernetes backup, as well as smaller vendors such as Portworx and Robin. He said Kasten had needed stronger security but caught up with K10 2.0. Down the road, he said he will look for Kasten to improve its governance and analytics.

Tolia said Kasten backup stands out because it’s “purpose-built for Kubernetes” and extends into multilayered data management.

In August, Kasten, which is based in Los Altos, Calif., closed a $14 million Series A funding round, led by Insight Partners. Tolia did not give Kasten’s customer count but said it has deployments across multiple continents.

Go to Original Article
Author:

Why it’s time to rethink the DevOps vs. ITIL debate

Historically, ITIL and DevOps have made poor bedfellows. DevOps is about speed and enablement, while ITIL, at least traditionally, is a far more prescriptive framework that introduces considerable delays into a DevOps workflow.

This isn’t to say that DevOps is perfect; many organizations have found — to their cost — that an open DevOps approach has led to major issues, such as relatively uncontrolled code making its way from development to production.

However, DevOps has matured, as has the ITIL IT service management framework. And while the two approaches are not yet a perfect match, it’s time to stop thinking in terms of DevOps vs. ITIL, and instead consider DevOps and ITIL.

Where DevOps and ITIL intersect

DevOps shops must have the right checks and balances in place. They must codify processes that enable audits; feed errors back to developers for a fix; and create a feedback loop from end users to development and operations teams.

The latest version of the ITIL framework — ITIL 4 — is value-focused, which provides a solid foundation for these processes. And as DevOps continues to evolve, ITIL practices could help IT teams create idempotent environments.

Idempotency is the means by which an entity — such as an end user, developer, IT operations professional, or even an AI or machine learning engine — defines the desired outcome of any given change. Underlying tools then use scripts to create and provision packages into an operational environment that will achieve that desired outcome. This approach requires continuous monitoring and feedback loops to ensure that the outcome is not only achieved, but is maintained as the platform changes as well.
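
As a rough illustration, a desired-state tool such as PowerShell Desired State Configuration captures this idea. The sketch below is a hypothetical example, not tied to any particular ITIL or DevOps toolchain: it declares an outcome, and repeated runs converge on that same state rather than re-executing steps.

# Hypothetical sketch: declare the desired outcome; re-running it is idempotent.
Configuration WebServerState {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'   # the outcome, not the steps to reach it
        }
    }
}

WebServerState -OutputPath .\WebServerState    # compile the declaration to a MOF
Start-DscConfiguration -Path .\WebServerState -Wait -Verbose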

ITIL 4 embraces these concepts. The framework’s service value chain promotes an interconnected set of activities that deal with service creation, delivery and continuous improvement. It’s complementary to the continuous development and continuous delivery practices in DevOps. Yet there is a key difference between DevOps and ITIL here: The former emphasizes the under-the-hood, technical ways to perform these tasks, through the use of development tools and hard processes, while the latter focuses more on ensuring tasks occur in a controlled and auditable manner.

ITIL’s service value system overlies its service value chain. This broad system embodies how an organization works from opportunity or end-user demand to value delivery.

ITIL service value system

DevOps vs. ITIL for the business

At the time of publication, ITIL’s service value system far outstrips the current state of DevOps — specifically, the state of BizDevOps. In this DevOps model, business drivers shape how developers work, with the goal of meeting the needs of users who directly affect the organization’s bottom line. There has been some progress around BizDevOps, but not enough to make it a reality across enterprise IT shops. ITIL’s service value system could provide a layer on top of existing DevOps environments to enable this much-needed capability.

The majority of open source DevOps tools address the down-and-dirty technology needs of developers and operations staff, although some commercial systems also try to target the business. But these systems, to date, have been met with limited success, in part because IT departments, not the business, largely procure DevOps tools. In addition, IT teams still suffer from the perception of being somewhat removed from the rest of the organization, and so are loath to roll out and support the business front end of such tools.

ITIL, on the other hand, is nominally a business process tool — particularly under ITIL 4. It is incumbent on Axelos, the organization behind the ITIL framework, to make an effort to push ITIL to the business, as a means to provide the necessary controls over the end-to-end value chain across an organization.

It will be interesting to see if tool vendors and open source developers start to focus less on comparisons of DevOps vs. ITIL, and instead view the latter as a non-competitive framework that creates a standardized platform on which they can work. If DevOps teams and vendors can fold ITIL concepts and practices into their environments, everyone could win.

Go to Original Article
Author:

Wanted – MacBook Pro 15”

Hi
I am thinking about selling my MacBook Pro
Apple MacBook Pro 15″
2018 A1990
2.90GHz i9, 32GB, 1TB SSD
Retina, Touch Bar

This is in Excellent Condition with Box, Lead and Charger
Barely a sign of use.
Cycle count of 12
Space Grey, Touch Bar and Touch ID
15.4″ Retina LED, 2.9GHz 6 Core Intel Core i9
32GB of 2400MHz DDR4 SDRAM
1TB PCIe-based SSD, Radeon Pro Vega 20
macOS Mojave

Has AppleCare+ until April 2022

£3k

Go to Original Article
Author:

How Microsoft re-envisioned the data warehouse with Azure Synapse Analytics

About four years ago, the Microsoft Azure team began to notice a big problem troubling many of its customers. A mass migration to the cloud was in full swing, as enterprises signed up by the thousands to reap the benefits of flexible, large-scale computing and data storage. But the next iteration of that tech revolution, in which companies would use their growing stores of data to get more tangible business benefits, had stalled.

Technology providers, including Microsoft, had built a variety of systems to collect, retrieve and analyze enormous troves of information that could uncover market trends and insights, paving the way toward a new era of improved customer service, innovation and efficiency.

But those systems were built independently by different engineering teams and sold as individual products and services. They weren’t designed to connect with one another, and customers would have to learn how to operate them separately, wasting time, money and precious IT talent.

“Instead of trying to add more features to each of our services, we decided to take a step back and figure out how to bring their core capabilities together to make it easy for customers to collect and analyze all of their increasingly diverse data, to break down data silos and work together more collaboratively,” said Raghu Ramakrishnan, Microsoft’s chief technology officer for data.

At its Ignite conference this week in Orlando, Florida, Microsoft announced the end result of a yearslong effort to address the problem: Azure Synapse Analytics, a new service that merges the capabilities of Azure SQL Data Warehouse with new enhancements such as on-demand query as a service.

Microsoft said this new offering will help customers put their data to work much more quickly, productively and securely by pulling together insights from all data sources, data warehouses and big data analytics systems. And, the company said, with deeper integration between Power BI and Azure Machine Learning, Azure Synapse Analytics can reduce the time required to process and share that data, speeding up the insights that businesses can glean.

What’s more, it will allow many more businesses to take advantage of game-changing technologies like data analytics and artificial intelligence, which are helping scientists to better predict the weather, search engines to better understand people’s intent and workers to more easily handle mundane tasks.

This newest effort to break down data silos also builds on other Microsoft projects, such as the Open Data Initiative and Azure Data Share, which allows you to share data from multiple sources and even other organizations.

Microsoft said Azure Synapse Analytics is also designed to support the increasingly popular DevOps strategy, in which development and operations staff collaborate more closely to create and implement services that work better throughout their lifecycles.


A learning process

Azure Synapse Analytics is the result of a lot of work, and a little trial and error.

At first, Ramakrishnan said, the team developed high-level guidelines showing customers how to glue the systems together themselves. But they quickly realized that was too much to ask.

“That required a lot of expertise in the nitty-gritty of our platforms,” Ramakrishnan said. “Customers made it overwhelmingly clear that we needed to do better.”

So, the company went back to the drawing board and spent an additional two years revamping the heart of its data business, Azure SQL Data Warehouse, which lets customers build, test, deploy and manage applications and services in the cloud.

A breakthrough came when the company realized that customers need to analyze all their data in a single service, without having to copy terabytes of information across various systems to use different analytic capabilities – as has traditionally been the case with data warehouses and data lakes.

With the new offering, customers can use their data analytics engine of choice, such as Apache Spark or SQL, on all their data. That’s true whether it’s structured data, such as rows of numbers on spreadsheets, or unstructured data, such as a collection of social media posts.

This project was risky. It involved deep technical surgery: completely rewriting the guts of the SQL query processing engine to optimize it for the cloud and make it capable of instantly handling big bursts of work as well as very large and diverse datasets.

It also required unprecedented integration among several teams within Microsoft, some of whom would have to make hard choices. Established plans had to be scrapped. Resources earmarked for new features would be redirected to help make the entire system work better.

“In the beginning, the conversations were often heated. But as we got into the flow of it, they became easier. We began to come together,” Ramakrishnan said.

Microsoft also had to make sure that the product would work for any company, regardless of employees’ technical expertise.

“Most companies can’t afford to hire teams of 20 people to drive data projects and wire together multiple systems. There aren’t even enough skilled people out there to do all that work,” said Daniel Yu, director of product marketing for Azure Data and Artificial Intelligence.

Making it easy for customers

Customers can bring together various sources of data into a single feed with Azure Synapse Analytics Studio, a console, or single pane of glass, that allows a business professional with minimal technical expertise to locate and collect data from multiple sources like sales, supply chain, finance and product development. They can then choose how and where to store that data, and they can use it to create reports through Microsoft’s popular Power BI analytics service.

In a matter of hours, Azure Synapse will deliver useful business insights that used to take days or even weeks and months, said Rohan Kumar, corporate vice president for Azure Data.

“Let’s say an executive wants a detailed report on sales performance in the eastern U.S. over the last six months,” Kumar said. “Today, a data engineer has to do a lot of work to find where that data is stored and write a lot of brittle code to tie various services together. They might even have to bring in a systems integrator partner. With Azure Synapse, there’s no code required. It’s a much more intuitive experience.”

The complexity of the technical problems Azure Synapse addressed would be hard to overstate. Microsoft had to meld multiple independent components into one coherent form factor, while giving a wide range of people – from data scientists to line of business owners – their preferred tools for accessing and using data.




That includes products like SQL Server, the open source programming interface Apache Spark, Azure Data Factory and Azure Data Studio, as well as notebook interfaces preferred by many data professionals to clean and model data.

“Getting all those capabilities to come together fluidly, making it run faster, simpler, eliminating overlapping processes – there was some scary good stuff getting done,” Ramakrishnan said.

The result is a data analytics system that will be as easy to use as a modern mobile phone. Just as the smartphone replaced several devices by making all of their core capabilities intuitively accessible in a single device, the Azure Synapse “smartphone for data” now allows a data engineer to build an entire end-to-end data pipeline in one place. It also enables data scientists and analysts to look at the underlying data in ways that are natural to them.

And just as the phone has driven waves of collaboration and business innovation, Azure Synapse will free up individuals and companies to introduce new products and services as quickly as they can dream them up, Microsoft said.

“If we can help different people view data through a lens that is natural to them, while it’s also visible to others in ways natural to them, then we will transform the way companies work,” Ramakrishnan said. “That’s how we should measure our success.”

Top photo: Rohan Kumar, corporate vice president for Azure Data, says Azure Synapse will deliver useful business insights that used to take days or even weeks and months. Photo by Scott Eklund/Red Box Pictures.


Go to Original Article
Author: Microsoft News Center

How to Supercharge PowerShell Objects for Hyper-V

The best thing about working with PowerShell is that everything is an object. This makes it easy to work with the properties of the thing you need to manage without a lot of scripting or complex text parsing. For example, when using the Hyper-V PowerShell module it is trivial to get specific details of a virtual machine.

Getting virtual machine properties with PowerShell
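
Something along these lines (a sketch; the exact command in the original screenshot may differ, and the VM name DOM1 is just one of the names used later in the post):

# Assumes the Hyper-V PowerShell module is available
Get-VM -Name DOM1
# or view every property and its value
Get-VM -Name DOM1 | Select-Object -Property *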

There are many properties to this object. You can pipe the object to Get-Member to see the definition or Select-Object to show all properties and their values.

But what if you want more? I usually do. For example, there is a CreationTime property for the virtual machine object. I’d like to be able to report on how old the virtual machine is. For a one-and-done approach I could write a command like this:

Defining a new custom property
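
A sketch of that one-and-done approach, using a calculated property (a hashtable) with Select-Object:

# Add a custom Age property calculated from CreationTime
Get-VM | Select-Object Name, CreationTime, @{Name = 'Age'; Expression = { (Get-Date) - $_.CreationTime }}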

Or maybe I’d like the MemoryAssigned value to be formatted in MB.

Defining a custom property for assigned memory
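
Something like this sketch (the property name MemoryAssignedMB is just an example):

# Show assigned memory in MB instead of bytes
Get-VM | Select-Object Name, @{Name = 'MemoryAssignedMB'; Expression = { $_.MemoryAssigned / 1MB }}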

The syntax is to define a hashtable with two keys: Name and Expression. The Name value is what you want to use for your custom property name and the Expression is a PowerShell scriptblock. In the scriptblock you can run as much code as you need. Use $_ to reference the current object in the pipeline. In the last example, PowerShell looks at the MemoryAssigned value for the CentOS virtual machine object and divides it by 1MB. Then it moves on to DOM1 and divides its value (2147483648) by 1MB to arrive at 2048, and so on.

Using Select-Object this way is great for your custom scripts and functions. But suppose I want to get these values all the time when working with the object in the console interactively. I don’t want to have to type all that Select-Object code every single time. Fortunately, there is another approach.

PowerShell’s Extensible Type System

PowerShell has an extensible type system. This means you can extend or modify the type definition of an object. PowerShell takes advantage of this feature all the time. Many of the properties you’ve seen in common objects have been added by Microsoft. You can use the Get-TypeData cmdlet to view what additions have been made to a given type.

default extensions to the virtual machine object
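
For example (the Hyper-V virtual machine type name below is an assumption; confirm it with Get-Member first):

# List the members already added to the VM type
Get-TypeData -TypeName Microsoft.HyperV.PowerShell.VirtualMachine |
    Select-Object -ExpandProperty Members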

You can see these additions with Get-Member.

Viewing the type additions
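
For instance, filtering on the extended member types (a sketch):

Get-VM | Get-Member -MemberType ScriptProperty, AliasProperty, NoteProperty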

By now, you are thinking, “How can I do this?” The good news is that it isn’t too difficult. Back in the PowerShell stone age you would have had to create a custom XML file, which you can still do. But I think it is just as easy to use the Update-TypeData cmdlet. I’ll give you some examples, but please take the time to read the full help and examples. One thing to keep in mind is that any type extensions you make last only as long as your PowerShell session is running. The next time you start PowerShell you will have to re-define them. I dot source a script file in my profile script that adds my type customizations.

Updating Type Data

The first thing you will need to know is the object’s type name. You can see that when piping to Get-Member.
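
One quick way to pull the type name out (a sketch):

Get-VM | Get-Member | Select-Object -Unique -ExpandProperty TypeName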

When you define a new type extension you need to determine the member type. This will be something like a ScriptProperty or an AliasProperty. A ScriptProperty is a value that is calculated by running some piece of PowerShell code. If you’ve created a custom type with Select-Object to test, as I did earlier, you can re-use that expression scriptblock. And of course, you need to define a name for your new property.

With this code, I’m defining a new script property called Age. The value will be the result of the scriptblock that subtracts the CreationTime property of the object from now. The one difference to note compared to my Select-Object version is that here I use $this instead of $_.
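
The definition would look something like this sketch (the type name is assumed to be the one Get-Member reports for Hyper-V virtual machines):

$ageProperty = @{
    TypeName   = 'Microsoft.HyperV.PowerShell.VirtualMachine'
    MemberType = 'ScriptProperty'
    MemberName = 'Age'
    Value      = { (Get-Date) - $this.CreationTime }   # note $this, not $_
}
Update-TypeData @ageProperty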

I went ahead and also created an alias property of “Created” that points to “CreationTime”. If you try to create a type extension that already exists, PowerShell will complain. I typically use -Force to overwrite any existing definitions. But now I can see these new type members.
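
Roughly like this (again a sketch, with -Force added so it can be re-run):

Update-TypeData -TypeName Microsoft.HyperV.PowerShell.VirtualMachine -MemberType AliasProperty -MemberName Created -Value CreationTime -Force

# Verify the new members are on the type
Get-VM | Get-Member -Name Age, Created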

Verifying new VM type extensions

And of course, I can use them in PowerShell.

Using the new type extensions in PowerShell
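
For example, sorting on the new Age property and showing the Created alias (a sketch):

Get-VM | Sort-Object Age -Descending | Select-Object Name, Created, Age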

There’s really no limit to what you can do. Here is a script property that tells me when the VM started.
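
One way to sketch that is to derive the start time from the VM’s Uptime property (the Started member name is my own choice, and the value only makes sense while the VM is running):

Update-TypeData -TypeName Microsoft.HyperV.PowerShell.VirtualMachine -MemberType ScriptProperty -MemberName Started -Force -Value {
    # Only meaningful for a running VM
    if ($this.State -eq 'Running') { (Get-Date) - $this.Uptime } else { $null }
}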

And here is some code that takes the memory properties and creates an ‘MB’ version of each one. I’m using some PowerShell scripting and splatting to simplify the process.
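
A sketch of that approach, splatting the parameters that stay the same and building a small scriptblock for each memory property (the property list is based on the standard Hyper-V VM object):

# Shared Update-TypeData parameters
$shared = @{
    TypeName   = 'Microsoft.HyperV.PowerShell.VirtualMachine'
    MemberType = 'ScriptProperty'
    Force      = $true
}

foreach ($property in 'MemoryAssigned', 'MemoryDemand', 'MemoryStartup', 'MemoryMinimum', 'MemoryMaximum') {
    # Build a scriptblock such as { $this.MemoryAssigned / 1MB }
    $value = [scriptblock]::Create("`$this.$property / 1MB")
    Update-TypeData @shared -MemberName "$($property)MB" -Value $value
}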

Now I have all sorts of information at my fingertips.

Taking advantage of all the new properties
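
For instance, pulling several of the new members together (assuming the extensions above have been loaded):

Get-VM | Select-Object Name, State, Started, Age, MemoryAssignedMB, MemoryDemandMB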

As long as I run the Update-TypeData commands, I will have this information available. My PSHyperVTools module, which you can install from the PowerShell Gallery, adds some of these type extensions automatically, plus a few more. You can read about it in the project’s GitHub repository.

Does this look like something you would use? What sort of type extensions did you come up with? What else do you wish you could see or do? Comments and feedback are always welcome.


Go to Original Article
Author: Jeffery Hicks
