Tag Archives: service

Logically acquires Carolinas IT in geographic expansion

Logically Inc., a managed service provider based in Portland, Maine, has acquired Carolinas IT, a North Carolina MSP with cloud, security and core IT infrastructure skills.

The deal continues Logically’s geographic expansion. The company launched in April, building upon the 2018 merger of Portland-based Winxnet Inc. and K&R Network Solutions of San Diego. Logically in August added a New York metro area company through its purchase of Sullivan Data Management, an outsourced IT services firm.

Carolinas IT, based in Raleigh, provides a base from which Logically can expand in the region, Logically CEO Christopher Claudio said. “They are a good launching pad,” he said.

Carolinas IT’s security and compliance practice also attracted Logically’s interest. Claudio called security and compliance a growing market and one that will continue to expand, given the challenges organizations face with regulatory obligations and the risk of data breaches.

“You can’t be an MSP without a security and compliance group,” he said.

Carolinas IT’s security services include risk assessment, HIPAA consulting and auditing, security training and penetration testing. The company’s cloud offerings include Office 365 and private hosted cloud services. And its professional services personnel possess certifications from vendors such as Cisco, Citrix, Microsoft, Symantec and VMware.


Claudio cited Carolinas IT’s “depth of talent,” recurring revenue and high client retention rate as some of the business’s favorable attributes.

Mark Cavaliero, Carolinas IT’s president and CEO, will remain at the company for the near term but plans to move on and will not have a long-term leadership role, Claudio said. But Cavaliero will have an advisory role, Claudio added, noting “he has built a great business.”

Logically’s Carolinas IT purchase continues a pattern of companies pulling regional MSPs together to create national service providers. Other examples include Converge Technology Solutions Corp. and Mission.


AWS, Azure and Google peppered with outages in same week

AWS, Microsoft Azure and Google Cloud all experienced service degradations or outages this week, an outcome that suggests customers should accept that cloud outages are a matter of when, not if.

In AWS’s Frankfurt region, EC2, Relational Database Service, CloudFormation and Auto Scaling were all affected Nov. 11, with the issues now resolved, according to AWS’s status page.

Azure DevOps services for Boards, Repos, Pipelines and Test Plans were affected for a few hours in the early hours of Nov. 11, according to its status page. Engineers traced the problem to identity calls and reset access tokens to restore the system, the page states.

Google Cloud said some of its APIs in several U.S. regions were affected, and others experienced problems globally on Nov. 11, according to its status dashboard. Affected APIs included those for Compute Engine, Cloud Storage, BigQuery, Dataflow, Dataproc and Pub/Sub. Those issues were resolved later in the day.

Google Kubernetes Engine also went through some hiccups over the past week, with nodes in some recently upgraded container clusters hitting high levels of kernel panics. A kernel panic, the Unix-family counterpart of Windows’ “blue screen of death,” is a condition in which a system’s OS can’t recover from an error quickly or easily.

The company rolled out a series of fixes, but as of Nov. 13, the status page for GKE remained in orange status, which indicates a small number of projects are still affected.

AWS, Microsoft and Google have yet to provide the customary post-mortem reports on why the cloud outages occurred, although more information could emerge soon.

Move to cloud means ceding some control

The cloud outages at AWS, Azure and Google this week were far from the worst experienced by customers in recent years. In September 2018, severe weather in Texas caused a power surge that shut down dozens of Azure services for days.


Cloud providers have aggressively pursued region and zone expansions to help with disaster recovery and high-availability scenarios. But customers must still architect their systems to take advantage of the expanded footprint.
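As a rough sketch of what architecting for that expanded footprint means in practice (the zone and instance names here are hypothetical), even simple round-robin placement keeps a single zone failure from taking out a whole tier:

```python
def spread_across_zones(instances, zones):
    """Round-robin instances across availability zones so the loss of
    any one zone takes out only a fraction of the replicas."""
    placement = {zone: [] for zone in zones}
    for i, instance in enumerate(instances):
        placement[zones[i % len(zones)]].append(instance)
    return placement

# Hypothetical web tier spread across three zones in one region
placement = spread_across_zones(
    ["web-1", "web-2", "web-3", "web-4", "web-5", "web-6"],
    ["us-east-1a", "us-east-1b", "us-east-1c"],
)
```

With this layout, losing any single zone leaves two-thirds of the tier serving traffic, which is the kind of design decision the providers leave to the customer.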

Still, customers have much less control when it comes to public cloud usage, according to Stephen Elliot, an analyst at IDC. That reality requires some operational sophistication.


“Networks are so interconnected and distributed, lots of partners are involved in making a service perform and available,” he said. “[Enterprises] need a risk mitigation strategy that covers people, process, technologies, SLAs, etc. It’s a myth that outages won’t happen. It could be from weather, a black swan event, security or a technology glitch.”


This fact underscores why more companies are experimenting with and deploying workloads across hybrid and multi-cloud infrastructures, said Jay Lyman, an analyst at 451 Research. “They either control the infrastructure and downtime with on-premises deployments or spread their bets across multiple public clouds,” he said.

Ultimately, enterprise IT shops that weigh the challenges and costs of running their own infrastructure against the public cloud providers find them difficult to match, said Holger Mueller, an analyst at Constellation Research.

“That said, performance and uptime are validated every day, and should a major and longer public cloud outage happen, it could give pause among less technical board members,” he added.


Datrium opens cloud DR service to all VMware users

Datrium plans to open its new cloud disaster recovery as a service to any VMware vSphere users in 2020, even if they’re not customers of Datrium’s DVX infrastructure software.

Datrium released disaster recovery as a service with VMware Cloud on AWS in September for DVX customers as an alternative to potentially costly professional services or a secondary physical site. DRaaS enables DVX users to spin up protected virtual machines (VMs) on demand in VMware Cloud on AWS in the event of a disaster. Datrium takes care of all of the ordering, billing and support for the cloud DR.

In the first quarter, Datrium plans to add a new Datrium DRaaS Connect for VMware users who deploy vSphere infrastructure on premises and do not use Datrium storage. Datrium DRaaS Connect software would deduplicate, compress and encrypt vSphere snapshots and replicate them to Amazon S3 object storage for cloud DR. Users could set backup policies and categorize VMs into protection groups, setting different service-level agreements for each one, Datrium CTO Sazzala Reddy said.
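Datrium hasn’t published an API for these policies, but the protection-group idea can be sketched as follows (all names, fields and intervals here are hypothetical illustrations, not Datrium’s actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionGroup:
    """Hypothetical model of a DRaaS protection group: a set of VMs
    that share one snapshot-and-replication SLA."""
    name: str
    snapshot_interval_min: int   # how often snapshots are taken
    retention_days: int          # how long replicated snapshots are kept
    vms: list = field(default_factory=list)

def vms_due_for_snapshot(group, minutes_since_last):
    """Return the group's VMs if the SLA interval has elapsed."""
    if minutes_since_last >= group.snapshot_interval_min:
        return list(group.vms)
    return []

# Tighter SLA for critical databases, looser for app servers
tier1 = ProtectionGroup("tier1-databases", snapshot_interval_min=15,
                        retention_days=30, vms=["sql-01", "sql-02"])
tier2 = ProtectionGroup("tier2-app-servers", snapshot_interval_min=240,
                        retention_days=7, vms=["app-01"])

due = vms_due_for_snapshot(tier1, minutes_since_last=20)
```

The point of grouping is that one policy decision (interval, retention) applies to a whole application tier rather than being set per VM.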

A second Datrium DRaaS Connect offering will enable VMware Cloud users to automatically fail over workloads from one AWS Availability Zone (AZ) to another if an Amazon AZ goes down. Datrium stores deduplicated vSphere snapshots on Amazon S3, and the snapshots are replicated to three AZs by default, Datrium chief product officer Brian Biles said.

Speedy cloud DR

Datrium claims system recovery can happen on VMware Cloud within minutes from the snapshots stored in Amazon S3, because it requires no conversion from a different virtual machine or cloud format. Unlike some backup products, Datrium does not convert VMs from VMware’s format to Amazon’s format and can boot VMs directly from the Amazon data store.

“The challenge with a backup-only product is that it takes days if you want to rehydrate the data and copy the data into a primary storage system,” Reddy said.

Although the “instant RTO” that Datrium claims to provide may not be important to all VMware users, reducing recovery time is generally a high priority, especially to combat ransomware attacks. Datrium commissioned a third party to conduct a survey of 395 IT professionals, and about half said they experienced a DR event in the last 24 months. Ransomware was the leading cause, hitting 36% of those who reported a DR event, followed by power outages (26%).

The Orange County Transportation Authority (OCTA) information systems department spent a weekend recovering from a zero-day malware exploit that hit nearly three years ago on a Thursday afternoon. The malware came in through a contractor’s VPN connection and took out more than 85 servers, according to Michael Beerer, a senior section manager for online system and network administration of OCTA’s information systems department.

Beerer said the information systems team restored critical applications by Friday evening and the rest by Sunday afternoon. But OCTA now wants to recover more quickly if a disaster should happen again, he said.

OCTA is now building out a new data center with Datrium DVX storage for its VMware VMs and possibly Red Hat KVM in the future. Beerer said DVX provides an edge in performance and cost over alternatives he considered. Because DVX disaggregates storage and compute nodes, OCTA can increase storage capacity without having to also add compute resources, he said.

Datrium cloud DR advantages

Beerer said the addition of Datrium DRaaS would make sense because OCTA can manage it from the same DVX interface. Datrium’s deduplication, compression and transmission of only changed data blocks would also eliminate the need for a pricy “big, fat pipe” and reduce cloud storage requirements and costs over other options, he said. Plus, Datrium facilitates application consistency by grouping applications into one service and taking backups at similar times before moving data to the cloud, Beerer said.

Datrium’s “Instant RTO” is not critical for OCTA. Beerer said anything that can speed the recovery process is interesting, but users also need to weigh that benefit against any potential additional costs for storage and bandwidth.

“There are customers where a second or two of downtime can mean thousands of dollars. We’re not in that situation. We’re not a financial company,” Beerer said. He noted that OCTA would need to get critical servers up and running in less than 24 hours.

Reddy said Datrium offers two cost models: a low-cost option with a 60-minute window and a “slightly more expensive” option in which at least a few VMware servers are always on standby.

Pricing for Datrium DRaaS starts at $23,000 per year, with support for 100 hours of VMware Cloud on-demand hosts for testing, 5 TB of S3 capacity for deduplicated and encrypted snapshots, and up to 1 TB per year of cloud egress. Pricing was unavailable for the upcoming DRaaS Connect options.

Other cloud DR options

Jeff Kato, a senior storage analyst at Taneja Group, said the new Datrium options would open up to all VMware customers a low-cost DRaaS offering that requires no capital expense. He said most vendors that offer DR from their on-premises systems to the cloud force customers to buy their primary storage.

George Crump, president and founder of Storage Switzerland, said data protection vendors such as Commvault, Druva, Veeam, Veritas and Zerto also can do some form of recovery in the cloud, but it’s “not as seamless as you might want it to be.”

“Datrium has gone so far as to converge primary storage with data protection and backup software,” Crump said. “They have a very good automation engine that allows customers to essentially draw their disaster recovery plan. They use VMware Cloud on Amazon, so the customer doesn’t have to go through any conversion process. And they’ve solved the riddle of: ‘How do you store data in S3 but recover on high-performance storage?’”

Scott Sinclair, a senior analyst at Enterprise Strategy Group, said using cloud resources for backup and DR often means either expensive, high-performance storage or lower cost S3 storage that requires a time-consuming migration to get data out of it.

“The Datrium architecture is really interesting because of how they’re able to essentially still let you use the lower cost tier but make the storage seem very high performance once you start populating it,” Sinclair said.


Experts on demand: Your direct line to Microsoft security insight, guidance, and expertise – Microsoft Security

Microsoft Threat Experts is the managed threat hunting service within Microsoft Defender Advanced Threat Protection (ATP) that includes two capabilities: targeted attack notifications and experts on demand.

Today, we are extremely excited to share that experts on demand is now generally available and gives customers direct access to real-life Microsoft threat analysts to help with their security investigations.

With experts on demand, Microsoft Defender ATP customers can engage directly with Microsoft security analysts to get guidance and insights needed to better understand, prevent, and respond to complex threats in their environments. This capability was shaped through partnership with multiple customers across various verticals by investigating and helping mitigate real-world attacks. From deep investigation of machines that customers had a security concern about, to threat intelligence questions related to anticipated adversaries, experts on demand extends and supports security operations teams.

The other Microsoft Threat Experts capability, targeted attack notifications, delivers alerts that are tailored to organizations and provides as much information as can be quickly delivered to bring attention to critical threats in their network, including the timeline, scope of breach, and the methods of intrusion. Together, the two capabilities make Microsoft Threat Experts a comprehensive managed threat hunting solution that provides an additional layer of expertise and optics for security operations teams.

Experts on the case

By design, the Microsoft Threat Experts service has as many use cases as there are unique organizations with unique security scenarios and requirements. One particular case showed how an alert in Microsoft Defender ATP led to informed customer response, aided by a targeted attack notification that progressed to an experts on demand inquiry, resulting in the customer fully remediating the incident and improving their security posture.

In this case, Microsoft Defender ATP endpoint protection capabilities recognized a new malicious file in a single machine within an organization. The organization’s security operations center (SOC) promptly investigated the alert and developed the suspicion it may indicate a new campaign from an advanced adversary specifically targeting them.

Microsoft Threat Experts, who are constantly hunting on behalf of this customer, had independently spotted and investigated the malicious behaviors associated with the attack. With knowledge about the adversaries behind the attack and their motivation, Microsoft Threat Experts sent the organization a bespoke targeted attack notification, which provided additional information and context, including the fact that the file was related to an app that was targeted in a documented cyberattack.

To create a fully informed path to mitigation, experts pointed to information about the scope of compromise, relevant indicators of compromise, and a timeline of observed events, which showed that the file executed on the affected machine and proceeded to drop additional files. One of these files attempted to connect to a command-and-control server, which could have given the attackers direct access to the organization’s network and sensitive data. Microsoft Threat Experts recommended full investigation of the compromised machine, as well as the rest of the network for related indicators of attack.

Based on the targeted attack notification, the organization opened an experts on demand investigation, which allowed the SOC to have a line of communication and consultation with Microsoft Threat Experts. Microsoft Threat Experts were able to immediately confirm the attacker attribution the SOC had suspected. Using Microsoft Defender ATP’s rich optics and capabilities, coupled with intelligence on the threat actor, experts on demand validated that there were no signs of second-stage malware or further compromise within the organization. Since, over time, Microsoft Threat Experts had developed an understanding of this organization’s security posture, they were able to share that the initial malware infection was the result of a weak security control: allowing users to exercise unrestricted local administrator privilege.

Experts on demand in the current cybersecurity climate

On a daily basis, organizations have to fend off an onslaught of increasingly sophisticated attacks that present unique security challenges: supply chain attacks, highly targeted campaigns, hands-on-keyboard attacks. With Microsoft Threat Experts, customers can work with Microsoft to augment their security operations capabilities and increase confidence in investigating and responding to security incidents.

Now that experts on demand is generally available, Microsoft Defender ATP customers have an even richer way of tapping into Microsoft’s security experts and gaining access to the skills, experience, and intelligence necessary to face adversaries.

Experts on demand provides insights into attacks, technical guidance on next steps, and advice on risk and protection. Experts can be engaged directly from within the Windows Defender Security Center, so they are part of the existing security operations experience.

We are happy to bring experts on demand within reach of all Microsoft Defender ATP customers. Start your 90-day free trial via the Microsoft Defender Security Center today.

Learn more about Microsoft Defender ATP’s managed threat hunting service here: Announcing Microsoft Threat Experts.

Author: Microsoft News Center

Google Cloud tackles Spark on Kubernetes

An early version of a Google Cloud service that runs Apache Spark on Kubernetes is now available, but more work will be required to flesh out the container orchestration platform’s integrations with data analytics tools.

Kubernetes and containers haven’t been renowned for their use in data-intensive, stateful applications, including data analytics. But there are benefits to using Kubernetes as a resource orchestration layer under applications such as Apache Spark, rather than the Hadoop YARN resource manager and job scheduler with which Spark is typically associated. Developers and IT ops gain the advantages containers bring to any application, such as portability across systems and consistency in configuration; automated provisioning and scaling for workloads, handled in the Kubernetes layer or by Helm charts; and better resource efficiency compared with virtual or bare metal machines.

“Analytical workloads, in particular, benefit from the ability to add rapidly scalable cloud capacity for spiky peak workloads, whereas companies might want to run routine, predictable workloads in a virtual private cloud,” said Doug Henschen, an analyst at Constellation Research in Cupertino, Calif. 

Google, which offers managed versions of Apache Spark and Apache Hadoop that run on YARN through its Cloud Dataproc service, would prefer to use its own Kubernetes platform to orchestrate resources — and to that end, released an alpha preview integration for Spark on Kubernetes within Cloud Dataproc this week. Other companies, such as Databricks (run by the creators of Apache Spark) and D2iQ (formerly Mesosphere), support Spark on Kubernetes, but Google Cloud Dataproc stands to become the first of the major cloud providers to include it in a managed service.


Apache Spark has had a native Kubernetes scheduler since version 2.3, and Hadoop added native container support in Hadoop 3.0.3, both released in May 2018. However, Hadoop’s container support is still tied to HDFS and is too complex, in Google’s view.

“People have gotten Docker containers running on Hadoop clusters using YARN, but Hadoop 3’s container support is probably about four years too late,” said James Malone, product manager for Cloud Dataproc at Google. “It also doesn’t really solve the problems customers are trying to solve, from our perspective — customers don’t care about managing [Apache data warehouse and analytics apps] Hive or Pig, and want to use Kubernetes in hybrid clouds.”
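For reference, Spark’s native Kubernetes scheduler is driven through spark-submit, with the Kubernetes API server standing in for the YARN ResourceManager. A sketch of assembling such an invocation (the API server address, image name and jar path are placeholder values):

```python
def spark_submit_k8s(api_server, image, app_jar, main_class, executors=3):
    """Build a spark-submit command for Spark's native Kubernetes
    scheduler (available since Spark 2.3): Kubernetes acts as the
    cluster manager in place of YARN."""
    return [
        "spark-submit",
        "--master", f"k8s://{api_server}",  # k8s API server, not a YARN RM
        "--deploy-mode", "cluster",
        "--class", main_class,
        "--conf", f"spark.executor.instances={executors}",
        "--conf", f"spark.kubernetes.container.image={image}",
        app_jar,
    ]

cmd = spark_submit_k8s(
    "https://203.0.113.10:6443",               # hypothetical API server
    "example.com/spark:2.4.4",                 # hypothetical image
    "local:///opt/spark/examples/jars/spark-examples.jar",
    "org.apache.spark.examples.SparkPi",
)
```

Note there is no Hive, Pig or HDFS anywhere in that invocation, which is essentially Google’s argument above.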

Spark on Kubernetes only scratches the surface of big data integration

Cloud Dataproc’s Spark on Kubernetes implementation remains in a very early stage, and will require updates upstream to Spark as well as Kubernetes before it’s production-ready. Google also has its sights set on support for more Apache data analytics apps, including the Flink data stream processing framework, Druid low-latency data query system and Presto distributed SQL query engine.

“It’s still in alpha, and that’s by virtue of the fact that the work that we’ve done here has been split into multiple streams,” Malone said. One of those workstreams is to update Cloud Dataproc to run Kubernetes clusters. Another is to contribute to the upstream Spark Kubernetes operator, which remains in the experimental stage within Spark Core. Finally, Cloud Dataproc must brush up performance enhancement add-ons such as external shuffle service support, which aids in the dynamic allocation of resources.
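The external shuffle service matters because dynamic allocation can only reclaim idle executors safely if their shuffle output remains served after the executor exits. A sketch of the Spark properties involved (the executor counts are arbitrary examples):

```python
# Spark properties that tie dynamic allocation to an external shuffle
# service: without the shuffle service, removing an executor would
# lose the shuffle files it wrote.
dynamic_allocation_conf = {
    "spark.dynamicAllocation.enabled": "true",
    "spark.dynamicAllocation.minExecutors": "2",   # example floor
    "spark.dynamicAllocation.maxExecutors": "20",  # example ceiling
    "spark.shuffle.service.enabled": "true",       # external shuffle service
}

def to_submit_args(conf):
    """Flatten a properties dict into repeated --conf arguments."""
    args = []
    for key, value in sorted(conf.items()):
        args += ["--conf", f"{key}={value}"]
    return args

submit_args = to_submit_args(dynamic_allocation_conf)
```

On YARN the shuffle service runs inside the NodeManagers; providing an equivalent on Kubernetes is part of the work Google describes.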

For now, IT pros who want to run Spark on Kubernetes must assemble their own integrations among the upstream Spark Kubernetes scheduler, supported Spark from Databricks and Kubernetes cloud services. Customers that seek hybrid cloud portability for Spark workloads must also implement a distributed storage system from vendors such as Robin Systems or Portworx. All of it can work, but without many of the niceties of fully integrated cloud platform services that would make life easier.

For example, running Spark jobs written in Python on Kubernetes is a bit trickier than running jobs written in Scala.

“The Python experience of Spark in Kubernetes has always lagged the Scala experience, mostly because deploying a compiled artifact in Scala is just easier logistically than pulling in dependencies for Python jobs,” said Michael Bishop, co-founder and board member at Alpha Vertex, a New York-based fintech startup that uses machine learning deployed in a multi-cloud Kubernetes infrastructure to track market trends for financial services customers. “This is getting better and better, though.”

There also remain fundamental differences between Spark’s job scheduler and Kubernetes that must be smoothed out, Bishop said.

“There is definitely an impedance [between the two schedulers],” he said. “Spark is intimately aware of ‘where’ is for [nodes], while Kubernetes doesn’t really care beyond knowing a pod needs a particular volume mounted.”

Google will work on sanding down these rough edges, Malone pledged.

“For example, we have an external shuffle service, and we’re working hard to make it work with both YARN and Kubernetes Spark,” he said.


HashiCorp Consul plays to multi-platform strength with Azure

Microsoft Azure users will get a hosted version of the HashiCorp Consul service mesh as multi-platform interoperability becomes a key feature for IT shops and cloud providers alike.

Service mesh is an architecture for microservices networking that uses a sidecar proxy to orchestrate and secure network connections among complex ephemeral services. HashiCorp Consul is one among several service mesh control planes available, but its claim to fame for now is that it can connect multiple VM-based or container-based applications in any public cloud region or on-premises deployment, through the Consul Connect gateway released last year.
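To make the sidecar model concrete, here is a minimal sketch of registering a service with a Connect-managed sidecar proxy through the local Consul agent’s HTTP API, assuming an agent on the default 127.0.0.1:8500 (the service name and port are made up):

```python
import json
import urllib.request

def build_registration(name, port):
    """Service registration payload for the Consul agent API. The empty
    SidecarService block asks Consul (1.2+) to derive a Connect sidecar
    proxy registration with default settings."""
    return {
        "Name": name,
        "Port": port,
        "Connect": {"SidecarService": {}},
    }

def register(payload, agent="http://127.0.0.1:8500"):
    """PUT the registration to the local agent's HTTP API."""
    req = urllib.request.Request(
        f"{agent}/v1/agent/service/register",
        data=json.dumps(payload).encode(),
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

payload = build_registration("web", 8080)
# register(payload)  # requires a running Consul agent
```

The sidecar proxy that Consul derives from this registration is what actually carries, encrypts and authorizes the service-to-service traffic.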

HashiCorp Consul Service on Azure (HCS), released to private beta this week, automatically provisions clusters that run Consul service discovery and service mesh software within Azure. HashiCorp site reliability engineers also manage the service behind the scenes, but it’s billed through Azure and provisioned via the Azure console and the Azure Managed Applications service catalog.

The two companies unveiled this expansion to their existing partnership this week at HashiConf in Seattle, and touted their work together on service mesh interoperability, which also includes the Service Mesh Interface (SMI), released in May. SMI defines a set of common APIs that connect multiple service mesh control planes such as Consul, Istio and Linkerd.

Industry watchers expect such interconnection — and coopetition — to be a priority for service mesh projects, at least in the near future, as enterprises struggle to make sense of mixed infrastructures that include legacy applications on bare metal along with cloud-native microservices in containers.

“The only way out is to get these different mesh software stacks to interoperate,” said John Mitchell, formerly chief platform architect at SAP Ariba, a HashiCorp Enterprise shop, and now an independent digital transformation consultant who contracts with HashiCorp, among others. “They’re all realizing they can’t try to be the big dog all by themselves, because it’s a networking problem. Standardization of that interconnect, that basic interoperability, is the only way forward — or they all fail.”


Microsoft and HashiCorp talked up multi-cloud management as a job for service mesh, but real-world multi-cloud deployments are still a bleeding-edge scenario at best among enterprises. However, the same interoperability problem faces any enterprise with multiple Kubernetes clusters, or assets deployed both on premises and in the public cloud, Mitchell said.

“Nobody who’s serious about containers in production has just one Kubernetes cluster,” he said. “Directionally, multiplatform interoperability is where everybody has to go, whether they realize it yet or not.”

The tangled web of service mesh interop

For now, Consul has a slight edge over Google and IBM’s open source Istio service mesh control plane, in the maturity of its Consul Connect inter-cluster gateway and ability to orchestrate VMs and bare metal in addition to Kubernetes-orchestrated containers. Clearly, it’s pushing this edge with HashiCorp Consul Service on Azure, but it won’t be long before Istio catches up. Istio Gateway and Istio Multicluster projects both emerged this year, and the ability to integrate virtual machines is also in development. Linkerd has arguably the best production-use bona fides in VM-based service mesh orchestration. All the meshes use the same Envoy data plane, which will make differentiation between them in the long term even more difficult.

“Service mesh will become like electricity, just something you expect,” said Tom Petrocelli, an analyst at Amalgam Insights in Arlington, Mass. “The vast majority of people will go with what’s in their preferred cloud platform.”

HCS could boost Consul’s profile, given Microsoft’s strength as a cloud player — but it will depend more on how the two companies market it than its technical specifications, Petrocelli said. At this stage, Consul doesn’t interoperate with Azure Service Fabric, Microsoft’s original hosted service mesh, which is important if it’s to get widespread adoption, in Petrocelli’s view.

“I’m not really going to get excited about something in Azure that doesn’t take advantage of Azure’s own fabric,” he said.

Without Service Fabric integration to widen Consul’s appeal to Azure users, it’s unlikely the market for HCS will pull in many new customers, Petrocelli said. Also, whether Microsoft positions HCS as its service mesh of choice for Azure, or makes it one among many hosted service mesh offerings, will decide how widely used it will be, in his estimation.

“If [HCS] is one of many [service mesh offerings on Azure], it’s nice if you happen to be a HashiCorp customer that also uses Azure,” Petrocelli said.


How to handle service downtime in the cloud age

As we all know, the cloud is not a cure-all. What happens when you have a service outage because your cloud went offline or your internet provider experiences issues?

If your organization depends on SaaS services, some planning and a set of IT infrastructure backup servers can help you weather the storm: interact with customers, answer questions and keep some level of business functionality. Losing the connection to the cloud or a service shouldn’t bring your business to a halt. You can set up a safety net in the form of backup virtual machines for the essential services that have moved to the cloud.

With so much focus today on the cloud, it’s almost impossible to think about what would happen if it wasn’t there. Azure, AWS and all the major cloud providers have measures in place to prevent them from ever fully going offline. While that is ideal for the cloud vendors, it doesn’t mean you can always connect to that cloud. A DDoS attack against your location or internet provider could prevent you from reaching the cloud. Something as simple as a backhoe cutting through cables near your facility can sever that cloud connectivity in the most non-technical way possible. So, while the cloud might not go down, your connection to it might. How do you cope in that sort of situation?
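One way to reason about this scenario is to probe the endpoints you depend on and pick an operating mode from the results. A simplified sketch (the endpoint names and threshold are arbitrary):

```python
def connectivity_mode(probe_results, threshold=0.5):
    """Decide an operating mode from endpoint reachability probes.
    probe_results maps endpoint name -> bool (reachable)."""
    if not probe_results:
        return "unknown"
    reachable = sum(1 for ok in probe_results.values() if ok)
    ratio = reachable / len(probe_results)
    if ratio == 1.0:
        return "normal"
    if ratio >= threshold:
        return "degraded"        # some services up; queue work for the rest
    return "local-fallback"      # fail over to on-premises backup servers

# Hypothetical probe results during an ISP outage
mode = connectivity_mode({"m365": False, "crm": False, "dns": True})
```

The probes themselves could be as simple as timed HTTP requests; the useful part is having the fallback decision scripted in advance rather than improvised during the outage.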

When networking issues strike and users lose their connections, you need to make sure you have some way to keep the business running in the form of IT infrastructure backups.

Take stock of your application inventory

One of the fundamental questions you should ask is: Where are your applications? If you run SaaS-based applications for Office, CRM, sales and just about everything else, then you need a backup plan for these cloud services. Having a working IT infrastructure without working applications does not help the business. One of the challenges with many SaaS-based applications is that they don’t support an offline mode.

Some applications, such as Office 365 — depending on your licensing — allow local installations, which is ideal so long as your files remain local as well. That is where the challenges start: each time you address one piece on-site or in the cloud, something else comes up. We don’t often map out what it takes to do a specific task, because we assume the interconnected pieces will always work. That lack of foresight puts your business in a dangerous situation.

There are limits to what an emergency backup system can do

So, this brings up the question of how functional you would want your staff to be in the event of an outage. Full functionality during service downtime is not realistic; technically, anything is possible with unlimited resources, but paying for duplicate infrastructure and SaaS services would most likely be inefficient from a cost and coordination standpoint.

Instead, start with simple aspects such as email, documents and the desktop. The first hurdle to surmount is the ability to log in. The Windows OS almost assumes a connection to the internet. Try disconnecting your desktop and powering it up; even a home machine will crawl as it times out trying to log in and find all the connected services. Unlike home machines with local accounts, when you have a domain, you need locally stored login credentials or you’re not getting in. Local accounts and cached credentials are not exactly security best practices, but you need to balance security with the need to get employees working. Most laptop users won’t suffer through this, since they are usually set up to work offline, but that setup takes additional steps that are not traditionally done for desktops.

If you can log in, what about the applications? If you enabled offline access in Outlook, you will have access to email that was pulled down before the outage. The same goes for OneDrive if it synced before you lost the connection. Depending on the infrastructure, you might be limited to offline email access, file shares and printing, but that is better than having your staff staring at their desktops because they can’t access their data.

Look into ways to back up IT infrastructure services

You need to evaluate how much of the networking and infrastructure services (specifically DNS, DHCP, and file and print services) you want to keep in the data center. While it’s possible to put most of these infrastructure services in the cloud, consider keeping backup copies of them in reserve as virtual machines.

Unless you have moved everything offsite and have no on-premises resources, you could use a few Hyper-V hosts for the workloads that can’t move to Azure. What prevents you from keeping backup infrastructure servers as powered-down virtual machines on those Hyper-V hosts? They don’t need shared storage; local storage works just fine for these emergency servers.
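As a sketch of that idea, the snippet below builds the PowerShell `Start-VM` invocations for a set of backup VMs and can optionally run them. `Start-VM` is a standard Hyper-V module cmdlet, but the VM names and the dry-run wrapper are assumptions for illustration.

```python
# Sketch: bring powered-down backup infrastructure VMs online on a Hyper-V
# host. VM names are hypothetical; adapt them to your environment.
import subprocess

BACKUP_VMS = ["dc-backup", "dns-backup", "print-backup"]  # hypothetical names

def start_vm_command(vm_name: str) -> list:
    """Build the PowerShell invocation that starts one backup VM."""
    return ["powershell", "-NoProfile", "-Command", f"Start-VM -Name '{vm_name}'"]

def start_backup_vms(vm_names=BACKUP_VMS, dry_run=True):
    """Start each backup VM; with dry_run=True, just return the commands."""
    commands = [start_vm_command(name) for name in vm_names]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)  # raises if a VM fails to start
    return commands
```

Keeping the commands in one place like this also doubles as documentation of exactly which emergency servers exist.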


The cloud comes with so many offerings that it’s next to impossible to ignore what it can do. But part of the challenge with this migration is setting up backup servers for the services that have moved into the cloud. Some services, such as Active Directory, are trickier to keep in cold storage, but it is possible.

It’s important to maintain these backup infrastructure servers. It can’t be an “install once and forget it” situation. Update them several times a year so that, should a connectivity issue arise, your backups can fill the void.

It helps to note when changes occur with key systems, such as DNS, DHCP, and other network servers. How often do you create additional DHCP scopes? During this review process, you might find much of the infrastructure is more static than you realized, which helps when you’re formulating a plan to keep backup infrastructure up to date. These backups aren’t meant to be a straight swap, but rather something to give you internal networking and some additional functionality even when you can’t get traffic outside your building or it is limited in some fashion. It’s not meant to be perfect, but something to keep the business going while repairs are under way.
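One low-effort way to notice such changes is to fingerprint periodic exports of your DNS or DHCP configuration and compare hashes between reviews. The sketch below assumes plain-text exports with #-style comments; the file format is an assumption, not tied to any particular server product.

```python
# Sketch: detect drift in exported network-service configs (e.g. DNS zone
# files or DHCP scope exports) so backup servers are refreshed only when
# something actually changed.
import hashlib

def fingerprint(config_text: str) -> str:
    """Stable hash of a config export, ignoring blank lines and # comments."""
    lines = [ln.strip() for ln in config_text.splitlines()
             if ln.strip() and not ln.lstrip().startswith("#")]
    return hashlib.sha256("\n".join(lines).encode()).hexdigest()

def has_changed(previous_hash: str, current_text: str) -> bool:
    """True if the current export differs meaningfully from the baseline."""
    return fingerprint(current_text) != previous_hash
```

Storing the baseline hash alongside the powered-down VMs gives you a quick answer to "is this backup stale?" during each review.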

Go to Original Article

SolarWinds Discovery offers low-cost way to manage IT assets

IT asset management software vendor SolarWinds launched a software service aimed at IT organizations that seek a low-cost way to keep tabs on IT assets and improve their service delivery.

The software, called SolarWinds Discovery, is a SaaS tool designed to help IT teams locate, map and manage their software and hardware assets. It combines agent and agentless technology to give IT pros who manage and monitor critical assets a view into those assets and insights about them.

“IT service delivery requires managing the lifecycle of the technology that enables customers to meet their needs,” said Gartner analyst Roger Williams. “Many organizations, however, do not have visibility into everything on their network due to poor controls and an inability to keep up with the pace of change in their environment.”

The agentless SolarWinds Discovery Scanner locates and collects information on IP connected devices, like servers, routers, switches, firewalls, storage arrays, VMware hosts, VMs and printers, according to the company.
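As a generic illustration of how agentless, IP-based discovery of this kind works (this is not SolarWinds code; the subnet and port list are assumptions), a scanner can sweep a subnet and record which hosts answer on common management ports:

```python
# Generic agentless-discovery sketch: expand a CIDR block, then attempt TCP
# connections on a few well-known ports to find responsive devices.
import ipaddress
import socket

COMMON_PORTS = [22, 80, 443, 3389]  # SSH, HTTP, HTTPS, RDP

def hosts_in(subnet: str):
    """Expand a CIDR block into individual host addresses."""
    return [str(h) for h in ipaddress.ip_network(subnet).hosts()]

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if the host accepts a TCP connection on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover(subnet: str):
    """Map each responsive host to the open ports found on it."""
    found = {}
    for host in hosts_in(subnet):
        open_ports = [p for p in COMMON_PORTS if probe(host, p)]
        if open_ports:
            found[host] = open_ports
    return found
```

Real products layer fingerprinting (SNMP, WMI, SSH banners) on top of this basic reachability pass to identify what each device actually is.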

The SolarWinds Discovery agent can collect more than 200 data points from Windows and Apple computers and servers, as well as iOS and Android mobile devices. The software integrates with Microsoft System Center Configuration Manager, VMware vCenter and Chrome OS.

The new service integrates with SolarWinds Service Desk, enabling enterprises to focus on risks affecting IT services, as well as to comply with software licensing contracts. The tool can also import data from key configuration management sources, enabling organizations to regularly update all their asset data and make it available within SolarWinds Service Desk, the company said.

The product launched on August 21 and is available only as part of SolarWinds Service Desk. Pricing is per agent, per month, billed annually. A free one-month trial is available on the company’s website.

Williams said IT asset management is part of Gartner’s Software Asset Management, IT Asset Management and IT Financial Management category, which saw $1.24 billion in revenue in 2018, a 23.4% increase over 2017.

He said vendors in this market are challenged to offer features that stand out from what is already available, considering there are more than 100 competitors. He said SolarWinds has the largest presence of any vendor in the related network performance monitoring and diagnostics market and is used as a discovery source by many organizations.

Competitors include ManageEngine, BMC Software and IBM, to cite a few examples. ManageEngine ServiceDesk combines asset management and help desk functionalities in one platform. BMC Helix offers digital and cognitive automation technologies intended to provide efficient service management across any environment. IBM Maximo offers a tool to analyze IoT data from people, sensors and devices to gain asset visibility.


How technology has transformed my working life – Microsoft News Centre Europe

“We’re sorry to announce that the 07:12 Thameslink service to London St. Pancras International has been cancelled. Please stand by for further announcements. We apologise for any inconvenience this may have caused to your journey.”

A few years ago, my working life was very different. For six years, I spent four hours each day crammed across six trains, commuting to work and back. It was financially and emotionally draining, but it was simply the way that things were done.   

It was only two and a half years ago, when I joined Microsoft as editor of its European news centre, that I realised that the traditional way we work, and our accustomed routines, could be different. Today, I have the freedom and flexibility to work from home, with my cat Meze purring away beside me.

I want to share my experiences here, not because I work for Microsoft, but because I’m truly passionate about this new way of working, and am grateful for the hugely positive impact it’s had on my life. This is my future of work.

Building bridges
On my first day, I had some reservations. They say that no man is an island, but in a professional, geographical sense, I come pretty close – I’m the only one in my direct team who lives and works in the UK. Others are scattered across Germany, Ukraine, Turkey, Bulgaria, and even South Africa – not to mention all the other people I work with around the globe, from the USA to Singapore. Bar the occasional business trip, I attend meetings and work with everyone remotely. It was a daunting prospect. I worried about being isolated, and about the quality of work that could be achieved with colleagues who were hundreds of miles away. Would I feel close to them? Would I achieve my best work? Would I make friends?

Three years on, I look back on my first day jitters and realise that they were totally unfounded. Thanks to Teams, I truly feel like I’m working in the same office with my colleagues. A quick question or discussion is a mere chat window away, allowing me to instantly solve problems and give/receive advice – not to mention sending the occasional cat gif or two.

Beyond ad-hoc chats, we use video calls – a prospect which I found daunting, until I actually tried it. There’s something vulnerable, I feel, about putting yourself on camera, and I was worried it would be a distraction. In fact, I’ve found it’s the opposite.

Being able to see the people you’re talking to increases personal connections and engagements. It transforms someone from an ethereal voice to an actual person, and it doesn’t take long for the technology to melt away and become invisible. You’re just a group of people, in a room, having a chat – nothing more, nothing less.

Author: Microsoft News Center

Adobe Experience Platform adds features for data scientists

After almost a year in beta, Adobe has introduced Query Service and Data Science Workspace to the Adobe Experience Platform to enable brands to deliver tailored digital experiences to their customers, with real-time data analytics and understanding of customer behavior.

Powered by Adobe Sensei, the vendor’s AI and machine learning technology, Query Service and Data Science Workspace intend to automate tedious, manual processes and enable real-time data personalization for large organizations.

The Adobe Experience Platform (previously the Adobe Cloud Platform) is an open platform for customer experience management that breaks down customer data silos and synthesizes that data into one unified customer profile.

According to Adobe, the volume of data organizations must manage has exploded. IDC predicted the Global DataSphere will grow from 33 zettabytes in 2018 to 175 zettabytes by 2025. While more data is better in principle, the volume makes it difficult for businesses and analysts to sort, digest and analyze it all to find answers. Query Service intends to simplify this process, according to the vendor.

Query Service enables analysts and data scientists to perform queries across all data sets in the platform instead of manually combing through siloed data sets to find answers for data-related questions. Query Service supports cross-channel and cross-platform queries, including behavioral, point-of-sale and customer relationship management data. Query Service enables users to do the following:

  • run queries manually with interactive jobs or automatically with batch jobs;
  • subgroup records based on time and generate session numbers and page numbers;
  • use tools that support complex joins, nested queries, window functions and time-partitioned queries;
  • break down data to evaluate key customer events; and
  • view and understand how customers flow across all channels.
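As a concrete illustration of the sessionization and window-function capabilities listed above, here is a minimal sketch run against SQLite from Python. Query Service has its own SQL dialect, and the events table and 30-minute session gap here are invented for the example.

```python
# Sessionization sketch: assign session numbers per user, starting a new
# session whenever two events are more than 1800 seconds apart.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id TEXT, ts INTEGER);
INSERT INTO events VALUES
  ('u1', 0), ('u1', 600), ('u1', 5000), ('u2', 100);
""")

rows = conn.execute("""
WITH flagged AS (
  SELECT user_id, ts,
         CASE WHEN ts - LAG(ts) OVER (PARTITION BY user_id ORDER BY ts) > 1800
              THEN 1 ELSE 0 END AS new_session
  FROM events
)
SELECT user_id, ts,
       1 + SUM(new_session) OVER (PARTITION BY user_id ORDER BY ts) AS session_no
FROM flagged
ORDER BY user_id, ts
""").fetchall()
# rows: u1's third event is 4400 s after the second, so it opens session 2.
```

The same LAG-then-running-SUM pattern works in any SQL engine with window-function support.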

While Query Service simplifies the data identification process, Data Science Workspace helps to digest data and enables data scientists to draw insights and take action. Using Adobe Sensei’s AI technology, Data Science Workspace automates repetitive tasks and understands and predicts customer data to provide real-time intelligence.

Also within Data Science Workspace, users can take advantage of tools to develop, train and tune machine learning models to solve business challenges, such as calculating customer predisposition to buy certain products. Data scientists can also develop custom models to pull particular insights and predictions to personalize customer experiences across all touchpoints.
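A minimal sketch of the kind of propensity-to-buy model described above, hand-rolled in plain Python purely for illustration: the features, data and training loop are invented, and in practice such models would be built with the platform's supported libraries rather than by hand.

```python
# Toy logistic-regression propensity model trained by stochastic gradient
# descent. Features: [visits_last_30d, added_to_cart]; label: bought.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Fit logistic-regression weights and bias by gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Estimated probability that this customer buys."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

X = [[1, 0], [2, 0], [5, 1], [6, 1]]  # invented customer feature rows
y = [0, 0, 1, 1]                      # invented purchase labels
w, b = train(X, y)
```

The output probability per customer is what would then drive personalization decisions downstream.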

Additional capabilities of Data Science Workstation enable users to perform the following tasks:

  • explore all data stored in Adobe Experience Platform, as well as machine learning libraries such as Spark ML and TensorFlow;
  • use prebuilt or custom machine learning recipes for common business needs;
  • experiment with recipes to create and train an unlimited number of tracked instances;
  • publish intelligent service recipes to Adobe I/O without involving IT; and
  • continuously evaluate intelligent service accuracy and retrain recipes as needed.

Adobe data analytics features Query Service and Data Science Workspace were first introduced as part of the Adobe Experience Platform in beta in September 2018. Adobe intends these tools to improve how data scientists handle data on the Adobe Experience Platform and to create meaningful models that developers can build on.
