
StorOne attacks bottlenecks with new TRU storage software

Startup StorOne this week officially launched its TRU multiprotocol software, which its founder claims will improve the efficiency of storage systems.

The Israel-based newcomer spent six years developing Total Resource Utilization (TRU) software with the goal of eliminating bottlenecks caused by software that cannot keep up with faster storage media and network connectivity.

StorOne developers collapsed the storage stack into a single layer that is designed to support block (Fibre Channel and iSCSI), file (NFS, SMB and CIFS) and object (Amazon Simple Storage Service) protocols on the same drives. The company claims to support enterprise storage features such as unlimited snapshots per volume, with no adverse impact to performance.

TRU software is designed to run on commodity hardware and support hard disk drives; faster solid-state drives (SSDs); and higher performance, latency-lowering NVMe-based PCI Express SSDs on the same server. The software installs either as a virtual machine or on a physical server.

StorOne CEO and founder Gal Naor said the TRU software-defined storage fits use cases ranging from high-performance databases to low-performance workloads, such as backup and data archiving.

‘Dramatically less resources’

“We need dramatically less resources to achieve better results. Results are the key here,” said Naor, whose experience in storage efficiency goes back to his founding of real-time compression specialist Storwize, which IBM acquired in 2010.

StorOne CTO Raz Gordon said storage software has failed to keep up with the speed of today’s drives and storage networks.

“We understood that the software is the real bottleneck today of storage systems. It’s not the drives. It’s not the connectivity,” said Gordon, who was the leading force behind the Galileo networking technology that Marvell bought in 2001.

The StorOne leaders have so far offered few details about the product’s architecture and enterprise capabilities beyond unlimited storage snapshots.

Marc Staimer, senior analyst at Dragon Slayer Consulting, said StorOne’s competition would include any software-defined storage products that support block and file protocols, hyper-converged systems, and traditional unified storage systems.

“It’s a crowded field, but they’re the only ones attacking the efficiency issue today,” Staimer said.

“Because of TRU’s storage efficiency, it gets more performance out of fewer resources. Less hardware equals lower costs for the storage system, supporting infrastructure, personnel, management, power and cooling, etc.,” Staimer added. “With unlimited budget, I can get unlimited performance. But nobody has unlimited budgets today.”

TRU user interface shows updated performance metrics for IOPS, latency, I/O size and throughput.

Collapsed storage stack

The StorOne executives said they rebuilt the storage software with new algorithms to address bottlenecks. They claim StorOne’s collapsed storage stack enables the fully rated IOPS and throughput of the latest high-performance SSDs at wire speed.

“The bottom line is the efficiency of the system that results in great savings to our customers,” Gordon said. “You end up with much less hardware and much greater performance.”

StorOne claimed a single TRU virtual appliance with four SSDs could deliver the performance of a midrange storage system, and an appliance with four NVMe-based PCIe SSDs could achieve the performance and low latency of a high-end storage system. The StorOne system can scale up to 18 GBps of throughput and 4 million IOPS with servers equipped with NVMe-based SSDs, according to Naor. He said the maximum capacity for the TRU system is 15 PB, but he provided no details on the server or drive hardware.

“It’s the same software that can be high-performance and high-capacity,” Naor said. “You can install it as an all-flash array. You can install it as a hybrid. And you’re getting unlimited snapshots.”

Naor said customers could choose the level of disk redundancy to protect data on a volume basis. Users can mix and match different types of drives, and there are no RAID restrictions, he said.

StorOne pricing

Pricing for the StorOne TRU software is based on physical storage consumption through a subscription license. A performance-focused installation of 150 TB would cost 1 cent per gigabyte, whereas a capacity-oriented deployment of 1 PB would be $0.006 per gigabyte, according to the company. StorOne said pricing could drop to $0.002 per gigabyte for multi-petabyte installations. The TRU software license includes support for all storage protocols and features.
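As a rough check on what those rates imply, the arithmetic is straightforward. The sketch below assumes decimal units (1 TB = 1,000 GB), which the company has not confirmed; the figures are illustrative, not a quote.

```python
# Illustrative arithmetic for StorOne's published subscription rates.
# Assumption: capacity is measured in decimal units (1 TB = 1,000 GB);
# the article does not say how StorOne actually meters capacity.

GB_PER_TB = 1_000

def subscription_cost(capacity_tb, rate_per_gb):
    """Subscription cost for a given physical capacity at a per-GB rate."""
    return capacity_tb * GB_PER_TB * rate_per_gb

# 150 TB performance-focused install at $0.01/GB -> roughly $1,500
performance = subscription_cost(150, 0.01)
print(f"150 TB performance install: ${performance:,.0f}")
```

The same function covers the capacity tiers by swapping in the lower per-gigabyte rate.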

StorOne has an Early Adopters Program in which it supplies free on-site hardware with up to 1 PB of capacity.

StorOne is based in Tel Aviv and also has offices in Dallas, New York and Singapore. Investors include Seagate and venture capital firms Giza and Vaizra. StorOne’s board of directors includes current Microsoft chairman and former Symantec and Virtual Instruments CEO John Thompson, as well as Ed Zander, former Motorola CEO and Sun Microsystems president.

Why device upgrade strategies fail

Ivan Pepelnjak, writing in IP Space, was asked by one of his readers why device software upgrades are still plagued by delays and bugs. In Pepelnjak’s view, the challenge stems from the networking industry’s long commitment to the command-line interface and to routing platforms built atop 30-year-old code.

With device upgrade and software rollouts, engineers are often split between two realities. In one camp, engineers “vote with their wallets” and invest in technology that supports automation, while in the other group, engineers cling to manual configuration and face holdups accommodating hundreds of routers at a time because they lack a gradual rollout for updates. “I never cease to be amazed at how disinterested enterprise networking engineers are about network automation. Looks like they barely entered the denial phase of grief while everyone else is passing them by left and right,” Pepelnjak wrote.

Dig deeper into Pepelnjak’s thoughts on device upgrade strategies and what steps engineers should take to improve them.

Where cybersecurity jobs fall the shortest

Last week, Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., blogged about the global cybersecurity skills shortage. This week, he revisited the topic, identifying the most acute shortfalls, using data compiled by ESG and the Information Systems Security Association. According to Oltsik, the top three areas where expertise is most lacking are security analysis and investigation, application security, and cloud security skills.

Survey respondents also pointed to concerns about their organizations’ gaps in skills such as risk-compliance administration, security engineering and penetration testing. “The overall picture is bleak — many organizations may not have the right skills and resources to adequately secure new business and IT initiatives and may also lack ample skills to detect and respond to incidents in a timely fashion. Therefore, I keep coming back to two words — existential threat,” Oltsik wrote.

Read more of Oltsik’s thoughts on the cybersecurity skills shortage.

Juniper boosts Contrail for telcos

Zeus Kerravala, writing in ZK Research, gave high marks to Juniper Networks’ Contrail Cloud platform aimed at telcos. One plus: the platform’s tight integration with internal and third-party services and applications.

As a result, Contrail Cloud works easily with software from a number of sources, including network functions virtualization assurance through AppFormix; prevalidated virtualized network functions from Affirmed Networks, as well as Juniper’s own vSRX virtual firewall; collaboration with Red Hat; and end-to-end cloud management on behalf of customers.

Kerravala said in order to compete and offer services to enterprise customers, telcos must be able to exploit cloud architectures that support the rapid rollout of new services. “Juniper’s Contrail Cloud offerings takes much of the complexity out of the equation ensuring that telcos can meet the increasing demands of their business customers,” he wrote.

Explore more of Kerravala’s thoughts on Juniper Contrail.

Datos IO RecoverX backup gets table-specific

Datos IO RecoverX software, designed to protect scale-out databases running on public clouds, now allows query-specific recovery and other features to restore data faster.

RecoverX data protection and management software is aimed at application architects, database administrators and development teams. Built for nonrelational databases, it protects and recovers data locally and on software-as-a-service platforms.

Datos IO RecoverX works with scale-out databases, including MongoDB, Amazon DynamoDB, Apache Cassandra, DataStax Enterprise, Google Bigtable, Redis and SQLite. It supports Amazon Web Services, Google Cloud Platform and Oracle Cloud. RecoverX also protects data on premises.

RecoverX provides semantic deduplication for storage space efficiency and enables scalable versioning for flexible backups and point-in-time recovery.

More security, faster recovery in Datos IO RecoverX 2.5

The newly released RecoverX 2.5 gives customers the ability to recover by querying specific tables, columns and rows within databases to speed up the restore process. Datos IO calls this feature “queryable recovery.” The software’s advanced database recovery function also includes granular and incremental recovery by selecting specific points in time.
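To make the idea concrete, the sketch below models query-filtered, point-in-time restore over versioned backups: only rows matching a predicate are pulled from the chosen backup version, instead of mounting the whole database. The `RecoverXBackup` class and its API are purely illustrative, not Datos IO's actual interface.

```python
# Hypothetical sketch of "queryable recovery": restore only the rows that
# match a predicate from the latest backup version at or before a chosen
# point in time. Illustrative only -- not Datos IO's real API.

class RecoverXBackup:
    def __init__(self):
        # versions: list of (timestamp, {table_name: [row dicts]})
        self.versions = []

    def add_version(self, timestamp, tables):
        self.versions.append((timestamp, tables))
        self.versions.sort(key=lambda v: v[0])

    def restore(self, table, as_of, predicate=lambda row: True):
        """Return matching rows from the latest version at or before `as_of`."""
        candidates = [tables for ts, tables in self.versions if ts <= as_of]
        if not candidates:
            return []
        return [r for r in candidates[-1].get(table, []) if predicate(r)]

backup = RecoverXBackup()
backup.add_version(100, {"users": [{"id": 1, "city": "Austin"}]})
backup.add_version(200, {"users": [{"id": 1, "city": "Austin"},
                                   {"id": 2, "city": "Dallas"}]})

# Granular, point-in-time restore: only the Dallas rows as of t=200.
dallas = backup.restore("users", as_of=200,
                        predicate=lambda r: r["city"] == "Dallas")
```

The same `restore` call with an earlier `as_of` value performs the incremental, point-in-time selection the article describes.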

The latest Datos IO RecoverX version also performs streaming recovery for better error-handling. The advanced database recovery capability for MongoDB clusters enables global backup of sharded or partitioned databases. The geographically dispersed shards are backed up in sync to ensure consistent copies in the recovery. Administrators can do local restores of the shards or database partitions to speed recovery.

RecoverX 2.5 also supports Transport Layer Security and Secure Sockets Layer encryption, as well as X.509 certificates, Lightweight Directory Access Protocol authentication and Kerberos authentication.


Dave Russell, distinguished analyst at Gartner, said Datos IO RecoverX 2.5 focuses more on greater control and faster recovery with its advanced recovery features.

“Some of these next-generation databases are extremely large and they are federated. The beautiful thing about databases is they have structure,” Russell said. “Part of what Datos IO does is leverage that structure, so you can pull up the [exact] data you are looking for. Before, you had to back up large databases, and in some cases, you had to mount the entire database to fish out what you want.

“With the granular recovery, you can pick and choose what you are looking for,” he said. “That helps the time to recovery.”

Peter Smails, vice president of marketing and business development at Datos IO, based in San Jose, Calif., said the startup is trying to combine the granularity of traditional backup with the visibility into scale-out databases that traditional backup tools lack.

“With traditional backup, you can restore at the LUN level and the virtual machine level. You can get some granularity,” Smails said. “What you can’t do is have the visibility into the specific construct of the database, such as what is in each row or column. We know the schema.

“Backup is not a new problem,” Smails said. “What we want to do through [our] applications is fundamentally different.”

Using Azure and AI to Explore the JFK Files

This post is by Corom Thompson, Principal Software Engineer at Microsoft.

On November 22nd, 1963, the President of the United States, John F. Kennedy, was assassinated. He was shot by a lone gunman named Lee Harvey Oswald while riding through the streets of Dallas in his motorcade. The assassination has been the subject of so much controversy that, 25 years ago, an act of Congress mandated that all documents related to the assassination be released this year. The first batch of released files has more than 6,000 documents totaling 34,000 pages, and the last drop of files contains at least twice as many documents.

We’re all curious to know what’s inside them, but it would take decades to read through these documents. To gain insights faster, we used Azure Search and Cognitive Services to extract knowledge from this deluge of documents through a continuous process that ingests the raw files and enriches them into structured information you can explore.

Today, at the Microsoft Connect(); 2017 event, we unveiled the demo web site* shown in Figure 1 below. This web application uses the AzSearch.js library and is designed to give you interesting insights into this vast trove of information.

Figure 1 – JFK Files web application for exploring the released files

On the left you can see that the documents are broken down by the entities that were extracted from them. Already we know these documents are related to JFK, the CIA, and the FBI. Leveraging several Cognitive Services, including optical character recognition (OCR), Computer Vision, and custom entity linking, we were able to annotate all the documents to create a searchable tag index.
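The enrichment step boils down to pairing the text recovered by OCR with the entities found by entity linking, then uploading the result to a search index. The sketch below builds such an upload batch; the payload shape follows Azure Search's "index documents" REST API, but the field names (`content`, `entities`) and the document ID are assumptions for illustration, not the demo's actual schema.

```python
# Simplified sketch of the enrichment pipeline's indexing step: combine
# OCR text and linked entities into an Azure Search upload batch.
# Field names and the document ID below are illustrative assumptions.
import json

def build_index_batch(doc_id, ocr_text, entities):
    """Build a single-document upload batch for a search index."""
    return {
        "value": [{
            "@search.action": "upload",   # Azure Search batch action
            "id": doc_id,
            "content": ocr_text,
            "entities": sorted(set(entities)),  # dedupe tags for faceting
        }]
    }

batch = build_index_batch(
    "jfk-doc-0001",                        # hypothetical document ID
    "MEMORANDUM ... LEE HARVEY OSWALD ...",
    ["Lee Harvey Oswald", "CIA", "Lee Harvey Oswald"],
)
payload = json.dumps(batch)  # body that would be POSTed to the index
```

In the real pipeline this batch would be POSTed to the index's `docs/index` endpoint; the entity list is what drives the faceted breakdown shown on the left of the web app.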

We were also able to create a visual map of these linked entities to demonstrate the relationships between the different tags and data. Below, in Figure 2, is the visualization of what happened when we searched this index for “Oswald”.

Figure 2 – Visualization of the entity linked mapping of tags for the search term “Oswald”

Through further investigation, we found that the entity linking Cognitive Service had annotated one of these terms with a connection to Wikipedia, and we quickly realized that the Nosenko identified in the documents was actually a KGB defector interrogated by the CIA, and that the files reference audio tapes of the actual interrogation. It would have taken years to figure out these connections, but we were able to do this in minutes thanks to the power of Azure Search and Cognitive Services.

Another fun fact we learned is that the government was actually using SQL Server and a secured architecture to manage these documents in 1997, as seen in the architecture diagram in Figure 3 below.

Figure 3 – Architecture diagram from 1997 indicating SQL Server was used to manage these documents

We have created an architecture diagram of our own to demonstrate how this new AI-powered approach is orchestrating the data and pulling insights from it – see Figure 4 below.

This is the updated architecture we used to apply the latest and greatest Azure-powered developer tools to create these insightful web apps. Figure 4 displays this architecture in the same style as the 1997 diagram.

Figure 4 – Updated architecture of Azure Search and Cognitive Services

We’ll be making this code available soon, along with tutorials of how we built the solution – stay tuned for more updates and links on this blog.

Meanwhile, you can navigate through the online version of our application* and draw your own insights!


* Try typing a keyword into the Search bar up at the top of the demo site, to get started, e.g. “Oswald”.

Quobyte preps 2.0 Data Center File System software update

Quobyte’s updated Data Center File System software adds volume-mirroring capabilities for disaster recovery, support for Mac and Windows clients, and shared access control lists.

The startup, based in Santa Clara, Calif., this week released the 2.0 version of its distributed POSIX-compliant parallel file system to beta testers and expects to make the updated product generally available in January.

The Quobyte software supports file, block and object storage, and it’s designed to scale out IOPS, throughput and capacity linearly on commodity hardware ranging from four to thousands of servers. Policy-based data placement lets users earmark high-performance workloads to flash drives, including faster new NVMe-based PCIe solid-state drives.
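Policy-based placement of this kind reduces to a rule table mapping a workload tag to a media tier. The sketch below is a minimal illustration of the idea; the tags and tier names are assumptions, since Quobyte's actual policy language is not described in the article.

```python
# Minimal sketch of policy-based data placement: a rule table steers
# high-performance workloads to NVMe flash and bulk data to disk.
# Tags and tier names are illustrative, not Quobyte's real policy syntax.

PLACEMENT_POLICY = {
    "database": "nvme-ssd",   # latency-sensitive workloads on NVMe flash
    "scratch":  "ssd",        # mixed workloads on SATA/SAS SSDs
    "archive":  "hdd",        # cold data on spinning disk
}

def place(workload_tag, default="hdd"):
    """Pick the media tier for a workload according to the policy table."""
    return PLACEMENT_POLICY.get(workload_tag, default)
```

A real system would attach such rules to volumes or directories and re-evaluate them as data ages, but the lookup itself is this simple.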

Software-defined storage startups face challenges

Despite the additions, industry analysts question whether Quobyte has done enough to stand out in a crowded field of file-system vendors.

Marc Staimer, president of Dragon Slayer Consulting, said Quobyte faces significant hurdles against competition ranging from established giants, such as Dell EMC, to startups, including Elastifile, Qumulo, Rozo Systems, StorOne and WekaIO.

Staimer called features such as shared access control lists (ACLs) and volume mirroring table stakes in the software-defined storage market. He said mirroring — a technology that was hot 20 years ago — protects against hardware failures, but doesn’t go far enough for disaster recovery. He said Quobyte must consider adding versioning and time stamping to protect against software corruption, malware, accidental deletion and problems of that nature.

Steven Hill, a senior storage analyst at 451 Research, said it takes more than features to gain share in the enterprise storage market. He said Quobyte would do well to forge closer hardware partnerships to provide better integration, optimization, support and services.

“Even though software-delivered storage appears to be the trend, many storage customers still seem more interested in the fully supported hardware [and] software appliance model, rather than taking a roll-your-own approach to enterprise storage,  especially when there can be so many different production requirements in play at the same time,” Hill wrote in an email.

Quobyte CEO Bjorn Kolbeck and CTO Felix Hupfeld worked in storage engineering at Google before starting Quobyte in 2013. And Kolbeck claimed the “Google-style operations” that the Quobyte architecture enables would allow users to grow the system and run 24/7 without the need for additional manpower.

According to Kolbeck, fault tolerance is the most important enabler for Google-style operations. He said Quobyte achieves fault tolerance through automated replication, erasure coding, disaster recovery and end-to-end checksums that ensure data integrity. With those capabilities, users can fix broken hardware on their own schedules, he said.

“That’s the key to managing very large installations with hundreds of petabytes with a small team,” Kolbeck said.

Quobyte 2.0

Kolbeck said Quobyte made volume mirroring a priority following requests from commercial customers. The software uses continuous asynchronous replication across geographic regions and clouds to facilitate disaster recovery. Kolbeck said customers would be able to replicate the primary site and use erasure coding with remote sites to lower the storage footprint, if they choose.

To expand data sharing across platforms and interfaces, Quobyte 2.0 finalized native drivers for Mac and Windows clients. Its previous version supported Linux, Hadoop and Amazon Simple Storage Service (S3) options for users to read, write and access files.

Kolbeck said shared ACL support lets users read and modify access control lists from all interfaces, now that Mac and Windows ACLs and S3 permissions map to Quobyte’s internal NFSv4 ACLs.

Quobyte also moved to simplify installation and management through the creation of a cloud-based service to assist with domain name system service configuration. Kolbeck said the company “moved as far away from the command line as possible,” and the system now can walk customers through the installation process.

Kolbeck said Quobyte currently has about 25 customers running the software in production. He said the company targets commercial high-performance computing and “all markets that are challenged by data growth,” including financial services, life sciences, electronic design automation and chip design, media and entertainment, and manufacturing and internet of things.

Quobyte’s subscription pricing model, based on usable capacity, will remain unchanged with the 2.0 product release.

Never mind the DevOps maturity model, focus on principles

There are few names in DevOps as big as Gary Gruver. He’s an experienced software executive with a knack for implementing continuous release and deployment pipelines in large organizations. In fact, he literally wrote the book on the subject. His latest, Starting and Scaling DevOps in the Enterprise, is an insightful and easy-to-read guide that breaks down DevOps principles by putting them all in a context enterprises can use to gain alignment on their journey to continuous delivery.

Gruver, president of Gruver Consulting, sat down with DevOpsAgenda to discuss the DevOps maturity model, core DevOps maturity principles, and how small and large organizations must take different paths on their DevOps journey.

What’s your take on the DevOps maturity model? Can that stymie DevOps adoption in large organizations?

Gary Gruver: A lot of people come out with these maturity models and say, ‘We had some success in DevOps, and now everybody has to do what we did.’


And what I find when I go into different organizations, or even looking at different deployment pipelines within organizations, [is] that the things impacting productivity are fundamentally different. You look at a DevOps maturity model and it might claim, ‘You need to have Infrastructure as code, automated deployment, test automation, and this and that.’ I think that overlooks the actual problem each different deployment pipeline might have.

This is about organizational change management, and it’s about getting people to work in different ways. If you don’t start with the changes that are going to benefit people the most, you’re going to lose the momentum in your transformation.

Therefore, I think it’s important to start with DevOps principles so people can pick the changes that’ll make the biggest difference to them, so they will take ownership for implementing the changes into their organization.

How does scale affect success in DevOps?

Gruver: If you’re a small team and you have four or five developers, then DevOps is about just getting people to embrace and take ownership of code all the way out to the customer. It’s making sure the code is meeting the needs of customers and stable in production. Then, it’s responding to that feedback and taking ownership. It’s a lot about helping these developers become generalists and understanding the operations piece of this puzzle.

But if you have a tightly coupled system that requires thousands of people working together, then there aren’t that many people who are going to know the whole system and be able to support it in production. In these situations, someone needs to be responsible for designing how these complex systems come together and continually improve the process. It is going to require more specialists, because it is hard for everyone to understand the complexities of these large systems. The ways you coordinate five people are a lot different than coordinating a thousand.

Simon Wardley's Pioneers, Settlers and Town Planners model
Maturity models aren’t the only models of use. Organizations can turn to the Pioneers, Settlers and Town Planners model to reach DevOps efficiency in the best, most organic way for them.

What are some of the difficulties in applying DevOps practices from small organizations to large ones?

Gruver: What I hear a lot of people in large organizations do with DevOps is they look at what the small teams are doing, and they try to replicate that. They try to reproduce and figure out how to make the small-team strategy work in a tightly coupled system, instead of really looking at the issues blocking them from releasing on a more frequent basis.

They’re not asking, ‘What are the ways we can address this waste and inefficiency and take it out of the system so we can release more frequently?’ They figure if they just do what the small teams are doing and try to replicate that and create a DevOps maturity model, by some magic, they’re going to be successful. Instead of doing that, they should focus on principles to figure out what’s going on in their system.

Large organizations should break it down as small as you possibly can, because smaller things are much easier to solve, maintain and manage. So, if you can break your system down into microservices and make that work, those teams are always going to be more efficient. That said, rearchitecting a large, tightly coupled system can be extremely complex and time-consuming, so it is typically not my first choice.

Additionally, there are a lot of DevOps practices that can be successfully implemented in large, tightly coupled systems. In fact, I would argue that applying DevOps principles in these complex systems will provide much greater benefits to the organization just because the inefficiencies associated with coordinating the work across large groups is so much more pronounced than it is with small teams.

Quorum OnQ solves Amvac Chemical’s recovery problem

Using a mix of data protection software, hardware and cloud services from different vendors, Amvac Chemical Corp. found itself in a cycle of frustration. Backups failed at night, then had to be rerun during the day, and that brought the network to a crawl.

The Los Angeles-based company found its answer with Quorum’s one-stop backup and disaster recovery appliances. Quorum OnQ’s disaster recovery as a service (DRaaS) combines appliances that replicate across sites with cloud services.

The hardware appliances are configured in a hub-and-spoke model with an offsite colocation data center. The appliances perform full replication to the cloud, which backs up data after hours.

“It might be overkill, but it works for us,” said Rainier Laxamana, Amvac’s director of information technology.

Quorum OnQ may be overkill, but Amvac’s previous system underwhelmed. Previously, Amvac’s strategy chained disk backup to early cloud services to tape. But the core problem remained: failed backups. The culprit was Veritas Backup Exec, whose failures the Veritas support team, while still part of Symantec, could not explain. A big part of the Backup Exec problem was application support.

“The challenge was that we had different versions of an operating system,” Laxamana said. “We had legacy versions of Windows servers so they said [the backup application] didn’t work well with other versions.

“We were repeating backups throughout the day and people were complaining [that the network] was slow. We repeated backups because they failed at night. That slowed down the network during the day.”


Quorum OnQ provides local and remote instant recovery for servers, applications and data. The Quorum DRaaS setup combines backup, deduplication, replication, one-click recovery, automated disaster recovery testing and archiving. Quorum claims OnQ is “military-grade” because it was developed for U.S. Naval combat systems and introduced into the commercial market in 2010.

Amvac develops crop protection chemicals for agricultural and commercial purposes. The company has a worldwide workforce of more than 400 employees in eight locations, including a recently opened site in the Netherlands. Quorum OnQ protects six sites, moving data to the main data center. Backups are done during the day on local appliances. After hours, the data is replicated to a DR site and then to another DR site hosted by Quorum.

“After the data is replicated to the DR site, the data is replicated again to our secondary DR site, which is our biggest site,” Laxamana said. “Then the data is replicated to the cloud. So the first DR location is our co-located data center and the secondary DR our largest location. The third is the cloud because we use Quorum’s DRaaS.”

Amvac’s previous data protection configuration included managing eight physical tape libraries.

“It was not fun managing it,” Laxamana said. “And when we had legal discovery, we had to go through 10 years of data. We kept tapes at Iron Mountain, but it became very expensive so we brought it on premises.”

Laxamana said he looked for a better data protection system for two years before finding Quorum. Amvac looked at Commvault but found it too expensive and not user-friendly enough. Laxamana and his team also looked at Unitrends. At the time, Veeam Software only supported virtual machines, and Amvac needed to protect physical servers. Laxamana said Unitrends was the closest that he found to Quorum OnQ.

“The biggest (plus) with Quorum was that the interface was much more user-friendly,” he said. “It’s more integrated. With Unitrends, you need a third party to integrate the Microsoft Exchange.”

Remediation engine to improve Nyansa Voyance network monitoring

Network analytics company Nyansa Inc. has introduced more powerful software that spotlights problems in infrastructure devices and recommends corrective actions to prevent degradation in service.

Nyansa unveiled its “remediation engine” this week as the latest addition to the company’s Voyance performance monitor for wired and wireless networks. The Nyansa Voyance system, launched last year, blends cloud-based analytics and real-time deep packet inspection with an easy-to-understand management console.

The new software — part of a Voyance upgrade — will flag the cause of trouble and recommend configuration changes to correct it. For example, the application could recommend turning off 2.4GHz radios or changing channel assignments to reduce co-channel interference on wireless access points in a specific area.

The remediation engine also calculates the benefits of the corrective action. In the example above, the software would measure the number of lost client hours avoided through the fix.
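The benefit metric reduces to simple arithmetic: clients affected multiplied by the hours the degradation would otherwise have persisted. The formula below is an assumption inferred from the article's example, not Nyansa's published method.

```python
# Illustrative calculation of the "lost client hours avoided" benefit.
# Assumed formula: affected clients x hours of degradation avoided;
# the article does not document Nyansa's exact method.

def client_hours_saved(affected_clients, degradation_hours):
    """Client hours of lost productivity avoided by applying the fix."""
    return affected_clients * degradation_hours

# e.g. fixing co-channel interference hitting 120 clients for 2.5 hours
saved = client_hours_saved(120, 2.5)
print(f"Lost client hours avoided: {saved}")
```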

More data fed to Nyansa Voyance

Nyansa has increased the number of data sources feeding the Voyance system to improve its analytic capabilities. The latest iteration can ingest syslog data from Cisco’s Identity Services Engine, Aruba’s ClearPass and FreeRADIUS, the open source network access server. The three technologies provide secure access to network resources through authentication, authorization and accounting of devices.

Along with more data coming in, Voyance can send more data out. Nyansa has added RESTful APIs for sending network information to an IT workflow application, such as team messaging service Slack or IT service management system ServiceNow. The latter could, for example, generate a trouble ticket and send it to IT when Voyance finds a device configuration problem.
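As an illustration of the outbound side, the sketch below formats a Voyance-style alert for a Slack incoming webhook, which accepts a JSON POST with a `text` field. The webhook URL, device name and alert fields are placeholders; the Voyance API itself is not shown.

```python
# Sketch of pushing a network alert to Slack via an incoming webhook.
# Slack incoming webhooks accept a JSON POST body like {"text": "..."}.
# The device name, problem text and webhook URL are hypothetical.
import json
import urllib.request

def build_alert(device, problem, recommendation):
    """Format an alert message as a Slack webhook payload."""
    text = f":warning: {device}: {problem}. Suggested fix: {recommendation}"
    return {"text": text}

def post_to_slack(webhook_url, alert):
    """POST the alert payload to a Slack incoming webhook (network call)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(alert).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # requires a real webhook URL

alert = build_alert("ap-3f-west", "co-channel interference on 2.4GHz",
                    "disable the 2.4GHz radio")
# post_to_slack("https://hooks.slack.com/services/...", alert)
```

A ServiceNow integration would follow the same pattern, POSTing the alert body to a ticket-creation endpoint instead.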

Being able to reach network managers before there’s an outage enables them to become more proactive in solving problems, said Zeus Kerravala, the principal analyst at ZK Research. “IT can be on top of the problem instead of always in reactive mode.”

Nyansa Voyance recommendations for fixing network performance troubles

To help improve IT response time further, Nyansa has made it possible for Voyance users to tag mission-critical devices attached to an IP network. The devices could include heart or infusion pumps used in healthcare or robots found on the manufacturing assembly line. Voyance would measure and track every network transaction on the machinery and alert IT workers when performance-damaging events occur.

Nyansa is providing the latest features at no additional cost to Voyance customers, which include Netflix, Tesla Motors and Uber.

The company markets Nyansa Voyance as simplifying network monitoring by replacing the multiple tools IT managers use to gauge network health. Enterprise Management Associates Inc., a research firm in Boulder, Colo., has found that today's IT manager typically has six to 10 management tools in use at one time.

Nyansa competitors include NetScout Systems Inc.; Cisco, which offers AppDynamics; and Hewlett Packard Enterprise, which has Aruba IntroSpect.

ScaleArc brings database load balancing to Azure SQL DB

ScaleArc’s recent database load-balancing software release, ScaleArc for SQL Server, is now integrated with Microsoft’s Azure SQL Database.

ScaleArc for SQL Server, released on Sept. 19, was designed to help businesses using Microsoft Azure SQL Database “get data and apps to the cloud faster,” said Justin Barney, president and CEO of ScaleArc, a database load-balancing software company founded in 2009 and based in Santa Clara, Calif. The product became available on the Google Cloud Platform as of Oct. 12, 2017.

This product update takes a look at what the new ScaleArc for SQL Server software does and explores its benefits for businesses and developers, as well as how a Microsoft Data Platform MVP uses it.

What it does

Justin Barney, CEO of ScaleArc

ScaleArc for SQL Server automates the routing of traffic from application servers into a database, according to Barney. The software directs traffic on the application's behalf, letting apps use underlying database functionality without database logic being programmed into the app.

Barney noted that users of Azure SQL and Microsoft Azure cloud database as a service can use ScaleArc to support failover between cloud regions and reduce the latency common in hybrid cloud deployments. For developers and architects, he said, ScaleArc simplifies maintaining continuous availability and performance of apps in cloud deployments, while reducing code changes.

David Klee, founder of Heraflux Technologies

David Klee, a Microsoft Data Platform MVP and founder of Heraflux Technologies, an IT consulting firm in Scarborough, Maine, has found that ScaleArc for SQL Server enables adding high availability as applications are scaled, which he’d found difficult to do before.

“ScaleArc is basically software that you either click through on a cloud and use, or it’s a software appliance that you download and deploy locally,” Klee said. With it, the user abstracts applications from the data tier, avoiding database logic in the application code. “So, when you go to your app, you just point it to the ScaleArc instance instead of the database. That’ll get you up and running.”
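Klee's "just point it to the ScaleArc instance" amounts to swapping one host name in the connection string while everything else, including the application code, stays untouched. The host names below are placeholders, not real endpoints:

```python
def repoint(conn_str: str, new_host: str) -> str:
    """Replace only the Server= entry of a SQL Server-style connection
    string. No other part of the string, and no application logic,
    changes -- which is the point of the abstraction."""
    parts = []
    for part in conn_str.split(";"):
        if part.lower().startswith("server="):
            parts.append(f"Server={new_host}")
        else:
            parts.append(part)
    return ";".join(parts)

direct = "Server=sql-primary.internal,1433;Database=orders;Encrypt=yes"
print(repoint(direct, "scalearc-vip.internal,1433"))
```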

Why it’s cool

With ScaleArc, you can essentially scale out your database workload without making any application changes.
David Klee, founder, Heraflux Technologies

Optimizing application and data workflow performance via the cloud is more efficient and cost-effective than the traditional path of buying more hardware, Klee said. On a recent SQL Server project, a client of his had spent $1.5 million on hardware without fixing a slow database. That left the company weighing options such as rewriting the application and hoping the replacement performed better. Instead, Klee suggested ScaleArc for SQL Server. “With ScaleArc, you can essentially scale out your database workload without making any application changes,” he said.

For software engineers, Barney said, ScaleArc’s database load-balancing tool can eliminate a lot of the coding work that’s needed to utilize modern cloud databases. “So, developers can focus on writing really good code with really good queries and letting the ScaleArc software manage the traffic in and out of the database environment,” he said.

Developers are also using other ScaleArc for SQL Server features, including global connection pooling, connection management and caching. “The ability to cache queries and result sets without having to do it from the application code saves time and costs,” Barney said. An administrator can click on a particular query pattern, and the tool automatically turns it into a cache rule and starts caching matching queries. “That has the ability to significantly boost performance,” he said.
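The cache-rule idea can be sketched as a pattern-keyed result-set cache with a time-to-live. This is a toy model of the technique, not ScaleArc's implementation:

```python
import re
import time

class QueryCache:
    """Toy pattern-based result-set cache: queries matching the rule
    are served from cache for `ttl` seconds; others pass through."""

    def __init__(self, pattern: str, ttl: float):
        self.rule = re.compile(pattern, re.IGNORECASE)
        self.ttl = ttl
        self.store = {}  # sql -> (result, timestamp)

    def get(self, sql, run_query):
        if not self.rule.match(sql):
            return run_query(sql)           # rule doesn't apply: pass through
        hit = self.store.get(sql)
        if hit and time.monotonic() - hit[1] < self.ttl:
            return hit[0]                   # serve the cached result set
        result = run_query(sql)
        self.store[sql] = (result, time.monotonic())
        return result

calls = []
def fake_db(sql):
    calls.append(sql)
    return [("widget", 3)]

cache = QueryCache(r"SELECT\b", ttl=60)
cache.get("SELECT * FROM items", fake_db)
cache.get("SELECT * FROM items", fake_db)   # served from cache
print(len(calls))  # the database only saw one query
```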

ScaleArc’s cool factor for native cloud application development is eliminating the need to program database logic into the application, according to Barney. Enterprise developers can also use the new ScaleArc for SQL Server release with Microsoft Azure Active Directory for user authentication.

Summing up, Klee said ScaleArc let his team scale a database workload outward, something they had not managed with any similar tool. “There’s hardware storage acceleration and some in-memory constructs, but there’s really nothing else out there that has worked for us.”


What it costs

The pricing of ScaleArc for SQL Server varies with database capacity and starts at $40,000 for ScaleArc capacity to front-end 32 SQL Server databases.

Blackbaud and Microsoft to strengthen strategic partnership to digitally transform the nonprofit sector

Social good software leader Blackbaud bets big on Microsoft Azure as the two companies plan to go deeper on integrations, innovation and sector leadership to scale global good

BALTIMORE — Oct. 18, 2017 — As part of bbcon 2017, Blackbaud (Nasdaq: BLKB), the world’s leading cloud software company powering social good, and Microsoft Corp. (Nasdaq: MSFT) plan to expand their partnership in support of their mutual goal of digitally transforming the nonprofit sector.

The nonprofit sector, with approximately 3 million organizations globally, represents the third largest workforce in the United States, behind retail and manufacturing. Blackbaud, the largest vertical cloud software provider in the space, announced its intention to fully power its social good-optimized cloud, Blackbaud SKY™, with Microsoft Azure. The two companies highlighted a three-point commitment to collaboration for the good of the global nonprofit community. The announcement comes just days after Microsoft launched its Tech for Social Impact Group, which is dedicated to accelerating technology adoption and digital transformation within the nonprofit sector to deliver greater impact on the world’s most critical social issues.

“This newly expanded partnership between Microsoft and Blackbaud will allow both companies to better meet the unique technology challenges nonprofits face,” said Justin Spelhaug, general manager of Microsoft Tech for Social Impact. “By combining Microsoft’s cloud platforms and expertise with Blackbaud’s leading industry solutions, we will create new opportunities for digital transformation to empower nonprofits to make an even bigger impact on the world.”

“The nonprofit community plays a vital role in the health of the entire social economy, and we’ve been working for more than three decades to help these inspiring organizations achieve big, bold mission outcomes,” said Mike Gianoni, president and CEO of Blackbaud. “For nearly that long we’ve also been a Microsoft partner, and we’re incredibly enthusiastic about forging new ground together as we tackle some of the most pressing issues nonprofits face. Both companies couldn’t be more committed to this space, so the nonprofit community should expect great things from this expanded partnership.”

The newly expanded partnership between Microsoft and Blackbaud will focus on three key areas:

Deeper integration between Microsoft and Blackbaud solutions, with Blackbaud’s cloud platform for social good, Blackbaud SKY, powered by Microsoft Azure

Blackbaud has been developing on the Microsoft stack for over three decades. As a leading Global ISV Partner, Blackbaud is already one of Microsoft’s top Azure-based providers. Today, Blackbaud announced its intention to fully power Blackbaud SKY™, its high-performance cloud exclusively designed for the social good community, in Microsoft’s Azure environment.

“Blackbaud’s expanded Azure commitment will be one of the most significant partner bets on Microsoft’s hyperscale cloud, and the most significant to transform the social good space,” Spelhaug said. “We often highlight the engineering work behind Blackbaud SKY™, because it demonstrates the power of Microsoft Azure and the kind of forward-looking innovation and leadership that the nonprofit sector greatly needs.”

Details of the investment are not publicly available, but the companies plan to share more about the partnership in the coming months. Blackbaud also announced plans to become a Cloud Solution Provider (CSP) partner for the Microsoft platform, simplifying the purchase, provisioning and management of Blackbaud and Microsoft cloud offerings. For nonprofits that want the security, power and flexibility of the cloud, plus the services and support of a trusted solution provider that deeply understands their unique needs, Blackbaud will be able to deliver both Microsoft and Blackbaud solutions through a unified purchase experience.

A commitment to pursuing best-in-class nonprofit cloud solutions that bring together the best of both companies’ innovation for a performance-enhanced experience for nonprofits — from funding, to mission operations, to program delivery

Blackbaud and Microsoft plan to pursue innovative ways to fully harness the power, security and reliability of Microsoft’s Azure-powered solutions (e.g., Office 365, Dynamics) and Blackbaud’s industry-leading, outcome-focused solutions that cater specifically to the unique workflow and operating model needs of nonprofits — all with the goal of improving nonprofit performance across the entire mission lifecycle.

This includes exploring how both companies’ respective cloud artificial intelligence (AI) and analytics innovations can be leveraged in new ways to drive even greater sector impact.

“There is massive opportunity to empower the nonprofit community through creative tech innovation,” said Kevin McDearis, chief products officer at Blackbaud. “Every 1 percent improvement in fundraising effectiveness makes $2.8 billion available for frontline program work. This is just one example of the type of impact Blackbaud focuses on with our workflows and embedded intelligence, and we couldn’t be more thrilled to team up with Microsoft to push into new areas of innovation that move the sector forward, faster.”
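For context, the $2.8 billion-per-point figure quoted above implies a total fundraising base of roughly $280 billion. This is a back-of-envelope reading of the quote, not a figure from the release:

```python
per_point = 2.8e9              # dollars freed per 1% effectiveness gain
implied_base = per_point * 100  # a 1% slice equals the per-point figure
print(f"${implied_base / 1e9:.0f}B")  # the implied total fundraising base
```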

Joint sector leadership initiatives that make innovation, research and best practices more accessible to nonprofits around the world

Nonprofits are addressing some of the world’s most complicated issues. As shared value companies, Microsoft and Blackbaud share a commitment to helping nonprofits meet those needs. Microsoft is globally known for its unmatched philanthropic reach and impact. And Blackbaud, which exclusively builds software for social good, invests more in R&D and best-practice-driven research for global good than any other technology provider. Both companies were among just 56 companies named to the Fortune 2017 Change the World list.

Together, Microsoft and Blackbaud intend to partner on initiatives that make innovation more accessible for nonprofits large and small, while also exploring ways the companies’ data assets, community outreach and sector leadership can be synergistically and responsibly applied to improve the effectiveness and impact of the entire nonprofit community.

Microsoft and Blackbaud will share further details in the coming months. More information on Microsoft’s Tech for Social Impact Group is available from Microsoft; visit www.Blackbaud.com for more on Blackbaud.

About Blackbaud

Blackbaud (NASDAQ: BLKB) is the world’s leading cloud software company powering social good. Serving the entire social good community—nonprofits, foundations, corporations, education institutions, healthcare institutions and individual change agents—Blackbaud connects and empowers organizations to increase their impact through software, services, expertise, and data intelligence. The Blackbaud portfolio is tailored to the unique needs of vertical markets, with solutions for fundraising and CRM, marketing, advocacy, peer-to-peer fundraising, corporate social responsibility, school management, ticketing, grantmaking, financial management, payment processing, and analytics. Serving the industry for more than three decades, Blackbaud is headquartered in Charleston, South Carolina and has operations in the United States, Australia, Canada and the United Kingdom. For more information, visit www.blackbaud.com.

About Microsoft

Microsoft (Nasdaq “MSFT” @microsoft) is the leading platform and productivity company for the mobile-first, cloud-first world, and its mission is to empower every person and every organization on the planet to achieve more.

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, rrt@we-worldwide.com

Nicole McGougan, Public Relations Manager for Blackbaud, (843) 654-3307, media@blackbaud.com