SwiftStack Inc.’s new 6.0 product release adds Universal Access capabilities to enable customers to read and write files to object storage in private or public clouds without a gateway.
The San Francisco-based software vendor originally gained a following through its commercially supported version of open source OpenStack Swift object storage. But SwiftStack object storage has steadily added capabilities and, with the version 6 release, the startup now refers to its product as “multi-cloud data management” that provides a “cloud-native” single namespace for unstructured data.
SwiftStack object storage has always supported the OpenStack Swift and Amazon S3 APIs. With its 2.0 release, SwiftStack added a gateway so users could put file data into an object storage system via API and take it out via file, or vice versa, noted Mario Blandini, the company’s vice president of marketing.
“The reality is, no one used our file system gateway because what they really wanted is it to be as fast as their NAS and as cool as their NAS but then cheap as in object storage,” Blandini said. “Architecturally, a gateway could not delight our customers.”
Integrated support for SMB/NFS file protocols
SwiftStack’s Universal Access now enables users or applications to access unstructured data from any private or public cloud location through the SMB and NFS file protocols and Amazon S3 and Swift object interfaces. The system can read and write data to a cloud-based single namespace in both formats. For instance, it can ingest data via file and read via object, or vice versa.
“Any workflow comprised of any number of parts works, as long as the file interfaces are SMB or NFS, and the object interfaces are Swift or S3,” Blandini said.
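The dual-access model Blandini describes can be sketched in miniature. This is illustrative Python only; `UnifiedNamespace` and its methods are invented names, not SwiftStack's API. The point is one backing store addressed either by a file-style path or by an object-style bucket and key.

```python
# Illustrative sketch (not SwiftStack's actual API): a single namespace
# whose entries can be written through a file-style interface and read
# back through an object-style interface, or vice versa.

class UnifiedNamespace:
    def __init__(self):
        self._store = {}  # canonical key -> bytes

    # --- file-style interface (think SMB/NFS path) ---
    def write_file(self, path, data):
        self._store[path.strip("/")] = data

    def read_file(self, path):
        return self._store[path.strip("/")]

    # --- object-style interface (think S3/Swift bucket + key) ---
    def put_object(self, bucket, key, data):
        self._store[f"{bucket}/{key}"] = data

    def get_object(self, bucket, key):
        return self._store[f"{bucket}/{key}"]


ns = UnifiedNamespace()
ns.write_file("/media/clip.mp4", b"frames")   # ingest via file
print(ns.get_object("media", "clip.mp4"))     # read the same data via object
```

A real system has to reconcile file semantics (rename, partial writes, locking) with object semantics (immutable puts), which is where the engineering effort lies; the sketch only shows the shared-namespace idea.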
Combining Universal Access with SwiftStack’s previously released Cloud Sync capabilities enables IT managers to control the placement of data in private or public clouds based on policies tailored to specific application workloads and facilitate multiprotocol access to the information. Blandini said the true benefit is being able to “put the right stuff in the right place at the right time without having your users do it — having your IT governance control where the data is placed.”
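Policy-driven placement of the kind Blandini describes can be sketched as a rule list evaluated in order. The rule shapes and target names below are hypothetical, not SwiftStack's Cloud Sync configuration format.

```python
# Hypothetical sketch of policy-driven data placement: IT-defined rules
# decide whether an object lands on a private cluster or in a public cloud.

def placement_target(obj, policies):
    """Return the target of the first matching policy, else a default."""
    for rule in policies:
        if rule["match"](obj):
            return rule["target"]
    return "private-cluster"  # default location

policies = [
    # archive anything untouched for more than 90 days to cheap public cloud
    {"match": lambda o: o["days_since_access"] > 90, "target": "public-archive"},
    # keep active render workloads close to on-premises compute
    {"match": lambda o: o["workload"] == "render", "target": "private-cluster"},
]

print(placement_target({"days_since_access": 200, "workload": "backup"}, policies))
# -> public-archive
```

First match wins here, so rule order encodes priority; a production policy engine would also need to handle re-evaluation as objects age.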
He said the new capabilities would enable SwiftStack, for the first time, to “ask people to please stop thinking of us as an open source company,” and “while you’re at it, if you could try not to label us as an object storage company, that’d be even better, because at the end of the day, no one cares about object storage.”
“When people write to a public cloud, they don’t care that it’s object storage,” Blandini said. “One of the things that’s made object storage elusive for most users is the fact that it’s been made up to be way more complicated than it needs to be. With cloud-first initiatives coming from CIOs and the mandate to provide DR and site recovery for a lot of businesses who can’t afford a second data center, we’re seeing a lot more momentum going to these things because it’s practical to do now.”
George Crump, founder and president of Storage Switzerland LLC, said SwiftStack’s Universal Access provides “some feature uniqueness that nobody else at least at this point has delivered.” But he said that one feature by itself probably won’t push SwiftStack to significant market share.
“They have really good technology. Now it comes down to can they market,” Crump said. “I’d say the jury is out at this point.”
Howard Marks, founder and chief scientist at DeepStorage LLC, said SwiftStack’s pioneering work to have a single system that facilitates access to the same data via file and object APIs means developers won’t have to rewrite file-based applications for object storage paradigms and can write new applications to the S3 object API without having to worry about support for file APIs.
“It certainly opens up a new market” for SwiftStack, Marks said. “Their market before had been people building object storage for cloud-type applications. They open it up to the people who have applications using files now that want to make the transition to object and use that as their transition to a cloud strategy.”
Stiff competition for SwiftStack object storage
Marks noted that SwiftStack object storage faces stiff competition in a busy market populated with well-established vendors, startups and open source options such as Ceph. He said the company is taking the right approach in de-emphasizing its OpenStack Swift roots.
“The general-purpose object market is way bigger than OpenStack, and they don’t want to be ghettoized,” Marks said. “OpenStack is starting to get the smell of failure on it. People are starting to look down on OpenStack.”
Torsten Volk, a senior analyst at Enterprise Management Associates, said SwiftStack version 6 could serve as a complement to traditional NAS. “For latency-sensitive use cases, traditional NAS can stay in place. However, you could use SwiftStack to get more mileage out of existing filers by moving off the less demanding data,” Volk wrote in an email.
Volk said SwiftStack’s software could also be helpful for container users. “Containers notoriously are fighting with data mapping. SwiftStack gives them API access so that you don’t have to worry about Kubernetes storage drivers or plug-ins,” he wrote.
Quobyte’s updated Data Center File System software adds volume-mirroring capabilities for disaster recovery, support for Mac and Windows clients, and shared access control lists.
The startup, based in Santa Clara, Calif., this week released the 2.0 version of its distributed POSIX-compliant parallel file system to beta testers and expects to make the updated product generally available in January.
The Quobyte software supports file, block and object storage, and it’s designed to scale out IOPS, throughput and capacity linearly on commodity hardware ranging from four to thousands of servers. Policy-based data placement lets users earmark high-performance workloads to flash drives, including faster new NVMe-based PCIe solid-state drives.
Software-defined storage startups face challenges
Despite the additions, industry analysts question whether Quobyte has done enough to stand out in a crowded field of file-system vendors.
Marc Staimer, president of Dragon Slayer Consulting, said Quobyte faces significant hurdles against competition ranging from established giants, such as Dell EMC, to startups, including Elastifile, Qumulo, Rozo Systems, StorOne and WekaIO.
Staimer called features such as shared access control lists (ACLs) and volume mirroring table stakes in the software-defined storage market. He said mirroring — a technology that was hot 20 years ago — protects against hardware failures, but doesn’t go far enough for disaster recovery. He said Quobyte must consider adding versioning and time stamping to protect against software corruption, malware, accidental deletion and problems of that nature.
Steven Hill, a senior storage analyst at 451 Research, said it takes more than features to gain share in the enterprise storage market. He said Quobyte would do well to forge closer hardware partnerships to provide better integration, optimization, support and services.
“Even though software-delivered storage appears to be the trend, many storage customers still seem more interested in the fully supported hardware [and] software appliance model, rather than taking a roll-your-own approach to enterprise storage, especially when there can be so many different production requirements in play at the same time,” Hill wrote in an email.
Quobyte CEO Bjorn Kolbeck and CTO Felix Hupfeld worked in storage engineering at Google before starting Quobyte in 2013. Kolbeck claimed the “Google-style operations” the Quobyte architecture enables would let users grow the system and run it 24/7 without additional manpower.
According to Kolbeck, fault tolerance is the most important enabler for Google-style operations. He said Quobyte achieves fault tolerance through automated replication, erasure coding, disaster recovery and end-to-end checksums that ensure data integrity. With those capabilities, users can fix broken hardware on their own schedules, he said.
“That’s the key to managing very large installations with hundreds of petabytes with a small team,” Kolbeck said.
Kolbeck said Quobyte made volume mirroring a priority following requests from commercial customers. The software uses continuous asynchronous replication across geographic regions and clouds to facilitate disaster recovery. Kolbeck said customers would be able to replicate the primary site and, if they choose, use erasure coding at remote sites to lower the storage footprint.
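The footprint savings from erasure coding are easy to quantify with back-of-envelope arithmetic. The numbers below are generic, not Quobyte-specific: holding 1,000 TB of user data under 3-way replication versus an 8+3 erasure code.

```python
# Raw capacity required for a given amount of usable data, under either
# n-way replication or a (k, m) erasure code (k data + m parity shards).

def raw_needed(usable_tb, *, replicas=None, ec=None):
    """Return raw TB needed for a replication factor or an erasure code."""
    if replicas is not None:
        return usable_tb * replicas
    k, m = ec
    return usable_tb * (k + m) / k

print(raw_needed(1000, replicas=3))  # 3000 TB raw (200% overhead)
print(raw_needed(1000, ec=(8, 3)))   # 1375.0 TB raw (37.5% overhead)
```

The trade-off is that erasure-coded reads and rebuilds touch many nodes, which is why it tends to be applied to colder, remote copies rather than the hot primary.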
To expand data sharing across platforms and interfaces, Quobyte 2.0 finalized native drivers for Mac and Windows clients. Its previous version supported Linux, Hadoop and Amazon Simple Storage Service (S3) options for users to read, write and access files.
Kolbeck said adding access control lists will allow users to read and modify them from all interfaces now that Mac and Windows ACLs and S3 permissions map to Quobyte’s internal NFSv4 ACLs.
Quobyte also moved to simplify installation and management through a cloud-based service that assists with Domain Name System (DNS) configuration. Kolbeck said the company “moved as far away from the command line as possible,” and the system can now walk customers through the installation process.
Kolbeck said Quobyte currently has about 25 customers running the software in production. He said the company targets commercial high-performance computing and “all markets that are challenged by data growth,” including financial services, life sciences, electronic design automation and chip design, media and entertainment, and manufacturing and internet of things.
Quobyte’s subscription pricing model, based on usable capacity, will remain unchanged with the 2.0 product release.
The latest version of Data Dynamics’ StorageX file management software adds a single-view analysis portal, support for Amazon S3 API-compliant object storage and expanded application programming interfaces for DevOps environments.
With the StorageX 8.0 release, the Teaneck, N.J., software vendor continues to extend the product’s capabilities beyond the original data migration focus. Data Dynamics StorageX enables users to set policies and move files from one storage system to another without a gateway, file virtualization, global namespace, stubbing, sharding or agents.
Newly added support for Amazon’s Simple Storage Service (S3) API — which has become the de facto standard for object storage — lets customers move files from an NFS or SMB layer to cloud-based object storage, such as Amazon cloud storage, NetApp StorageGrid, IBM Cloud Object Storage and other S3-compliant object stores.
Data Dynamics StorageX stores files natively in the S3 object format, enabling users to attach metadata tags to ease search and management. The native S3 object format also lets applications running in Amazon work directly on data that StorageX 8.0 has migrated. The updated StorageX 8.0 includes a new portal that enables users to retrieve S3-archived data in SMB file format.
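The value of metadata tagging during file-to-object migration can be illustrated with a small sketch. The field names and key-derivation rule here are invented for illustration, not StorageX's actual schema.

```python
# Hypothetical illustration: derive an S3 object key and searchable
# user metadata from a local file's path and stat information, so the
# object remains identifiable and searchable after migration.
import os
import tempfile
import time

def s3_entry_for(path):
    st = os.stat(path)
    key = path.replace("\\", "/").lstrip("/")  # S3 keys carry no leading slash
    metadata = {
        "original-path": path,
        "size-bytes": str(st.st_size),
        "mtime-utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(st.st_mtime)),
    }
    return key, metadata

# demo against a throwaway file
with tempfile.NamedTemporaryFile(delete=False, suffix=".log") as f:
    f.write(b"hello")
key, meta = s3_entry_for(f.name)
print(meta["size-bytes"])  # "5"
```

In S3 itself, such user metadata would travel with the object as x-amz-meta-* headers on upload, which is what makes the archived data queryable without a separate index.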
File analysis capabilities
StorageX 8.0’s new analysis capabilities rely on the product’s universal data engine (UDE) to collect information on a file’s owner, size, type, access, modification and other metrics to help users make more informed decisions on data management and migration. The UDE can run on the same server as StorageX or on virtual machines (VMs) in local data center servers with management rights to access the storage, according to Data Dynamics CEO Piyush Mehta.
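The kind of metadata sweep the UDE performs can be sketched as a tree walk that aggregates file statistics. This is a rough illustration only, not Data Dynamics' engine; it tallies count and bytes per file extension, where a real engine would also record owner, access times and more.

```python
# Illustrative metadata sweep: walk a directory tree and aggregate
# file count and total bytes per extension.
import os
import tempfile
from collections import defaultdict

def profile_tree(root):
    """Aggregate file count and bytes per extension under root."""
    stats = defaultdict(lambda: {"files": 0, "bytes": 0})
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1] or "<none>"
            st = os.stat(full)
            stats[ext]["files"] += 1
            stats[ext]["bytes"] += st.st_size
    return dict(stats)

# demo on a scratch directory
root = tempfile.mkdtemp()
for name, payload in [("a.log", b"xx"), ("b.log", b"yyy"), ("c.csv", b"z")]:
    with open(os.path.join(root, name), "wb") as f:
        f.write(payload)
print(profile_tree(root))
```

Aggregates like these are what let an administrator answer "what do we have, and who owns it?" before committing to a migration policy.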
“What we’ve found through our experience over the last five years is that customers just don’t know what they have,” Mehta said. “If you don’t know what you have, how are you making a business decision on what to do with the data?”
After analyzing the data, StorageX customers can use the product’s VM-based central console to set policies to enable the UDE to manage and move the data out of band. Mehta said the product is most useful in storage environments in excess of 100 TB, and the majority of the company’s 80 customers have multi-petabyte deployments across multiple data centers and geographic regions.
Mehta said all functionality provided through the Data Dynamics StorageX web-based user interface is accessible via API for management from a DevOps standpoint. An application developer can write to the StorageX APIs and gain access to the file management features and automate them based on application needs.
“You can actually integrate an application to manage storage using us as a middleware to then downstream manage the data within those storage tiers,” Mehta said.
Data Dynamics acquired the StorageX technology from Brocade Communications Systems in 2012. The company’s 80 customers span industries such as financial, pharmaceutical, telecommunications, media, oil and gas, and consumer retail, according to Mehta. He said many use StorageX with storage from NetApp, Dell EMC, IBM, Amazon S3 and traditional file servers with direct-attached storage.
Data Dynamics competitors
Mehta said competitors in Data Dynamics’ management space include Avere, Panzura and Primary Data. But he claimed they use gateways, stubbing, sharding or other mechanisms that tie customers to their products to gain access to data.
“If you go with a Primary Data or ioFabric, once you allow something else to start moving your data, generally you need to keep that other [product] running so you can get to your data,” said George Crump, founder and president of Storage Switzerland LLC.
Crump said Data Dynamics StorageX doesn’t do “all the things that would set up a transparent recall,” as some of the other products do. He said StorageX moves data and then essentially gets out of the way.
Scott Sinclair, a senior analyst at Enterprise Strategy Group Inc. in Milford, Mass., said the analysis portal and native S3 object support represent another key shift for Data Dynamics StorageX, as the product expands beyond its file migration roots.
“They’re assisting beyond just the move to give you a better understanding of what type of content you have. This functionality is incredibly important when you’re looking at a hybrid cloud environment,” Sinclair said. “With the cloud, you really want to know what you’re moving because of security, compliance, performance requirements and anything else [to get] a better idea of what stays on premises versus off.”
Pricing for Data Dynamics StorageX 8.0’s new and existing capabilities is as follows:
- $40 per terabyte for analytics;
- $72 per terabyte for replication;
- $100 per terabyte for file archive — SMB or NFS conversion to S3;
- $200 per terabyte for file retrieval — S3 conversion to SMB or NFS;
- $150 per terabyte for application modernization and transform file to S3; and
- $500 per terabyte for file migration, for one-time use, security transformation and file-system restructuring.
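To make the list prices concrete, here is a quick worked example; the deployment sizes are hypothetical, but the per-terabyte rates are the ones quoted above.

```python
# Estimated StorageX cost for a sample estate: analytics and replication
# across 500 TB, plus archiving 100 TB of it from SMB/NFS to S3.
PRICE_PER_TB = {
    "analytics": 40,
    "replication": 72,
    "file_archive": 100,    # SMB/NFS conversion to S3
    "file_retrieval": 200,  # S3 conversion to SMB/NFS
}

order = {"analytics": 500, "replication": 500, "file_archive": 100}
total = sum(PRICE_PER_TB[item] * tb for item, tb in order.items())
print(f"${total:,}")  # $66,000
```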
Moving email and collaboration workloads into the cloud gives attackers another avenue to break in and wreak havoc on the enterprise.
Multifactor authentication (MFA) adds another layer of security to protect organizations against data breaches — even when passwords fall into the wrong hands. Because of the added exposure that comes with using a cloud service, more administrators have established an Office 365 MFA setup to prevent outsiders from gaining access.
Microsoft added support for multifactor authentication to Azure Active Directory (AD) PowerShell, version 1.0, in 2015; however, the feature lacked the ability to connect to other Office 365 services with MFA-enabled accounts. The company updated the module to version 2.0 in 2017 to provide that functionality. Each Office 365 service still requires its own module.
Complications with PowerShell modules
The Office 365 MFA update helps secure access to cloud services, but there is a downside: because every service has its own module, the modules do not share tokens. When an account authorizes against Exchange Online with MFA enabled, the other modules cannot reuse that authorization token, so connecting to another service means repeating the authorization process. Organizations with service-oriented management roles likely will not encounter this issue; for example, Exchange administrators typically connect only to Exchange Online and occasionally to Azure Active Directory.
Additionally, the newer module’s cmdlets used to connect to the Office 365 service with an MFA-enabled account differ from their regular cmdlet counterparts — and the options are inconsistent. This is contrary to version 1.0 of the Azure Active Directory module, which allows the use of the same cmdlet (Connect-MsolService) for both MFA and non-MFA accounts. Version 1.0 also permits the specification of the Credentials parameter in both scenarios, after which the module checks if additional MFA authentication is required.
Here is a short overview of the cmdlets that connect to various Office 365 services with non-MFA-enabled and MFA-enabled accounts:
Skype for Business Online: New-CsOnlineSession -Username $UserID
Azure Active Directory, module version 1.x: Connect-MsolService -Credential $Credential
Azure Active Directory, module version 2.x: Connect-AzureAD -Credential $Credential
As the examples show, it’s not always possible to provide credentials directly to the module. If an administrator omits the Credentials parameter when connecting to Skype for Business Online, for example, the Office 365 MFA authentication process is triggered. With that same module, it can be confusing when a logon fails after the Credential parameter is specified for an MFA-enabled account. The Exchange Online MFA PowerShell module also offers no way to pass additional session options, such as timeout settings or proxy configuration; for the proxy, the module uses the system Internet Explorer configuration.
Another item to be aware of: If the PowerShell session times out, you need to reconnect fully and go through the MFA approval process again. With version 1.0 of the Azure Active Directory module, PowerShell would simply reconnect with the cached credentials when you entered a cmdlet.
Script simplifies the Office 365 MFA connection process
It can be difficult to remember how to connect to different Office 365 services, especially with newer MFA authentication variations. To make it easier, add the following script to the PowerShell profile.
The script detects the installed modules and shares a link to download any that are missing. It also detects which modules support MFA, prompts for credentials and asks whether the module should use MFA when authenticating.
Microsoft enhances other authentication offerings
Microsoft also made other authentication-focused updates. The company continues to improve the Authenticator app, which can authorize MFA requests for non-Microsoft accounts, such as Facebook and WordPress. More recently, Microsoft added a feature called Phone Sign-In, which lets a trusted device approve access to Microsoft accounts. This streamlines the authentication process and enables users to approve or deny authorization requests with their phones.