Tag Archives: open

Return of Bleichenbacher: ROBOT attack means trouble for TLS

A team of security researchers discovered eight leading vendors and open source projects whose implementations of the Transport Layer Security protocol are vulnerable to the Bleichenbacher oracle attack, a well-known flaw that was first described in 1998.

The Bleichenbacher attack has been referenced in all IETF specifications for the Transport Layer Security (TLS) protocol since version 1.0 in 1999, and implementers of TLS versions through 1.2 were warned to take steps to avoid the Bleichenbacher attack. However, the researchers noted that, based on the ease with which they were able to exploit the vulnerability, it appears that many implementers ignored the warnings.

The attack is named after its discoverer, Daniel Bleichenbacher, a Swiss cryptographer who was working for Bell Laboratories in 1998 when his research on the vulnerability was first published. The TLS protocol, which was meant to replace the Secure Sockets Layer, is widely used for encryption and the authentication of web servers.

The research team included Hanno Böck, an information security researcher; Juraj Somorovsky, a research associate at the Horst Görtz Institute for IT Security at the Ruhr-Universität Bochum in Germany; and Craig Young, a computer security researcher with Tripwire’s Vulnerability and Exposures Research Team (VERT). “Perhaps the most surprising fact about our research is that it was very straightforward,” the researchers wrote. “We took a very old and widely known attack and were able to perform it with very minor modifications on current implementations. One might assume that vendors test their TLS stacks for known vulnerabilities. However, as our research shows in the case of Bleichenbacher attacks, several vendors have not done this.”

The researchers said many web hosts are still vulnerable to the ROBOT attack and that nearly a third of the top 100 sites in the Alexa Top 1 Million list are vulnerable. The team identified vulnerable products from F5, Citrix, Radware, Cisco, Erlang, and others, and “demonstrated practical exploitation by signing a message with the private key of facebook.com’s HTTPS certificate.”

The researchers described their work as the “Return Of Bleichenbacher’s Oracle Threat” (ROBOT) and published it in a paper of the same title, as well as on a branded vulnerability website. The team also published a capture the flag contest, posting an encrypted message and challenging the public to decrypt the message using the strategies described in the paper.

TLS protocol designers at fault

The researchers placed the blame for the ease of their exploits squarely on the shoulders of TLS protocol designers. The ROBOT attack is made possible by the behavior of servers implementing TLS using the RSA Public-Key Cryptography Standards (PKCS) #1 v1.5 specification; the issues that enable the Bleichenbacher attack are fixed in later versions of PKCS. TLS 1.3, which is expected to be finalized soon, deprecates the use of PKCS #1 v1.5 and specifies use of PKCS #1 v2.2.
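
PKCS #1 v2.x replaces the v1.5 encryption padding with OAEP (and the v1.5 signature padding with PSS). For contexts outside TLS that still encrypt directly with RSA, a minimal sketch of OAEP using the third-party Python cryptography package might look like the following; the key size and message are illustrative assumptions:

```python
# Illustrative sketch: RSA-OAEP (PKCS #1 v2.x), the replacement for the
# PKCS #1 v1.5 encryption padding that enables Bleichenbacher's attack.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
ciphertext = public_key.encrypt(b"example payload", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"example payload"
```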

The TLS protocol designers absolutely should have been more proactive about replacing PKCS#1 v1.5.
Craig Young, computer security researcher, Tripwire VERT

“The TLS protocol designers absolutely should have been more proactive about replacing PKCS#1 v1.5. There is an unfortunate trend in TLS protocol design to continue using technology after it should have been deprecated,” Young told SearchSecurity by email. He added that vendors also “should have been having their code audited by firms who specialize in breaking cryptography since most software companies do not have in-house expertise for doing so.”

The underlying issue was described in 1999 in RFC 2246, “The TLS Protocol Version 1.0,” the original specification for TLS 1.0: the attack “takes advantage of the fact that by failing in different ways, a TLS server can be coerced into revealing whether a particular message, when decrypted, is properly PKCS #1 formatted or not,” the document states.

The solution proposed in that specification for avoiding “vulnerability to this attack is to treat incorrectly formatted messages in a manner indistinguishable from correctly formatted RSA blocks. Thus, when it receives an incorrectly formatted RSA block, a server should generate a random 48-byte value and proceed using it as the premaster secret. Thus, the server will act identically whether the received RSA block is correctly encoded or not.”
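
To make the countermeasure concrete, the sketch below shows the behavior RFC 2246 describes in plain Python: textbook RSA decryption, a PKCS #1 v1.5 padding check, and a silent fallback to a random 48-byte premaster secret when the check fails, so both paths look the same to a client. It is a simplified illustration, not a TLS implementation; real code must also keep the two paths indistinguishable in timing.

```python
import os

def decrypt_premaster_secret(ciphertext: int, d: int, n: int, k: int) -> bytes:
    """Apply the RFC 2246 countermeasure: never reveal whether PKCS #1 v1.5
    unpadding succeeded. k is the RSA modulus length in bytes."""
    # Textbook RSA: m = c^d mod n, encoded as a k-byte big-endian string.
    em = pow(ciphertext, d, n).to_bytes(k, "big")

    # PKCS #1 v1.5 block type 2: 0x00 0x02 <at least 8 nonzero pad bytes> 0x00 <message>
    ok = em[0] == 0x00 and em[1] == 0x02
    try:
        sep = em.index(b"\x00", 2)       # first zero byte after the header
    except ValueError:
        sep, ok = -1, False
    ok = ok and sep >= 10                # at least 8 bytes of padding
    premaster = em[sep + 1:] if ok else b""
    ok = ok and len(premaster) == 48     # TLS premaster secret is 48 bytes

    if not ok:
        # Do not signal the error: proceed with a random premaster secret so
        # the handshake fails later in a way the attacker cannot distinguish.
        premaster = os.urandom(48)
    return premaster
```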

Potential for attacks, detection and remediation

The researchers noted in the paper that the ROBOT flaw could lead to very serious attacks. “For hosts that are vulnerable and only support RSA encryption key exchanges it’s pretty bad. It means an attacker can passively record traffic and later decrypt it,” the team wrote on the ROBOT website, adding that “For hosts that usually use forward secrecy, but still support a vulnerable RSA encryption key exchange the risk depends on how fast an attacker is able to perform the attack. We believe that a server impersonation or man in the middle attack is possible, but it is more challenging.”

Young said that it might be possible to detect attempts to abuse the Bleichenbacher vulnerability, but it would not be easy. “This attack definitely triggers identifiable traffic patterns. Servers would observe a high volume of failed connections as well as a smaller number of connections with successful handshakes and then little to no data on the connection,” he told SearchSecurity. “Unfortunately, I am unaware of anybody actually doing this. Logging the information needed to detect this can be cumbersome and for a site receiving a billion connections a second, it could be quite difficult to notice 10-100 thousand failed connections.”
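
As a rough sketch of monitoring for the pattern Young describes, a job could count failed versus completed handshakes per client and flag sources whose failures dominate. The event format, threshold and ratio below are assumptions for illustration, not a tested detection rule:

```python
from collections import Counter

def flag_possible_bleichenbacher_probing(handshake_events, threshold=10_000):
    """handshake_events: iterable of (client_ip, succeeded) tuples extracted
    from whatever handshake logging the server provides (hypothetical format).
    Returns client IPs with an unusually large volume of failed handshakes."""
    failures, successes = Counter(), Counter()
    for client_ip, succeeded in handshake_events:
        (successes if succeeded else failures)[client_ip] += 1

    # Flag clients whose failures exceed the threshold and dwarf their successes.
    return [
        ip for ip, fail_count in failures.items()
        if fail_count >= threshold and fail_count > 10 * successes[ip]
    ]
```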

As for other, ongoing risks, Young noted that “PKCS#1 v1.5 is not being used in TLS 1.3 but it is still used in other systems like XML encryption. Whether or not it can be disabled through configuration is highly application specific.”

CORD project updates platform to support edge computing

The Open Networking Foundation this week upgraded the Central Office Re-architected as a Datacenter (CORD) architecture to merge the three primary subscriber packages into one platform.

The 4.1 release combines the residential, mobile and enterprise packages, formerly available as individual options, on a common platform to streamline the building process, according to Timon Sloane, vice president of marketing and ecosystem at the Open Networking Foundation, which hosts the CORD project.

Users pick the type of profile they require — residential, mobile or enterprise — and the platform takes it from there.

“You click a box and everything else is automated; it just flows right through — it builds, deploys and boots, and the whole data center comes up and starts running,” Sloane said.

The platform also comes with a library of 25 virtual network functions (VNFs) and the needed management and orchestration. A short list includes virtual evolved packet core, virtual subscriber gateway and virtual network as a service. The mobile 5G and residential XGS-PON VNFs — the latter a new fiber transmission technology — are some of the more popular ones among subscribers, he said, reflecting the need to support edge computing — be it the cloud edge or the mobile edge.

“There are a lot of mobile [VNFs] since it’s a complicated space,” he said. “[There are] a lot of different pieces in connecting subscribers to mobile core and authenticating them all.”

CORD 4.1 also supports third-party VNFs, recognizing the need for some services to stay proprietary. As such, Sloane said CORD provides the open infrastructure that supports those proprietary options, a benefit for edge computing and 5G deployments. He attributed part of CORD’s momentum to the open source community, stating it can move more quickly than traditional standards bodies.

CORD has also started migrating VNFs that run on servers to a software-defined switch fabric that connects the CORD data center, he said.

“Obviously, infrastructure makes sense running on a server, but for the bulk of individual packets, you want them to flow through the switch fabric,” he said, touting the benefits of increased space and traffic speeds and lowered costs.

Sloane said the CORD project will focus on technologies like augmented and virtual reality, the internet of things and autonomous vehicles for its next release.

Juniper plans to move OpenContrail governance to The Linux Foundation

Juniper Networks this week announced it will be sharing its OpenContrail codebase with The Linux Foundation.

In 2013, Juniper open sourced its Contrail products, creating an open source community called OpenContrail. Since then, OpenContrail has been used as a network virtualization platform for cloud environments.

This week’s move will bring the network virtualization control plane under The Linux Foundation’s governance and development umbrella. The goal is to persuade more cloud providers and operators to consider using OpenContrail to anchor their networks, with hopes to further integrate OpenContrail into cloud ecosystems.

“Over the past year, we have been working closely with the community to transition the governance for OpenContrail’s codebase because we believe it has the unique opportunity to be a ubiquitous cloud-grade network fabric used everywhere,” said Randy Bias, Juniper’s vice president of technology for cloud software, in a company statement.

Masergy updates managed SD-WAN Go

Masergy added support for application performance and security to its SD-WAN Go offering.

The provider said the service — which uses technology from Silver Peak — now features more sophisticated application routing and automatic path control. For security, Masergy added an embedded firewall and router.

“Managed SD-WAN Go now gives businesses of any size additional enterprise-grade capabilities at a fraction of the cost of comparable solutions,” a company statement said. Masergy unveiled SD-WAN Go earlier this year, targeting the service to both small and large enterprises.

Masergy also offers SD-WAN Pro, tailored to enterprises with more complex networks.

Microsoft Azure preview with Azure Availability Zones now open in France

The preview of Microsoft Azure in France is open today to all customers, partners and ISVs worldwide, giving them the opportunity to deploy services and test workloads in these latest Azure regions. This is an important step towards offering the Azure cloud platform from our datacenters in France.

The new Azure regions in France are part of our announced global portfolio of 42 regions, which offers the scale needed to bring applications closer to users and customers around the world. We continue to prioritize geographic expansion of Azure to enable higher performance and availability, meet local regulatory requirements, and support customer preferences regarding data location. The new regions will offer the same enterprise-grade reliability and performance as our globally available services, combined with data residency, to support the digital transformation of businesses and organizations in France.

The new France Central region offers Azure Availability Zones, which provide comprehensive native business continuity solutions and the highest availability in the industry, with a 99.99% virtual machine uptime SLA when generally available. Availability Zones are fault-isolated locations within an Azure region, providing redundant power, cooling, and networking for higher availability, increased resiliency and business continuity. Starting with the preview, customers can architect highly available applications and increase their resiliency to datacenter-level failures by deploying IaaS resources across Availability Zones in France Central. Availability Zones in France Central can be paired with the geographically separated France South region for regional disaster recovery while maintaining data residency requirements.
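
A back-of-the-envelope way to see why fault-isolated zones raise availability: if zone outages were independent, a workload replicated across several zones is only unavailable when every zone it occupies is down at the same time. The figures in the sketch below are hypothetical and are not Azure SLA numbers:

```python
def all_zones_down_probability(per_zone_downtime: float, zones: int) -> float:
    """Probability that every zone hosting a replica is down at once, assuming
    independent zone failures. Purely illustrative; real SLAs are contractual
    commitments, not figures derived this way."""
    return per_zone_downtime ** zones

# Hypothetical example: replicas in 3 zones, each down 0.1% of the time.
print(all_zones_down_probability(0.001, 3))  # 1e-09, far below any single zone
```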

You can follow these links to sign up for the Azure Preview in France, learn more about the Microsoft Cloud in France, or learn more about Azure Availability Zones.

For Sale – Open to offers – Lots of parts – i7 860, 8gb ddr3, Gigabyte 7970, XFX 5850, OCZ 600W, + more

EDIT: I’m open to offers

Hi there, I have quite a few items for sale. All the parts are in good working condition, I look after my stuff. Most items will have the original box and all / most additional stuff inside. There is no warranty left for the items. Photos for all items are included. Please bear in mind that the parts are all used (apart from a few items), and although I have cleaned them as well as I could, there could still be some dust left.

I would like to sell as many parts together as possible, so at the moment I want to see if there are any bulk offers.

Delivery is not included in the prices. I need to see how big and heavy the parcel will be to try to price it. Delivery costs are to be confirmed.

During working hours I might be slow to reply, please bear that in mind. Also dispatching can take a day or two as I need to organise boxes to ship items in.

I’m after a quick and painless sale, please no new members. I would rather deal with regulars here.

Reasonable offers welcome.

Let’s start with the bulk picture.

Items for sale:
2. Asus P7P55D Motherboard – £80
3. OCZ DDR3 PC3 10666 1333MHz Platinum Series RAM (4 x 2GB) – £80

9. Cooler Master HAF 922 PC Case – £50
10. LG DVD-RW 22x SATA – £15
11. Soundblaster X-Fi Xtreme Gamer Sound Card – £30
13. Sumvision Modem – £10
16. Logitech Mouse M115 – £10
17. Logitech Ultra-flat Keyboard – £10
18. Cold Cathode PC Interior Light – £10

ITEMS SOLD:
4. Gigabyte HD7970 3GB OC Graphics Card – SOLD
6. Thermalright Venomous X CPU Cooler Heatsink – SOLD
7. Akasa Apache Black 120mm FAN (2 items) x 2 – SOLD
8. OCZ 600W MOD Xstream PRO PSU – SOLD
14. HAMA WLAN PCI Card – SOLD
5. XFX Radeon HD5850 XXX Edition 1GB Graphics Card – SOLD
15. Tech Air Laptop Backpack – SOLD

1. Intel i7 860 2.8GHz Quad Core CPU – SOLD
12. ASUS Sica Gaming Mouse White – SOLD


Pictures to follow

1. Intel i7 860 2.8GHz Quad Core CPU – SOLD
In retail box, original Intel cooler included. Fully working. Great little chip.

2. Asus P7P55D Motherboard – £80
Fully working. In retail box with all the accessories.

more to follow in the next posts

Price and currency: £650
Delivery: Delivery cost is not included
Payment method: Bank Transfer
Location: West Lothian
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference

New ONAP architecture provides network automation platform

Eight months after its inception, the Open Network Automation Platform (ONAP) project released its first code, dubbed Amsterdam. The ONAP architecture is a combination of two former open source projects — AT&T’s Enhanced Control, Orchestration, Management and Policy and the Open-Orchestrator project.

ONAP’s November release targets carriers and service providers by creating a platform for network automation. It includes two blueprints: one for Voice over Long Term Evolution (VoLTE) and one for virtual customer premises equipment. Additionally, Amsterdam focuses on automating service lifecycle management for virtual network functions (VNFs), said Arpit Joshipura, general manager of networking and orchestration at The Linux Foundation, which hosts the ONAP project.

The ONAP architecture includes three main components: design time, run time and the managed environment. Users can package VNFs according to their individual requirements, but Amsterdam also offers a VNF software development kit (SDK) to incorporate third-party VNFs, Joshipura said.

Once services are live, the code — a combination of existing Enhanced Control, Orchestration, Management and Policy, or ECOMP, and Open-O with new software — can manage physical and virtual network functions, hypervisors, operating systems and cloud environments. The ONAP architecture integrates with existing operational and billing support systems through an external API framework.

VNF automation is a key component, Joshipura said.

“The network is constantly collecting data, analytics, events, security, scalability — all the things relevant to closed-loop automation — and then it feeds it [the data] back to the service lifecycle management,” he said. “If a VNF needs more VMs [virtual machines] or more memory or a change in priority or quality of service, all that is automatically done — no human touch required.”
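
The closed loop Joshipura describes can be pictured as a simple policy cycle: collect telemetry, compare it against policy thresholds, and trigger remediation with no human in the loop. The sketch below is a generic illustration; the metric names, thresholds and scale_out callback are hypothetical and are not ONAP interfaces:

```python
def closed_loop_policy(vnf_metrics: dict, thresholds: dict, scale_out) -> None:
    """Generic closed-loop automation sketch: compare collected telemetry with
    policy thresholds and request remediation automatically (hypothetical data
    shapes, not ONAP's actual APIs)."""
    for vnf_id, usage in vnf_metrics.items():
        if usage["cpu"] > thresholds["cpu"] or usage["memory"] > thresholds["memory"]:
            # The policy engine decides the VNF needs more capacity and asks
            # the orchestrator for it, with no human touch required.
            scale_out(vnf_id, extra_vms=1)

# Hypothetical usage:
closed_loop_policy(
    {"vFirewall-1": {"cpu": 0.92, "memory": 0.40}},
    {"cpu": 0.85, "memory": 0.90},
    scale_out=lambda vnf_id, extra_vms: print(f"scale {vnf_id} out by {extra_vms} VM(s)"),
)
```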

Because ONAP is a collection of individual open source projects, some industry observers and potential users expressed doubts about how easy it would be to put Amsterdam to use — particularly since AT&T was originally the main ECOMP contributor. But Joshipura said ONAP reworked the code to reduce the complexity and make Amsterdam usable for the majority of users, not just specific contributors.

“Originally, yes, it was complex because it was a set of two monolithic codes. One was Open-O and the other was ECOMP,” he said. “Then, what we did was we decoupled and modularized it and we removed all the open source components. We refactored a lot of code when we added new code.”

The result is a modular platform — not a product, he said — that has many parts doing several different things. This modularity means carriers and service providers can pick and choose from the Amsterdam code or use the platform as a whole.

ONAP’s next release — Beijing, expected in 2018 — will focus on support for enterprise workloads, including 5G and internet of things (IoT).

MEF releases 3.0 framework aimed at automation, orchestration

MEF has released a new framework governing how service providers deploy network automation and orchestration.

MEF 3.0 Transformational Global Services Framework is the latest effort by the organization to move beyond its carrier Ethernet roots. MEF is shifting its focus toward creating a foundation that service provider members can use as they move toward cloud-native environments.

MEF 3.0 is developed around four main components: standardized and orchestrated services, open lifecycle service orchestration (LSO) APIs, certification programs and community collaboration.

With the new framework, MEF is defining network services, like wavelength, IP and security, to help service providers move to cloud environments and network automation, according to Pascal Menezes, CTO at MEF, based in Los Angeles.

“A service is defined like a language that everybody can understand, whether it be a subscriber ordering it or a provider implementing it. They all agree on that language,” he said. “But how they actually implement it and what technology they use is independent and was never really defined in any specs. It defines SLA objectives, performance objectives and different classes of performances, but it doesn’t tell you how to implement.”
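
One way to picture such a technology-independent service definition is as a small structured spec that states what the service promises (its SLA and performance objectives and class of service) without saying how any provider implements it. The fields below are hypothetical and are not an actual MEF schema:

```python
from dataclasses import dataclass

@dataclass
class ServiceDefinition:
    """Hypothetical, technology-agnostic service spec: it captures what is
    promised, not how it is implemented."""
    name: str                       # e.g. "wavelength-gold"
    class_of_service: str           # e.g. "gold"
    availability_objective: float   # e.g. 0.9999
    max_latency_ms: float
    max_jitter_ms: float
    max_frame_loss_ratio: float

# A subscriber ordering the service and a provider implementing it can both
# reference the same definition, even if the underlying technology differs.
gold_wavelength = ServiceDefinition("wavelength-gold", "gold", 0.9999, 10.0, 1.0, 0.0001)
```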

MEF has previously worked on orchestrating connectivity services, like wavelength and IP, and intends to deliver that work early next year, Menezes said. MEF has also started developing SD-WAN orchestration standards, citing SD-WAN’s role as a bridge between connectivity layer services and other services, like security as a service and application performance, he added.

These services are automated and orchestrated via MEF LSO APIs. MEF released two LSO APIs earlier this year and will continue to develop more within MEF’s LSO reference orchestration framework. The certification programs will correlate with upcoming releases and are subscription-based, he said.

The fourth MEF 3.0 component involves what MEF calls community collaboration. This involves open source contributions, an enterprise advisory council, hackathons and partnerships with other industry groups. MEF and ONAP, for example, announced earlier this year they are working together to standardize automation and orchestration for service provider networks.

In a separate announcement this week, MEF said it plans to combine its efforts to determine how cloud applications connect with networks with the International Multimedia Telecommunications Consortium’s (IMTC) work on defining application performance. According to Menezes, MEF will integrate existing IMTC work into its new applications committee and will take over any active projects as part of the MEF 3.0 framework.

“IMTC has been focused on real-time media application performance and interoperability. It made a lot of sense to bring that work into MEF,” Menezes said.

Join us for Skype-a-Thon: Microsoft’s global event on Nov. 28-29 aiming to unite nearly half a million students

Our annual Skype-a-Thon is here again, connecting thousands of classrooms to help open hearts and open minds. In our increasingly complex world this could not be more important. The students of today represent our hope for a better tomorrow. They are building our future.

This annual event is a celebration of the power of connecting students to each other globally, and an opportunity to teach greater empathy and compassion for our planet and for each other. Through sharing stories and projects, playing games, and collaborating on similar subjects, students’ hearts and minds are opened, allowing them to become more engaged global citizens.

Skype-a-Thon is a 48-hour event in which we, as a global community, count the distance all students travel virtually during any Skype calls made from November 28th through November 29th.  Last year, thousands of classrooms participated across all seven continents. This year we’re setting a goal for our global community to travel over 10 million virtual miles, and aiming to connect nearly half a million students.

As someone fortunate enough to visit classrooms around the world, I’m always heartened to see today’s students learning about global citizenship. One of the best ways they are gaining this knowledge is by using video communications technologies, like Skype. Classrooms are opening their walls to connect with different cultures and environments that can offer different perspectives, and model compassion for each other, the environment, and the health and welfare of students and their neighbors, near and far.

The impact of these experiences is best realized through the voices of the students participating.

“The world seems really large, and it would take a long time to go all way around it. But with things like Skype it seems so small.” Quentin, 8th Grade

Classrooms also use Skype to connect virtually with guest speakers, in fields of study where local experts are not available. They also take virtual field trips to visit landmarks and places of interest.

We created the Microsoft Global Skype-a-Thon to shine a light on the value of these virtual experiences.  It’s been exciting to see thousands of teachers and students participate to celebrate this kind of teaching, and to learn about empathy and appreciation of others and our world.

How do you join and have your students Skype with other classrooms and professionals?

There are many ways to get started and engage with this year’s Skype-a-Thon.  You can Skype with a classroom in another country, play Mystery Skype or share traditions and stories, take a virtual field trip to a place of interest, or get great advice from an expert during a Skype call with a guest speaker.

Visit skypeathon.com to plan your experiences and decide how many virtual miles your classroom can travel over the 48-hour celebration of global learning!


New to Skype in the Classroom? That’s ok! Here’s how to get started:

  1. Sign up on the Skype-a-Thon page: skypeathon.com
  2. Once you are signed up on the Microsoft Educator Community, you can schedule a Skype call for November 28-29. Here you’ll be able to find a Skype activity in which to participate with your class. It could be any of the following Skype Classroom activities:
  • Skype with another class
  • Skype with a Microsoftie
  • Skype with a guest speaker
  3. Once you have scheduled your Skype calls for your students, share your plans, goals and virtual destinations. During the event, please share your experience and miles traveled with us on social networks using #skypeathon and #MicrosoftEDU.

For even more ideas, check out this Sway with even more information on all the ways you can participate in this year’s Skype-a-Thon:

 

Plus: Don’t miss the TweetMeet on Nov. 21, 10:00AM PST, in which you can talk to other educators about how they’re prepping for Skype-a-Thon.

Please join us on Nov. 28 to make this Skype-a-Thon another exciting journey through open hearts and open minds. To learn more, visit skypeathon.com today.

Join the conversation on Twitter, Facebook and Instagram using:

#skypeathon

#MicrosoftEDU

For Sale – MSI GT72VR 6RE

Lol, sent you a conversation without realising you have a sale thread open already.

Ready to offer, as my daughter is at uni in Aston so it’s a great reason to go and visit Costco with her. The laptop is her Xmas pressy, and a reward for getting AAB in her A-levels.
Where are we on £1,000 cash, collected Saturday, with a big smile from a proud dad? Your own daughter will be there soon by the looks of your avatar.
Best regards

Mark V

Microsoft Azure cloud database activity takes off at Connect();

Microsoft plunged deeper into the open source milieu last week, as it expanded support for non-Microsoft software in its Azure cloud database lineup.

Among a host of developer-oriented updates discussed at the Microsoft Connect(); 2017 conference were new ties to the Apache Spark processing engine and Apache Cassandra, one of the top NoSQL databases. The company also added the MariaDB database to open source relational database services available on Azure that already include MySQL and PostgreSQL.

Taken together, the moves are part of an ongoing effort to fill in Microsoft’s cloud data management portfolio on the Azure platform, and to keep up with cloud computing market leader Amazon Web Services (AWS).

A database named MariaDB

Azure cloud database inclusion of MariaDB shows Microsoft’s “deep commitment to supporting data stores that might not necessarily be from Microsoft,” said consultant Ike Ellis, a Microsoft MVP and a partner at independent development house Crafting Bytes in San Diego, Calif.

Ali Ghodsi, CEO of Databricks, speaks at last week’s Microsoft Connect conference. Microsoft and Databricks have announced Azure Databricks, new services to expand the use of Spark on Azure.

Such support is important because MariaDB has gained attention in recent years, very much as an alternative to MySQL, which was the original poster child for open source relational databases.

MariaDB is a fork of MySQL, with development overseen primarily by Michael “Monty” Widenius, the MySQL creator who was vocally critical of Oracle’s stewardship of MySQL once it became a part of that company’s database lineup. In recent years, under the direction of Widenius and others, MariaDB has added thread pooling, parallel replication and various query optimizations. Widenius appeared via video at the Connect(); event, which took place in New York and was streamed online, to welcome Microsoft into the MariaDB fold.

Microsoft said it was readying a controlled beta of Azure Database for MariaDB. The company also said it was joining the MariaDB Foundation, the group that formally directs the database’s development.

“MariaDB has a lot of traction,” Ellis said. “Microsoft jumping into MariaDB is going to help its traction even more.”
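
Part of why a managed MariaDB option slots in so easily is that MariaDB keeps wire-protocol compatibility with MySQL, so existing MySQL client libraries generally work unchanged. A minimal sketch with the PyMySQL package, using hypothetical host and credentials (Azure’s managed services additionally require TLS settings, omitted here):

```python
import pymysql

# Hypothetical connection details; a MariaDB server speaks the MySQL wire
# protocol, so a standard MySQL client library is all that is needed.
conn = pymysql.connect(
    host="my-mariadb-server.example.net",
    user="myadmin",
    password="example-password",
    database="appdb",
)
try:
    with conn.cursor() as cur:
        cur.execute("SELECT VERSION()")
        print(cur.fetchone())  # e.g. ('10.x.y-MariaDB',)
finally:
    conn.close()
```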

Cassandra on the cloud

While MariaDB support expands SQL-style data development for Azure, newly announced Cassandra support broadens the NoSQL part of the Azure portfolio, which already included a Gremlin graph database API and a MongoDB API.

In the cloud world, you aren’t selling software; you are selling services.
David Chappell, independent consultant

Unlike MongoDB, which is document-oriented, Apache Cassandra is a wide-column store.

Like MongoDB, Cassandra has found considerable use in web and cloud data operations that must quickly shuttle fast arriving data for processing.

Now in preview, Microsoft’s Cassandra API works with Azure Cosmos DB. This is a Swiss army knife-style database — sometimes described as a multimodel database — that the company spawned earlier this year from an offering known as DocumentDB. The Cassandra update fills in an important part of the Azure cloud database picture, according to Ellis.

“With the Cassandra API, Microsoft has hit everything you would want to hit in NoSQL stores,” he said.
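
The practical appeal of the API is wire-protocol compatibility: an application that already speaks CQL through a standard Cassandra driver can be pointed at a Cosmos DB endpoint instead. A minimal sketch with the open source DataStax Python driver, using a hypothetical account name and omitting the TLS and authentication settings Cosmos DB requires:

```python
from cassandra.cluster import Cluster

# Hypothetical endpoint; Cosmos DB's Cassandra API also requires TLS and
# password authentication, which are omitted here for brevity.
cluster = Cluster(["my-account.cassandra.cosmos.azure.com"], port=10350)
session = cluster.connect()

session.execute(
    "CREATE KEYSPACE IF NOT EXISTS demo "
    "WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}"
)
session.execute(
    "CREATE TABLE IF NOT EXISTS demo.events (id uuid PRIMARY KEY, payload text)"
)
session.execute("INSERT INTO demo.events (id, payload) VALUES (uuid(), 'hello')")
```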

Self-service Spark

Microsoft’s latest Spark move sees it working with Databricks, the startup formed by members of the original team that conceived the Spark data processing framework in the University of California, Berkeley’s computer science labs.

These new Spark services stand as an alternative to Apache Spark software already offered as part of Microsoft’s HDInsight product line, which was created together with Hadoop distribution provider Hortonworks.

Known as Azure Databricks, the new services were jointly developed by Databricks and Microsoft and are being offered by Microsoft as a “first-party Azure service,” according to Ali Ghodsi, CEO of San Francisco-based Databricks. Central to the offering is native integration with Azure SQL Data Warehouse, Azure Storage, Azure Cosmos DB and Power BI, he said.

Azure Databricks joins a host of recent cloud-based services appearing across a variety of clouds, mostly intended to simplify self-service big data analytics and machine learning over both structured and unstructured data.

Ghodsi said Databricks’ Spark software has found use in credit card companies doing real-time fraud analytics and in life sciences firms combining large data sets, IoT and other applications.
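
As a flavor of that kind of workload, the PySpark sketch below groups card transactions into short time windows and flags unusually dense or large activity. The dataset path, column names and thresholds are hypothetical, and nothing in it is specific to Azure Databricks beyond running on a Spark cluster:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fraud-sketch").getOrCreate()

# Hypothetical transaction dataset with card_id, timestamp and amount columns.
txns = spark.read.parquet("/data/card_transactions")

suspicious = (
    txns.groupBy("card_id", F.window("timestamp", "10 minutes"))
        .agg(F.count("*").alias("txn_count"), F.sum("amount").alias("total"))
        .filter((F.col("txn_count") > 20) | (F.col("total") > 10000))
)
suspicious.show()
```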

Taking machine learning mainstream

The Microsoft-Databricks deal described at Connect(); is part of a continuing effort to broaden Azure’s use for machine learning and analytics. Earlier, at its Microsoft Ignite 2017 event, the company showed an Azure Machine Learning Workbench, an Azure Machine Learning Experimentation Service and an Azure Machine Learning Model Management service.

Observers generally cede overall cloud leadership to AWS, but cloud-based machine learning has become a more hotly contested area. It is a place where Microsoft may have passed Amazon, according to David Chappell, principal at Chappell and Associates in San Francisco, Calif.

“AWS has a simple environment that is for use by developers. But it is so simple that it is quite constrained,” he said. “It gives you few options.”

The audience for Microsoft’s Azure machine learning efforts, Chappell maintained, will be broader. It spans developers, data scientists and others. “Microsoft is really trying to take machine learning mainstream,” he said.

Economics in the cloud

Microsoft’s broadened open source support is led by this year’s launch of SQL Server on Linux. But that is only part of Microsoft’s newfound open source fervor.

“Some people are skeptical of Microsoft and its commitment to open source, that it is like lip service,” Chappell said. “What they don’t always understand is that cloud computing and its business models change the economics of open source software.

“In the cloud world, you aren’t selling software; you are selling services,” Chappell continued. “Whether it is open source or not, whether it is MariaDB, MySQL or SQL Server — that doesn’t matter, because you are charging customers based on usage of services.”

Azure data services updates are not necessarily based on any newfound altruism or open source evangelism, Chappell cautioned. It’s just, he said, the way things are done in the cloud.