
Confluent Platform 5.0 aims to mainstream Kafka streaming

The Confluent Platform continues to expand on capabilities useful for Kafka-based data streaming, with additions that are part of a 5.0 release now available from Confluent Inc.

The creation of former LinkedIn data engineers who helped build the Kafka messaging framework, the Confluent Platform aims to make real-time big data analytics accessible to a wider community.

Part of that effort takes the form of KSQL, which is meant to bring easier SQL-style queries to analytics on Kafka data. KSQL is a Kafka-savvy SQL query engine and language Confluent created in 2017 to open Kafka streaming data to analytics.

Version 5.0 of the Confluent Platform, commercially released on July 31, seeks to improve disaster recovery with more adept handling of application client failover, to enhance IoT capabilities with MQTT proxy support, and to reduce the need to use Java for programming streaming analytics with a new GUI for writing KSQL code.

Data dips into mainstream

Confluent Platform 5.0’s disaster recovery support and other improvements are useful, said Doug Henschen, a principal analyst at Constellation Research. But the bigger value in the release, he said, is in KSQL’s potential for “the mainstreaming of streaming analytics.”

Doug Henschen, Constellation Research

Besides the new GUI, this Confluent release upgrades the KSQL engine with support for user-defined functions, which are essential parts of many existing SQL workloads. Also, the release supports handling nested data in popular Avro and JSON formats.
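As a rough illustration of the nested-data support (the stream and field names below are hypothetical, not taken from the release), a KSQL query can now reach into a structured Avro or JSON record with the -> operator:

    -- Illustrative KSQL only: read a field nested inside a structured record
    SELECT orderid, customer->address->city
    FROM orders_stream;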

“With these moves Confluent is meeting developer expectations and delivering sought-after capabilities in the context of next-generation streaming applications,” Henschen said.

That’s important because web, cloud and IoT applications are creating data at a prodigious rate, and companies are looking to analyze that data as part of real-time operations. The programming skills required to do that level of development remain rare, but, as big data ecosystem software like Apache Spark and Kafka find wider use, simpler libraries and interfaces are appearing to link data streaming and analytics more easily.

Kafka, take a log

At its base, Kafka is a log-oriented publish-and-subscribe messaging system created to handle the data created by burgeoning web and cloud activity at social media giant LinkedIn.

The core software has been open sourced as Apache Kafka. Key Kafka messaging framework originators, including Jay Kreps, Neha Narkhede and others, left LinkedIn in 2014 to found Confluent, with the stated intent to build on core Kafka messaging for further enterprise purposes.

Joanna Schloss, Confluent’s director of product marketing, said Confluent Platform’s support for nested data in Avro and JSON will enable greater use of business intelligence (BI) tools in Kafka data streaming. In addition, KSQL now supports more complex joins, allowing KSQL applications to enhance data in more varied ways.
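As a sketch of the kind of join Schloss is describing (all stream, table and column names here are hypothetical, not from the release), a single KSQL statement can enrich a stream of events with reference data held in a table:

    -- Illustrative KSQL stream-table join, writing the enriched result to a new stream
    CREATE STREAM enriched_clicks AS
      SELECT c.userid, u.region, c.pageid
      FROM clicks c
      LEFT JOIN users u ON c.userid = u.userid;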

Joanna Schloss, director of product marketing at Confluent

She said opening KSQL activity to view via a GUI makes KSQL a full citizen in modern development teams in which programmers, as well as DevOps and operations staff, all take part in data streaming efforts.

“Among developers, DevOps and operations personnel there are persons interested in seeing how Kafka clusters are performing,” she said. Now, with the KSQL GUI, “when something arrives they can use SQL [skills] to watch what happened.” They don’t need to find a Java developer to interrogate the system, she noted.

Making Kafka more accessible for applications

KSQL is among the streaming analytics capabilities of interest to Stephane Maarek, CEO at DataCumulus, a Paris-based firm focused on Java, Scala and Kafka training and consulting.

Stephane Maarek, CEO of DataCumulus

Maarek said KSQL has potential to encapsulate a lot of programming complexity, and, in turn, to lower the barrier to writing streaming applications. In this, Maarek said, Confluent is helping make Kafka more accessible “to a variety of use cases and data sources.”

Moreover, because the open source community that supports Kafka “is strong, the real-time applications are really easy to create and operate,” Maarek added.

Advances in the replication capabilities in Confluent Platform are “a leap forward for disaster recovery, which has to date been something of a pain point,” he said.

Maarek also said he welcomed recent updates to Confluent Control Center, because they give developers and administrators more insights into the activity of Kafka cluster components, particularly schema registry and application consumption lags — the difference between messaging reads and messaging writes. The updates also reduce the need for administrators to write commands, according to Maarek.

Data streaming field

The data streaming field remains young, and Confluent faces competition from established data analytics players like IBM, Teradata and SAS Institute, Hadoop distribution vendors like Cloudera, Hortonworks and MapR, and a variety of specialists such as MemSQL, SQLstream and Striim.

“There’s huge interest in streaming applications and near-real-time analytics, but it’s a green space,” Henschen said. “There are lots of ways to do it and lots of vendor camps — database, messaging-streaming platforms, next-gen data platforms and so on — all vying for a piece of the action.”

However, Kafka often is a common ingredient, Henschen noted. Such ubiquity helps put Confluent in a position “to extend the open source core with broader capabilities in a commercial offering,” he said.

Curious About Windows Server 2019? Here Are the Latest Features Added

Microsoft continues adding new features to Windows Server 2019 and cranking out new builds for Windows Server Insiders to test. Build 17709 has been announced, and I got my hands on a copy. I’ll show you a quick overview of the new features and then report my experiences.

If you’d like to get into the Insider program so that you can test out preview builds of Windows Server 2019 yourself, sign up on the Insiders page.

Ongoing Testing Requests

If you’re just now getting involved with the Windows Server Insider program or the previews for Windows Server 2019, Microsoft has asked all testers to try a couple of things with every new build:

  • In-place upgrade
  • Application compatibility

You can use virtual machines with checkpoints to easily test both of these. This time around, I used a physical machine, and my upgrade process went very badly. I have not been as diligent about testing applications, so I have nothing of importance to note on that front.
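If you go the virtual machine route, a checkpoint taken right before the upgrade makes the rollback trivial. A minimal sketch with Hyper-V PowerShell, assuming a test VM named 'ws2019-insider' (the name is mine, not from any build notes):

    # Take a checkpoint before attempting the in-place upgrade
    Checkpoint-VM -Name 'ws2019-insider' -SnapshotName 'pre-17709-upgrade'
    # ...run the upgrade and your application compatibility tests inside the VM...
    # Roll back if anything breaks
    Restore-VMSnapshot -VMName 'ws2019-insider' -Name 'pre-17709-upgrade' -Confirm:$false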

Build 17709 Feature 1: Improvements to Group Managed Service Accounts for Containers

I would bet that web applications are the primary use case for containers. Nothing else can match containers’ ability to strike a balance between providing version-specific dependencies while consuming minimal resources. However, containerizing a web application that depends on Active Directory authentication presents special challenges. Group Managed Service Accounts (gMSA) can solve those problems, but rarely without headaches. 17709 includes these improvements for gMSAs:

  • Using a single gMSA to secure multiple containers should produce fewer authentication errors
  • A gMSA no longer needs to have the same name as the system that hosts the container(s)
  • gMSAs should now work with Hyper-V isolated containers

I do not personally use enough containers to have meaningful experience with gMSA. I did not perform any testing on this enhancement.
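For anyone who does want to try it, the usual flow looks roughly like the sketch below. The account, group and file names are hypothetical, and I'm assuming the credential spec JSON has already been generated (for example with Microsoft's CredentialSpec PowerShell module):

    # Create the gMSA in Active Directory and install it on the container host
    New-ADServiceAccount -Name 'WebApp01' -DNSHostName 'WebApp01.contoso.com' `
        -PrincipalsAllowedToRetrieveManagedPassword 'ContainerHostsGroup'
    Install-ADServiceAccount -Identity 'WebApp01'
    # Hand the credential spec to the container; per the list above, in 17709 the
    # container no longer has to share the gMSA's name ('my-iis-app' is a hypothetical image)
    docker run -d --security-opt "credentialspec=file://WebApp01.json" my-iis-app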

Build 17709 Feature 2: A New Windows Server Container Image with Enhanced Capabilities

If you’ve been wanting to run something in a Windows Server container but none of the existing images meet your prerequisites, you might have struck gold in this release. Microsoft has created a new Windows Server container image with more components. I do not have a complete list of those components, but you can read what Lars Iwer has to say about it. He specifically mentions:

  • Proofing tools
  • Automated UI tests
  • DirectX

As I read that last item, I instantly wanted to know: “Does that mean GUI apps from within containers?” Well, according to the comments on the announcement, yes*. You just have to use “Session 0”. That means that if you RDP to the container host, you must use the /admin switch with MSTSC. Alternatively, you can use the physical console or an out-of-band console connection application.
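In practice that means connecting with something like the following (the host name is a placeholder) rather than a plain RDP session:

    mstsc /v:container-host-01 /admin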

Commentary on Windows Server 2019 Insider Preview Build 17709

So far, my experiences with the Windows Server 2019 preview releases have been fairly humdrum. They work as advertised, with the occasional minor glitch. This time, I spent more time than normal and hit several frustration points.

In-Place Upgrade to 17709

Ordinarily, I test preview upgrades in a virtual machine. Sure, I use checkpoints with the intent of reverting if something breaks. But, since I don’t do much in those virtual machines, they always work. So, I never encounter anything to report.

For 17709, I wanted to try out the container stuff, and I wanted to do it on hardware. So, I attempted an in-place upgrade of a physical host. It was disastrous.

Errors While Upgrading

First, I got a grammatically atrocious message that contained false information. I wish that I had saved it so I could share it with others who might encounter it, but I must have accidentally lost my notes. The message started out with “Something happened” (it didn’t say what happened, of course), then asked me to look in an XML file for information. Two problems with that:

  1. I was using a Server Core installation. I realize that I am not authorized to speak on behalf of the world’s Windows administrators, but I bet no one will get mad at me for saying, “No one in the world wants to read XML files on Server Core.”
  2. The installer didn’t even create the file.

I still have not decided which of those two things irritates me the most. Why in the world would anyone actively decide to build the upgrade tool to behave that way?

Problems While Trying to Figure Out the Error

Well, I’m fairly industrious, so I tried to figure out what was wrong. The installer did not create the XML file that it talked about, but it did create a file called “setuperr.log”. I didn’t keep the entire contents of that file either, but it contained only one line error-wise that seemed to have any information at all: “CallPidGenX: PidGenX function failed on this product key”. Do you know what that means? I don’t know what that means. Do you know what to do about it? I don’t know what to do about it. Is that error even related to my problem? I don’t even know that much.

I didn’t find any other traces or logs with error messages anywhere.

How I Fixed My Upgrade Problem

I began by plugging the error messages into Internet searches. I found only one hit with any useful information. The suggestions were largely useless. But, the guy managed to fix his own problem by removing the system from the domain. How in the world did he get from that error message to disjoining the domain? Guesswork, apparently. Well, I didn’t go quite that far.

My “fix”: remove the host from my Hyper-V cluster. The upgrade worked after that.
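For anyone facing the same situation, that amounts to roughly the following from an elevated PowerShell prompt (the node name is a placeholder, and this is my reconstruction of the steps, not a documented fix):

    # Drain roles off the node, then evict it from the failover cluster
    Suspend-ClusterNode -Name 'HV-NODE1' -Drain
    Remove-ClusterNode -Name 'HV-NODE1' -Force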

Why did I put the word “fix” in quotation marks? Because I can’t tell you that it actually fixed the problem. Maybe it was just a coincidence. The upgrade’s error handling and messaging were so horrifically useless that, without duplicating the whole thing, I cannot conclusively say that one action resulted in the other. “Correlation is not causation”, as the saying goes.

Feedback for In-Place Upgrades

At some point, I need to find a productive way to express this to Microsoft. But for now, I’m upset and frustrated at how that went. Sure, it only took you a few minutes to read what I had to say. It took much longer for me to retry, poke around, search, and prod at the thing until it worked, and I had no idea that it was ever going to work.

Sure, once the upgrade went through, everything was fine. I’m quite happy with the final product. But if I were even to start thinking about upgrading a production system and I thought that there was even a tiny chance that it would dump me out at the first light with some unintelligible gibberish to start a luck-of-the-draw scavenger hunt, then there is a zero percent chance that I would even attempt an upgrade. Microsoft says that they’re working to improve the in-place upgrade experience, but the evidence I saw led me to believe that they don’t take this seriously at all. XML files? XML files that don’t even get created? Error messages that would have set off 1980s-era grammar checkers? And don’t even mean anything? This is the upgrade experience that Microsoft is anxious to show off? No thanks.

Microsoft: the world wants legible, actionable error messages. The world does not want to go spelunking through log files for vague hints. That’s not just for an upgrade process either. It’s true for every product, every time.

The New Container Image

OK, let’s move on to some (more) positive things. Many of the things that you’ll see in this section have been blatantly stolen from Microsoft’s announcement.

Once my upgrade went through, I immediately started pulling down the new container image. I had a bit of difficulty with that, which Lars Iwer of Microsoft straightened out quickly. If you’re trying it out, you can get the latest image with the following:
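Presumably (this is an assumption based on the image name used later in this article) that means a plain pull of the Insider repository:

    docker pull mcr.microsoft.com/windowsinsider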

Since Insider builds update frequently, you might want to ensure that you only get the build version that matches your host version (if you get a version mismatch, you’ll be forced to run the image under Hyper-V isolation). Lars Iwer provided the following script (stolen verbatim from the previously linked article, I did not write this or modify it):
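A sketch of what that script does (my own reconstruction, not Lars Iwer's actual code): read the host's build number from the registry and pull the image tagged to match. The tag format below is an assumption:

    # Read the host OS build number from the registry
    $hostBuild = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuild
    # Pull the Insider container image whose tag matches the host build
    docker pull "mcr.microsoft.com/windowsinsider:10.0.$hostBuild"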

Trying Out the New Container Image

I was able to easily start up a container and poke around a bit:
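Reproducing that is straightforward; something along these lines gets you an interactive session inside the new image:

    docker run -it mcr.microsoft.com/windowsinsider
    REM inside the container, check the build you landed on:
    ver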

Testing out the new functionality was a bit tougher, though. It solves problems that I personally do not have. Searching the Internet for “example apps that would run in a Windows Server container if Microsoft had included more components” didn’t find anything I could test with either (That was a joke; I didn’t really do that. As far as you know). So, I first wrote a little GUI .Net app in Visual Studio.

*Graphical Applications in the New Container Image

Session 0 does not seem to be able to show GUI apps from the new container image. If you skimmed up to this point and you’re about to tell me that GUI apps don’t show anything from Windows containers, this links back to the (*) text above. The comments section of the announcement article indicates that graphical apps in the new container will display on session 0 of the container host.

I don’t know if I did something wrong, but nothing that I did would show me a GUI from within the new container style. The app ran just fine — it shows up under Get-Process — but it never shows anything. It does exactly the same thing under microsoft/dotnet-framework in Hyper-V isolation mode, though. So, on that front, the only benefit that I could verify was that I did not need to run my .Net app in Hyper-V isolation mode or use a lot of complicated FROM nesting in my dockerfile. Still no GUI, though, and that was part of my goal.

DirectX Applications in the New Container Image

After failing to get my graphical .Net app to display, I next considered DirectX. I personally do not know how to write even a minimal DirectX app. But, I didn’t need to. Microsoft includes the very first DirectX-dependent app that I was ever able to successfully run: dxdiag.

Sadly, dxdiag would not display on session 0 from my container, either. Just as with my .Net app, it appeared in the local process list and docker top. But, no GUI that I could see.

However, dxdiag did run successfully, and would generate an output file:
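If you want the same behaviour, dxdiag's /t switch writes its report to a text file instead of the GUI (the output path here is just an example):

    REM Run inside the container; /t sends the report to a file
    dxdiag /t C:\dxdiag-report.txt
    REM dxdiag returns before the file is complete, so wait a moment, then:
    type C:\dxdiag-report.txt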

Notes for anyone trying to duplicate the above:

  • I started this particular container with
    docker run -it mcr.microsoft.com/windowsinsider
  • DXDiag does not instantly create the output file. You have to wait a bit.

Thoughts on the New Container Image

I do wish that I had more experience with containers and the sorts of problems this new image addresses. Without that, I can’t say much more than, “Cool!” Sure, I didn’t personally get the graphical part to work, but a DirectX app from within a container? That’s a big deal.

Overall Thoughts on Windows Server 2019 Preview Build 17709

Outside of the new features, I noticed that they have corrected a few glitchy things from previous builds. I can change settings on network cards in the GUI now and I can type into the Start menu to get Cortana to search for things. You can definitely see changes in the polish and shine as we approach release.

As for the upgrade process, that needs lots of work. If a blocking condition exists, it needs to be caught in the pre-flight checks and show a clear error message. Failing partway into the process with random pseudo-English will extend distrust of upgrading Microsoft operating systems for another decade. Most established shops already have an “install-new-on-new-hardware-and-migrate” process. I certainly follow one. My experience with 17709 tells me that I need to stick with it.

I am excited to see the work being done on containers. I do not personally have any problems that this new image solves, but you can clearly see that customer feedback led directly to its creation. Whether I personally benefit or not, this is a good thing to see.

Overall, I am pleased with the progress and direction of Windows Server 2019. What about you? How do you feel about the latest features? Let me know in the comments below!

At OpenText Enterprise World, security and AI take center stage

OpenText continues to invest in AI and security, as the content services giant showcased where features from recent acquisitions fit into its existing product line at its OpenText Enterprise World user conference.

The latest Pipeline podcast recaps the news and developments from Toronto, including OpenText OT2, the company’s new hybrid cloud/on-premises enterprise information management platform. The new platform brings wanted flexibility while also addressing regulatory concerns with document storage.

“OT2 simplifies for our customers how they invest and make decisions in taking some of their on-premises workflows and [porting] them into a hybrid model or SaaS model into the cloud,” said Muhi Majzoub, OpenText executive vice president of engineering and IT.

Majzoub spoke at OpenText Enterprise World 2018, which also included further updates on how OpenText plans to integrate Guidance Software’s features into its endpoint security offerings following the September 2017 acquisition of Guidance.


OpenText has a rich history of acquiring companies and using the inherited customer base as an additional revenue or maintenance stream, as content management workflows are often built over decades of complex legacy systems.

But it was clear at OpenText Enterprise World 2018 that the Guidance Software acquisition filled a security gap in OpenText’s offering. One of Guidance’s premier products, EnCase, seems to have useful applications for OpenText users, according to Lalith Subramanian, vice president of engineering for analytics, security and discovery at OpenText.

In addition, OpenText is expanding its reach to Amazon AWS, Microsoft Azure and Google Cloud, but it’s unclear if customers will prefer OpenText offerings to others on the market or if current customers will migrate to public clouds.

“It comes down to: Will customers want to use a general AI platform like Azure, Google, IBM or AWS?” said Alan Lepofsky, principal analyst for Constellation Research. “Will the native AI functionality from OpenText compare and keep up? What will be the draw for new customers?”

Chief data officer role: Searching for consensus

Big data continues to be a force for change. It plays a part in the ongoing drama of corporate innovation — in some measure, giving birth to the chief data officer role. But consensus on that role is far from set.

The 2018 Big Data Executive Survey of decision-makers at more than 50 blue-chip firms found 63.4% of respondents had a chief data officer (CDO). That is a big uptick since survey participants were asked the same question in 2012, when only 12% had a CDO. But this year’s survey, which was undertaken by business management consulting firm NewVantage Partners, disclosed that the background for a successful CDO varies from organization to organization, according to Randy Bean, CEO and founder of NewVantage, based in Boston.

For many, the CDO is likely to be an external change agent. For almost as many, the CDO may be a long-trusted company hand. The best CDO background could be that of a data scientist, line executive or, for that matter, a technology executive, according to Bean.

In a Q&A, Bean delved into the chief data role as he was preparing to lead a session on the topic at the annual MIT Chief Data Officer and Information Quality Symposium in Cambridge, Mass. A takeaway: Whatever it may be called, the chief data officer role is central to many attempts to gain business advantage from key emerging technologies. 

Do we have a consensus on the chief data officer role? What have been the drivers?

Randy Bean: One principal driver in the emergence of the chief data officer role has been the growth of data.

Randy Bean, CEO, NewVantage Partners

For about a decade now, we have been into what has been characterized as the era of big data. Data continues to proliferate. But enterprises typically haven’t been organized around managing data as a business asset.

Additionally, there has been a greater threat posed to traditional incumbent organizations from agile data-driven competitors — the Amazons, the Googles, the Facebooks.

Organizations need to come to terms with how they think about data and, from an organization perspective, to try to come up with an organizational structure and decide who would be a point person for data-related initiatives. That could be the chief data officer.

Another driver for the chief data officer role, you’ve noted, was the financial crisis of 2008.

Bean: Yes, the failures of the financial markets in 2008-2009, to a significant degree, were a data issue. Organizations couldn’t trace the lineage of the various financial products and services they offered. Out of that came an acute level of regulatory pressure to understand data in the context of systemic risk.

Banks were under pressure to identify a single person to regulators to address questions about data’s lineage and quality. As a result, banks took the lead in naming chief data officers. Now, we are into a third or fourth generation in some of these large banks in terms of how they view the mandate of that role.

Isn’t that type of regulatory driver somewhat spurred by the General Data Protection Regulation (GDPR), which recently went into effect? Also, for factors defining the CDO role, NewVantage Partners’ survey highlights concerns organizations have about being surpassed by younger, data-driven upstarts. What is going on there?

Bean: GDPR is just the latest of many previous manifestations of this. There have been the Dodd-Frank regulations, the various Basel reporting requirements and all the additional regulatory requirements that go along with classifying banks as ‘too large to fail.’

That is a defensive driver, as opposed to the offensive and innovation drivers that are behind the chief data officer role. On the offensive side, the chief data officer is about how your organization can be more data-driven, how you can change its culture and innovate. Still, as our recent survey finds, there is a defensive aspect, even there. Increasingly, organizations perceive a threat coming from all kinds of agile, data-driven competitors.


You have written that big data and AI are on a continuum. That may be worthwhile to emphasize, as so much attention turns to artificial intelligence these days.

Bean: A key point is that big data has really empowered artificial intelligence.

AI has been around for decades. One of the reasons it hadn’t gained traction before now is that, as a learning mechanism, it requires large volumes of data. In the past, data was only available in subsets or samples or in very limited quantities, and the corresponding learning on the part of the AI was slow and constrained.

Now, with the massive proliferation of data and new sources — in addition to transactional information, you also now have sensor data, locational data, pictures, images and so on — that has led to the breakthrough in AI in recent years. Big data provides the data that is needed to train the AI learning algorithms.

So, it is pretty safe to say there is no meaningful artificial intelligence without good data — without an ample supply of big data.

And it seems to some of us, on this continuum, you still need human judgment.

Bean: I am a huge believer in the human element. Data can help provide a foundation for informed decision-making, but ultimately it’s the combination of human experience, human judgment and the data. If you don’t have good data, that can hamper your ability to come to the right conclusion. Just having the data doesn’t lead you to the answer.

One thing I’d say is, just because there are massive amounts of data, it hasn’t made individuals or companies any wiser in and of itself. It’s just one element that can be useful in decision-making, but you definitely need human judgment in that equation, as well.

For Sale – Z77 Sabretooth + 16GB DDR3 Memory

Hey Guys,

The clear-out continues. I acquired these in a purchase about 3 months ago. Originally I had the dream of moving back to ATX and making a nice-looking rig, but I am losing my man cave. Collection from NW London and/or WC1X is an option. Also willing to meet in Zone 1 somewhere or on my way back on the Met Line.

First up we have the highly coveted Asus Z77 Sabretooth board. This comes boxed (box is a little tatty) and includes the I/O shield and all sorts of accessories, including PCI-E socket blankers, additional unused fans, etc. The board is a little dusty, but nothing that a can of compressed air won’t fix.

Since purchase, the board has been in a rig at work that I would just mess about with, running various Linux distros. No issues whatsoever. Just stripped down the machine and placed the CPU protector back in.

Now for the price: these sell for silly money on eBay. I don’t want the hassle of eBay, and I would rather a member of the community grabbed a good deal. This board is dying to go into a nice rig where it can be shown off!

Price: 110 GBP (Excluding Delivery)

Next up we have:

Corsair 16GB DDR3 (4 x 4GB set) Vengeance LP Black Kit 1600Mhz 9-9-9-24 Timings

Price: 70 GBP (Excluding Delivery)

Price and currency: 110, 70
Delivery: Delivery cost is not included
Payment method: BT, PP (Buyer Pays Fees)
Location: WEMBLEY
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

Synology DS416j NAS (4-bay) and Synology DS213j NAS (2 bay)

My rejig of my storage continues so I’ve got 2 Synology NAS units for sale. If you’re looking at this ad, you probably are fully aware of the great operating system Synology has and its ease of use, so I won’t bore you with that.

First up is a Synology DS416j 4-bay NAS. It’s boxed with PSU, LAN cable, screws etc. It’s in amazing condition – the only issue I can see is that it has a few micro scratches on the bezel – it has a shiny gloss plastic bezel (the type which seems to pick out…

Synology DS416j NAS (4-bay) and Synology DS213j NAS (2 bay)

Apple MacBook 12″ 2016 Model. Inc USB C Hub

Apple MacBook 12″ 2016 Model. Space Grey

Post-Crimbo clear-out continues. This is my wife’s laptop, which she has barely used since acquiring it as an incentive at work in Oct 2017.

Spec:
Core M5
8GB Ram
512GB SSD

Fully boxed with charger and a Hootoo USB-C hub (SD Card reader, HDMI out, USB-A etc).

Condition:
I can’t see a single mark on it. Has been kept in a sort of frosted plastic case (included), but has been very lightly used, in any case.

Battery cycle count is just 23…

Apple MacBook 12″ 2016 Model. Inc USB C Hub
