
Kite intros code completion for JavaScript developers

Kite, a software development tools startup specializing in AI and machine learning, has added code-completion capabilities for JavaScript developers.

San Francisco-based Kite’s AI-powered code completion technology initially targeted Python developers. JavaScript is arguably the most popular programming language, and Kite’s move should be a welcome addition for JavaScript developers, as the technology can predict the next string of code they will write and complete it automatically.

“The use of AI is definitely making low-code even lower-code for sure, and no-code even more possible,” said Ronald Schmelzer, an analyst at Cognilytica in Ellicott City, Md. “AI systems are really good at determining patterns, so you can think of them as really advanced wizard or templating systems that can try to determine what you’re trying to do and suggest code or blocks or elements to complete your code.”

Kite’s Line-of-Code Completions feature uses advanced machine learning models to cut some of the mundane tasks that programmers perform to build applications, such as setting up build processes, searching for code snippets on Google, cutting and pasting boilerplate code from Stack Overflow, and repeatedly solving the same error messages, said Adam Smith, founder and CEO of Kite, in an interview.

Kite’s JavaScript code completions are currently available in private beta and can suggest code a developer has previously used or tap into patterns found in open source code files, Smith said. The deep learning models used to inform the Kite knowledge base have been trained on more than 22 million open source JavaScript files, he said.

Kite aims to advance the code-completion art

Unlike other code completion capabilities, Kite features layers of filtering such that only the most relevant completion results are returned, rather than a long list of completions ranked by probability, Smith said. Moreover, Kite’s completions work in .js, .jsx and .vue files and the system processes code locally on the user’s computer, rather than sending code to a cloud server for processing.


Kite’s engineers took their time training the tool on the ever-growing JavaScript ecosystem and its frameworks, APIs and design patterns, Smith said. Thus, Kite works with popular JavaScript libraries and frameworks like React, Vue, Angular and Node.js. The system analyzes open source projects on GitHub and applies that data to machine learning models trained to predict the next word or words of code as programmers write in real time. This smarter programming environment makes it possible for developers to focus on what’s unique about their application.

There are other tools that offer code completion capabilities, such as the IntelliCode feature in the Microsoft Visual Studio IDE. IntelliCode provides more primitive code completion than Kite, Smith claimed. IntelliCode is the next generation of Microsoft’s older IntelliSense code completion technology. IntelliCode will predict the next word of code based on basic models, while Kite’s tool uses richer, more advanced deep learning models trained to predict further ahead to whole lines, and even multiple lines of code, Smith said.


Moreover, Kite focuses on code completion, and not code correction, because programming code has to be exactly correct. For example, if you send someone a text with autocorrect errors, the tone of the message may still come across properly. But if you mistype a single letter of code, a program will not run.

Still, AI-powered code completion “is still definitely a work in progress and much remains to be done, but OutSystems and others are also looking at AI-enabling their suites to deliver faster and more complete solutions in the low-code space,” Schmelzer said.

In addition to the new JavaScript code completion technology, Kite also introduced Kite Pro, the company’s first paid offering of code completions for Python powered by deep learning. Kite Pro provides features such as documentation in the Kite Copilot, which covers more than 800 Python libraries.

Kite works as a plugin for all of the most popular code editors, including Atom, JetBrains’ PyCharm/IntelliJ/WebStorm, Spyder, Sublime Text 3, VS Code and Vim. The product is available on Mac, Windows and Linux.

The basic version of Kite is free; however, Kite Pro costs $16.60 per user, per month. Custom team pricing also is available for teams that contact the company directly, Smith said.


Oracle’s GraalVM finds its place in Java app ecosystem

One year after its initial release for production use, Oracle’s GraalVM universal virtual machine has found validation in the market, evidenced by industry-driven integrations with cloud-native development projects such as Quarkus, Micronaut, Helidon and Spring Boot.

GraalVM supports applications written in Java, JavaScript and other programming languages and execution modes. But it means different things to different people, said Bradley Shimmin, an analyst with Omdia in Longmeadow, Mass.

First, it’s a runtime that can support a wide array of non-Java languages such as JavaScript, Ruby, Python, R, WebAssembly and C/C++, he said. And it can do the same for Java Virtual Machine (JVM) languages as well, namely Java, Scala and Kotlin.
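For a concrete sense of that polyglot runtime, here is a minimal sketch using GraalVM’s polyglot embedding API to evaluate a line of JavaScript from a Java program; the class name and the expression are illustrative choices rather than details drawn from the article.

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotSketch {
    public static void main(String[] args) {
        // Requires a GraalVM distribution with the JavaScript language installed.
        // The Context hosts guest-language code alongside the Java host code.
        try (Context context = Context.create()) {
            Value result = context.eval("js", "[1, 2, 3].reduce((a, b) => a + b)");
            System.out.println("JavaScript says: " + result.asInt()); // prints 6
        }
    }
}
```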

Second, GraalVM is a native code generator capable of doing things like ahead-of-time compiling — the act of compiling a higher-level programming language into native machine code so that the resulting binary file can execute natively.
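As a rough illustration of that ahead-of-time path, the sketch below shows an ordinary Java class and, in comments, the assumed GraalVM native-image build steps that turn it into a self-contained executable; the file name and commands reflect the standard GraalVM workflow, not anything specified in the article.

```java
// HelloNative.java -- ordinary Java source, compiled ahead of time below.
public class HelloNative {
    public static void main(String[] args) {
        System.out.println("Started natively, no JVM warm-up");
    }
}

// Assumed build steps with a GraalVM install that includes native-image:
//   javac HelloNative.java
//   native-image HelloNative
//   ./hellonative        # runs as a plain OS binary with near-instant startup
```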

“GraalVM is really quite a flexible ecosystem of capabilities,” Shimmin said. “For example, it can run on its own or be embedded as a part of the OpenJDK. In short, it allows Java developers to tackle some specific problems such as the need for fast app startup times, and it allows non-Java developers to enjoy some of the benefits of a JVM such as portability.”

GraalVM came out of Oracle Labs, which used to be Sun Labs. “Basically, it is the answer to the question, ‘What would it look like if we could write the Java native compiler in Java itself?'” said Cameron Purdy, former senior vice president of development at Oracle and current CEO of Xqiz.it, a stealth startup in Lexington, Mass., that is working to deliver a platform for building cloud-native applications.

“The hypothesis behind the Graal implementation is that a compiler built in Java would be more easily maintained over time, and eventually would be compiling itself or ‘bootstrapped’ in compiler parlance,” Purdy added.

The GraalVM project’s overall mission was to build a universal virtual machine that can run any programming language.

The big idea was that a compiler didn’t have to have built-in knowledge of the semantics of any of the supported languages. The common belief of VM architects had been that a language VM needed to understand those semantics in order to achieve optimal performance.

“GraalVM has disproved this notion by demonstrating that a multilingual VM with competitive performance is possible and that the best way to do it isn’t through a language-specific bytecode like Java or Microsoft CLR [Common Language Runtime],” said Eric Sedlar, vice president and technical director of Oracle Labs.

To achieve this, the team developed a new high-performance optimizing compiler and a language implementation framework that makes it possible to add new languages to the platform quickly, Sedlar said. The GraalVM compiler provides significant performance improvements for Java applications without any code changes, according to Sedlar. Embeddability is another goal. For example, GraalVM can be plugged into system components such as a database.

GraalVM joins broader ecosystem

One of the higher-profile integrations for GraalVM is with Red Hat’s Quarkus, a web application framework with related extensions for Java applications. In essence, Quarkus tailors applications for Oracle’s GraalVM and the HotSpot JVM, which means applications written with it can use GraalVM native image technology to achieve near-instantaneous startup and significantly lower memory consumption than a typical Java application at runtime.
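For a sense of what that looks like in practice, here is a minimal, hypothetical Quarkus-style JAX-RS resource; Quarkus’s documented workflow can compile an application like this into a GraalVM native image, which is where the fast startup and low memory figures come from. The package imports and build command are assumptions based on the standard Quarkus setup of the time, not taken from the article.

```java
// A minimal JAX-RS endpoint of the kind Quarkus applications expose.
// Class name is hypothetical; javax.ws.rs matches Quarkus releases
// current when this article was written.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        // Quarkus does much of its wiring at build time, which is part of
        // what makes the native-image output start quickly.
        return "hello from a native-image-friendly endpoint";
    }
}

// Assumed native build (standard Quarkus Maven profile):
//   ./mvnw package -Pnative
```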

“GraalVM is interesting to me as it potentially speeds up Java execution and reduces the footprint – both of which are useful for modern Java applications running on the cloud or at the edge,” said Jeffrey Hammond, an analyst at Forrester Research. “In particular, I’m watching the combination of Graal and Quarkus as together they look really fast and really small — just the kind of thing needed for microservices on Java running in a FaaS environment.”



Quarkus uses the open source, upstream GraalVM project and not the commercial products — Oracle GraalVM or Oracle GraalVM Enterprise Edition.

“Quarkus applications can either be run efficiently in JVM mode or compiled and optimized further to run in Native mode, ensuring developers have the best runtime environment for their particular application,” said Rich Sharples, senior director of product management at Red Hat.

Red Hat officials believe Quarkus will be an important technology for two of its most important constituents — developers who are choosing Kubernetes and OpenShift as their strategic application development and production platform and enterprise developers with deep roots in Java.

“That intersection is pretty huge and growing and represents a key target market for Red Hat and IBM,” Sharples said. “It represents organizations across all industries who are building out the next generation of business-critical applications that will provide those organizations with a competitive advantage.”


Biometrics firm fights monitoring overload with log analytics

Log analytics tools with machine learning capabilities have helped one biometrics startup keep pace with increasingly complex application monitoring as it embraces continuous deployment and microservices.

BioCatch sought a new log analytics tool in late 2017. At the time, the Tel Aviv, Israel, firm employed a handful of workers and had just refactored a monolithic Windows application into microservices written in Python. The refactored app, which captures biometric data on how end users interact with web and mobile interfaces for fraud detection, required careful monitoring to ensure it still worked properly. Almost immediately after it completed the refactoring, BioCatch found the process had tripled the number of logs it shipped to a self-managed Elasticsearch repository.

“In the beginning, we had almost nothing,” said Tamir Amram, operations group lead for BioCatch, of the company’s early logging habits. “And, then, we started [having to ship] everything.”

The team found it could no longer manage its own Elasticsearch back end as that log data grew. Its IT infrastructure also mushroomed into 10 Kubernetes clusters distributed globally on Microsoft Azure. Each cluster hosts multiple sets of 20 microservices that provide multi-tenant security for each of its customers.

At that point, BioCatch had a bigger problem. It had to not only collect, but also analyze all its log data to determine the root cause of application issues. This became too complex to do manually. BioCatch turned to log analytics vendor Coralogix as a potential answer to the problem.

Log analytics tools flourish under microservices

Coralogix, founded in 2015, initially built its log management system on top of a hosted Elasticsearch service but couldn’t generate enough interest from customers.

“It did not go well,” Coralogix CEO Ariel Assaraf recalled of those early years for the business. “It was early in log analytics’ and log management’s appeal to the mainstream, and customers already had ‘good enough’ solutions.”

While the company still hosts Elasticsearch for its customers, based on the Amazon Open Distro for Elasticsearch, it refocused on log analytics, developed machine learning algorithms and monitoring dashboards, and relaunched in 2017.

That year coincided with the emergence of containers and microservices in enterprise IT shops as they sought to refactor monolithic applications with new design patterns. The timing proved fortuitous; since its 2017 relaunch, Coralogix has gained more than 1,200 paying customers, according to Assaraf, at an average deal size of $50,000 a year.

Coralogix isn’t alone among DevOps monitoring vendors reaping the spoils of demand for microservices monitoring tools — not just in log analytics, but AI- and machine learning-driven infrastructure management, or AIOps, as well. These include application performance management (APM) vendors, such as New Relic, Datadog, AppDynamics and Dynatrace, along with Coralogix log analytics competitors Elastic Inc. and Splunk.


In fact, analyst firm 451 Research predicted that the market for Kubernetes monitoring tools will dwarf the market for Kubernetes management products by 2022 as IT pros move from the initial phases of deploying microservices into “day two” management problems. Even more recently, log analytics tools have begun to play an increasing role in IT security operations and DevSecOps.

The newly relaunched Coralogix caught the eye of BioCatch in part because of its partnership with the firm’s preferred cloud vendor, Microsoft Azure. It was also easy to set up and redirect logs from the firm’s existing Elasticsearch instance, and the Coralogix-managed Elasticsearch service eliminated log management overhead for the BioCatch team.

“We were able to delegate log management to the support team, so the DevOps team wasn’t the only one owning and using logs,” Amram said. “Now, more than half of the company works with Coralogix, and more than 80% of those who work with it use it on a daily basis.”

Log analytics correlate app changes to errors

The BioCatch DevOps team adds tags to each application update that direct log data into Coralogix. Then, the software monitors application releases as they’re rolled out in a canary model for multiple tiers of customers. BioCatch rolls out its first application updates to what it calls “ring zero,” a group of early adopters; next, to “ring one;” and so on, according to each customer group’s appetite for risk. All those changes to multiple tiers and groups of microservices result in an average of 1.5 TB of logs shipped per day.

The version tags fed through the CI/CD pipeline to Coralogix enable the tool to identify issues and correlate them with application changes made by BioCatch developers. It also identifies anomalous patterns in infrastructure behavior post-release, which can catch problems that don’t appear immediately.

Coralogix log analytics uses version tags to correlate application issues with specific developer changes.

“Every so often, an issue will appear a day later because we usually release at off-peak times,” BioCatch’s Amram said. “For example, it can say, ‘sending items to this queue is 20 times slower than usual,’ which shows the developer why the queue is filling up too quickly and saturating the system.”

BioCatch uses Coralogix alongside APM tools from Datadog that analyze application telemetry and metrics. Often, alerts in Datadog prompt BioCatch IT ops pros to consult Coralogix log analytics dashboards. Datadog also began offering log analytics in 2018 but didn’t include this feature when BioCatch first began talks with Coralogix.

Coralogix also maintains its place at BioCatch because its interfaces are easy to work with for all members of the IT team, Amram said. This has grown to include not only developers and IT ops, but solutions engineers who use the tool to demonstrate to prospective customers how the firm does troubleshooting to maintain its service-level agreements.

“We don’t have to search in Kibana [Elasticsearch’s visualization layer] and say, ‘give me all the errors,'” Amram said. “Coralogix recognizes patterns, and if the pattern breaks, we get an alert and can immediately react.”


For Sale – Mac Pro 2009 (4,1) – with Mojave

Spare machine to go in order to make space…

Mac Pro 2009 (4,1) with firmware updated as (5,1) compatible
– Xeon Quad-core – 2.66GHz (to be confirmed)
– 32GB RAM (4 sticks)
– 500GB SATA HDD, brackets in all bays
– Optical drive
– ATI 5770 (up to El Capitan)
– nVidia GT 630 (1GB) (Mojave)

Still outruns the ‘Darth Vader’ MacPro (2013). With a (SATA) SSD (not included), the machine flies!

GT630 can stay for El Capitan (hence both GPU cards). Working with macOS Mojave; reportedly compatible with Catalina (but not tested).

The 5770 must not be present for Mojave (macOS limitation).

No monitor/display. No box. Expect scuff marks on external casing.

£350 collected from my office in the Science Park (Milton Road)


For Sale – Custom loop water cooled pc – i9 9900k, 2080ti, 32gb 3200mhz ram, 2tb nvme

Selling as this only seems to get used as my work machine rather than for playing games and creating content as intended.

Built by myself in November 2019, so the machine is only a few months old.

Only the best components were chosen when this was built.

Machine runs at 5GHz on all cores and the GPU never sees above 50°C.

Motherboard – Asus Maximus Code
CPU – Intel i9 9900K with EK water block
GPU – MSI Ventus OC 2080 Ti with EK water block and nickel backplate
RAM – 32GB G.Skill Royal Silver 3200MHz
NVMe – 1TB WD Black
NVMe – 1TB Sabrent
PSU – Corsair 750 modular
EK nickel fittings
EK D5 standalone pump
Phanteks reservoir
6 Thermaltake Riing Plus fans with controllers
2 360mm x 45mm Alphacool radiators
Thermaltake acrylic tubes and liquid
Custom cables

I am based in Tadworth Surrey and the machine can be seen and inspected in person.


Deploy and configure WSUS 2019 for Windows patching needs

Transcript – Deploy and configure WSUS 2019 for Windows patching needs

In this video, I want to show you how to deploy the Windows Server Update Services, or WSUS, in Windows Server 2019.

I’m logged into a Windows Server 2019 machine that is domain-joined. Open Server Manager and click on Manage, then go to Add Roles and Features to launch the wizard.

Click Next and choose the Role-based or feature-based installation option and click Next. Select your server from the server pool and click Next to choose the roles to install.

Scroll down and choose the Windows Server Update Services role, then click Add Features. There are no additional features needed, so click Next.

At the WSUS screen, if you need SQL Server connectivity, you can enable it here. I’m going to leave that checkbox empty and click Next.

I’m prompted to choose a location to store the updates that get downloaded. I’m going to store the updates in a folder that I created earlier called C:\Updates. Click Next to go to the confirmation screen. Everything looks good here, so I’ll click Install.

After a few minutes, the installation process completes. Click Close.

The next thing that we need to do is to configure WSUS for use. Go to the notifications icon and click on that. We have some post-deployment configuration tasks that need to be performed, so click on Launch Post-Installation tasks. After a couple of minutes, the notification icon changes to a number. If I click on that, then we can see the post-deployment configuration was a success.

Close this out and click on Tools, and then click on Windows Server Update Services to open the console. Select the WSUS server and expand that to see we have a number of nodes underneath the server. One of the nodes is Options. Click on Options and then click on WSUS Server Configuration Wizard.

Click Next on the Before You Begin screen and then I’m taken to the Microsoft Update Improvement Program screen that asks if I want to join the program. Deselect that checkbox and click Next.

Next, we choose an upstream server. I can synchronize updates either from another Windows Server Update Services server or from Microsoft Update. This is the only WSUS server in my organization, so I’m going to synchronize from Microsoft Update, which is the default selection, and click Next.

I’m prompted to specify my proxy server. I don’t use a proxy server in my organization, so I’m going to leave that blank and click Next.

Click the Start Connecting button. It can take several minutes for WSUS to connect to the upstream update server, but the process is finally finished.

Now the wizard asks to choose a language. Since English is the only language spoken in my organization, I’m going to choose the option to download updates in English and click Next.

I’m asked which products I want to download updates for — I’m going to choose all products. I’ll go ahead and click Next.

Now I’m asked to choose the classifications that I want to download. In this case, I’m just going to go with the defaults [Critical Updates, Definition Updates, Security Updates and Upgrades]. I’ll click Next.

I’m prompted to choose a synchronization schedule. In a production organization, you’re probably going to want to synchronize automatically. I’m going to leave this set to synchronize manually. I’ll go ahead and click Next.

I’m taken to the Finished screen. At this point, we’re all done, aside from synchronizing updates, which can take quite a while to complete. If you’d like to start the initial synchronization process, now all you have to do is select the Begin Initial Synchronization checkbox and then click Next, followed by Finish.

That’s how you deploy and configure Windows Server Update Services.



For Sale – £1700 – Alienware 17 R4 Laptop and Graphics Amp – i7, GTX1080, 16GB, 2x SSD, QHD 1440p 120Hz G-Sync

Selling my ‘beloved’ Alienware 17 R4.

Great machine, runs anything you can throw at it. Outstanding specification, including the screen.
Never overclocked. I have been the sole owner from new. Great condition, no damage etc., really looked after this.

Extras:

  • Alienware Graphics Amplifier (external GPU box) also included, empty, so you could upgrade the GPU if you wanted.
  • Alienware branded neoprene carry case and original box included.

Any questions, please ask.

Looking for: £1,800 now £1,700

Techradar review link here:

Due to the price and weight, I am looking for collection, and payment via bank transfer.

Specs below:
CPU: 2.9GHz Intel Core i7-7820HK (quad-core, 8MB cache, overclocking up to 4.4GHz)
Graphics: Nvidia GeForce GTX 1080 (8GB GDDR5X VRAM); Intel HD Graphics 630
RAM: 16GB DDR4 (2,400MHz)
Screen: 17.3-inch QHD (2,560 x 1,440), 120Hz, TN anti-glare at 400 nits; Nvidia G-Sync; Tobii eye-tracking
Storage: 512GB SSD (M.2 NVME), 1TB SSD WD Blue (M.2 SATA), 1TB HDD (7,200 RPM)
Ports: 1 x USB 3.0 port, 1 x USB-C port, 1 x USB-C Thunderbolt 3 port, HDMI 2.0, Mini-DisplayPort, Ethernet, Graphics Amplifier Port, headphone jack, microphone jack, Noble Lock
Connectivity: Killer 1435 802.11ac 2×2 Wi-Fi; Bluetooth 4.1
Camera: Alienware FHD camera with Tobii IR eye-tracking
Weight: 9.74 pounds (4.42kg)
Size: 16.7 x 13.1 x 1.18 inches (42.4 x 33.3 x 3cm; W x D x H)


AWS, NFL machine learning partnership looks at player safety

The NFL will use AWS’ AI and machine learning products and services to better simulate and predict player injuries, with the goal of ultimately improving player health and safety.

The new NFL machine learning and AWS partnership, announced during a press event Thursday with AWS CEO Andy Jassy and NFL Commissioner Roger Goodell at AWS re:Invent 2019, will change the game of football, Goodell said.

“It will be changing the way it’s played, it will [change] the way it’s coached, the way we prepare athletes for the game,” he said.

The NFL machine learning journey

The partnership builds off Next Gen Stats, an existing NFL and AWS agreement that has helped the NFL capture and process data on its players. That partnership, revealed back in 2017, introduced new sensors on player equipment and the football to capture real-time location, speed and acceleration data.

That data is then fed into AWS data analytics and machine learning tools to provide fans, broadcasters and NFL Clubs with live and on-screen stats and predictions, including expected catch rates and pass completion probabilities.

Taking data from that, as well as from other sources, including video feeds, equipment choice, playing surfaces, player injury information, play type, impact type and environmental factors, the new NFL machine learning and AWS partnership will create a digital twin of players.

AWS CEO Andy Jassy, left, and NFL Commissioner Roger Goodell announced a new AI and machine learning partnership at AWS re:Invent 2019.

The NFL began the project with a collection of different data sets from which to gather information, said Jeff Crandall, chairman of the NFL Engineering Committee, during the press event.

It wasn’t just passing data, but also “the equipment that players were wearing, the frequency of those impacts, the speeds the players were traveling, the angles that they hit one another,” he continued.

Typically used in manufacturing to predict machine outputs and potential breakdowns, a digital twin is essentially a complex virtual replica of a machine or person formed out of a host of real-time and historical data. Using machine learning and predictive analytics, a digital twin can be fed into countless virtual scenarios, enabling engineers and data scientists to see how its real-life counterpart would react.

The new AWS and NFL partnership will create digital athletes, or digital twins of a scalable sampling of players, that can be fed into infinite scenarios without risking the health and safety of real players. Data collected from these scenarios is expected to provide insights into changes to game rules, player equipment and other factors that could make football a safer game.

“For us, what we see the power here is to be able to take the data that we’ve created over the last decade or so” and use it, Goodell said. “I think the possibilities are enormous.”

Partnership’s latest move to enhance safety


New research in recent years has highlighted the extreme health risks of playing football. In 2017, researchers from the VA Boston Healthcare System and the Boston University School of Medicine published a study in the Journal of the American Medical Association that indicated football players are at a high risk for developing long-term neurological conditions.

The study, which did not include a control group, looked at the brains of high school, college and professional-level football players. Of the 111 NFL-level football players the researchers looked at, 110 of them had some form of degenerative brain disease.

The new partnership is just one of the changes the NFL has made over the last few years in an attempt to make football safer for its players. Other recent efforts include new helmet rules and a $3 million challenge to create safer helmets.

The AWS and NFL partnership “really has a chance to transform player health and safety,” Jassy said.

AWS re:Invent, the annual flagship conference of AWS, was held this week in Las Vegas.


For Sale – Gaming Pc RTX 2080Ti, I7 9700, 32GB Dominator Platinum

What brand is the machine please?
Purchased new?
Purchased from?
Warranty remaining?
Optical drive blu ray?
Any bundled software?
Ancillaries or just PC unit?
Price paid?

thanks
Brian
