Tag Archives: machine

Wanted – Decent GPU – £75 MAX Budget

Well, my trusty AMD HD 6870 is starting to show its age, so I’m looking for a decent (but not mega) upgrade.

I use my machine solely for racing sims (rFactor, GRID, DiRT, etc.), not FPS games.

I’m looking for a reasonable replacement.

Max budget is £75 delivered.

Location
BRISTOL

Go to Original Article
Author:

SwiftStack 7 storage upgrade targets AI, machine learning use cases

SwiftStack turned its focus to artificial intelligence, machine learning and big data analytics with a major update to its object- and file-based storage and data management software.

The San Francisco software vendor’s roots lie in the storage, backup and archive of massive amounts of unstructured data on commodity servers running a commercially supported version of OpenStack Swift. But SwiftStack has steadily expanded its reach over the last eight years, and its 7.0 update takes aim at the new scale-out storage and data management architecture the company claims is necessary for AI, machine learning and analytics workloads.

SwiftStack said it worked with customers to design clusters that scale linearly to handle multiple petabytes of data and support throughput of more than 100 GB per second. That allows it to handle workloads such as autonomous vehicle applications that feed data into GPU-based servers.

Marc Staimer, president of Dragon Slayer Consulting, said throughput of 100 GB per second is “really fast” for any type of storage and “incredible” for an object-based system. He said the fastest NVMe system tests at 120 GB per second, but it can scale only to about a petabyte.

“It’s not big enough, and NVMe flash is extremely costly. That doesn’t fit the AI [or machine learning] market,” Staimer said.

This is the second object storage product launched this week with speed not normally associated with object storage. NetApp unveiled an all-flash StorageGrid array Tuesday at its Insight user conference.

Staimer said SwiftStack’s high-throughput “parallel object system” would put the company into competition with parallel file system players such as DataDirect Networks, Panasas and IBM, with its Spectrum Scale, but at a much lower cost.

New ProxyFS Edge

SwiftStack plans to introduce a new containerized ProxyFS Edge software component next year to give remote applications a local file system mount for data, rather than having to connect through a network file-serving protocol such as NFS or SMB. SwiftStack spent about 18 months creating a new API and software stack to extend its ProxyFS to the edge.

Founder and chief product officer Joe Arnold said SwiftStack wanted to utilize the scale-out nature of its storage back end and enable a high number of concurrent connections to go in and out of the system to send data. ProxyFS Edge will allow each cluster node to be relatively stateless and cache data at the edge to minimize latency and improve performance.

SwiftStack 7 will also add 1space File Connector software in November to enable customers that build applications using the S3 or OpenStack Swift object API to access data in their existing file systems. The new File Connector is an extension to the 1space technology that SwiftStack introduced in 2018 to ease data access, migration and searches across public and private clouds. Customers will be able to apply 1space policies to file data to move and protect it.

Arnold said the 1space File Connector could be especially helpful for media companies and customers building software-as-a-service applications that are transitioning from NAS systems to object-based storage.

“Most sources of data produce files today and the ability to store files in object storage, with its greater scalability and cost value, makes the [product] more valuable,” said Randy Kerns, a senior strategist and analyst at Evaluator Group.

Kerns added that SwiftStack’s focus on the developing AI area is a good move. “They have been associated with OpenStack, and that is not perceived to be a positive and colors its use in larger enterprise markets,” he said.

AI architecture

A new SwiftStack AI architecture white paper offers guidance to customers building out systems that use popular AI, machine learning and deep learning frameworks, GPU servers, 100 Gigabit Ethernet networking, and SwiftStack storage software.

“They’ve had a fair amount of success partnering with Nvidia on a lot of the machine learning projects, and their software has always been pretty good at performance — almost like a best-kept secret — especially at scale, with parallel I/O,” said George Crump, president and founder of Storage Switzerland. “The ability to ratchet performance up another level and get the 100 GBs of bandwidth at scale fits perfectly into the machine learning model where you’ve got a lot of nodes and you’re trying to drive a lot of data to the GPUs.”

SwiftStack noted distinct differences between the architectural approaches that customers take with archive use cases versus newer AI or machine learning workloads. An archive customer might use 4U or 5U servers, each equipped with 60 to 90 drives, and 10 Gigabit Ethernet networking. By contrast, one machine learning client clustered a larger number of lower-horsepower 1U servers, each with fewer drives and a 100 Gigabit Ethernet network interface card, for high bandwidth, Arnold said.

An optional new SwiftStack Professional Remote Operations (PRO) paid service is now available to help customers monitor and manage SwiftStack production clusters. SwiftStack PRO combines software and professional services.


For Sale – HP EliteBook 1030 G3 | i7-8550u, 16gb, 120hz, Touch – Convertible, 256gb, 4G LTE

Selling a brand-new, top-notch laptop.
It’s a brilliant machine, but unfortunately I miss the TrackPoint of ThinkPads.

Specs:

  • i7-8650U, 16GB RAM, 256GB NVMe SSD, 13″ SureView 1080p IPS Touchscreen 120Hz, Harman Kardon speakers, 4G LTE
  • Brand new condition, 3 years warranty, no original box. *
  • £750
Location
London
Price and currency
750
Delivery
Prefer collection
Prefer goods collected?
I prefer the goods to be collected
Advertised elsewhere?
Advertised elsewhere
Payment method
Paypal / Bank transfer

Last edited:


Azure Sentinel—the cloud-native SIEM that empowers defenders is now generally available

Machine learning enhanced with artificial intelligence (AI) holds great promise in addressing many of the global cyber challenges we see today. Together they give our cyber defenders the ability to identify, detect, and block malware almost instantaneously, and they give security admins the ability to deconflict tasks, separating the signal from the noise so they can prioritize the most critical work. That is why today I’m pleased to announce that Azure Sentinel, a cloud-native SIEM that provides intelligent security analytics at cloud scale for enterprises of all sizes and workloads, is now generally available.

Our goal has remained the same since we first launched Microsoft Azure Sentinel in February: empower security operations teams to help enhance the security posture of our customers. Traditional Security Information and Event Management (SIEM) solutions have not kept pace with digital change. I commonly hear from customers that they’re spending more time on deployment and maintenance of SIEM solutions, which leaves them unable to properly handle the volume of data or the agility of adversaries.

Recent research tells us that 70 percent of organizations continue to anchor their security analytics and operations with SIEM systems,1 and 82 percent are committed to moving large volumes of applications and workloads to the public cloud.2 Security analytics and operations technologies must lean in and help security analysts deal with the complexity, pace, and scale of their responsibilities. To accomplish this, 65 percent of organizations are leveraging new technologies for process automation and orchestration, while 51 percent are adopting security analytics tools featuring machine learning algorithms.3 This is exactly why we developed Azure Sentinel: a SIEM reinvented in the cloud to address the modern challenges of security analytics.

Learning together

When we kicked off the public preview for Azure Sentinel, we were excited to learn and gain insight into the unique ways Azure Sentinel was helping organizations and defenders on a daily basis. We worked with our partners all along the way; listening, learning, and fine-tuning as we went. With feedback from 12,000 customers and more than two petabytes of data analysis, we were able to examine and dive deep into a large, complex, and diverse set of data. All of which had one thing in common: a need to empower their defenders to be more nimble and efficient when it comes to cybersecurity.

Our work with RapidDeploy offers one compelling example of how Azure Sentinel is accomplishing this complex task. RapidDeploy creates cloud-based dispatch systems that help first responders act quickly to protect the public. There’s a lot at stake, and the company’s cloud-native platform must be secure against an array of serious cyberthreats. So when RapidDeploy implemented a SIEM system, it chose Azure Sentinel, one of the world’s first cloud-native SIEMs.

Microsoft recently sat down with Alex Kreilein, Chief Information Security Officer at RapidDeploy. Here’s what he shared: “We build a platform that helps save lives. It does that by reducing incident response times and improving first responder safety by increasing their situational awareness.”

Now RapidDeploy uses the complete visibility, automated responses, fast deployment, and low total cost of ownership in Azure Sentinel to help it safeguard public safety systems. “With many SIEMs, deployment can take months,” says Kreilein. “Deploying Azure Sentinel took us minutes—we just clicked the deployment button and we were done.”

Learn even more about our work with RapidDeploy by checking out the full story.

Another great example of a company finding results with Azure Sentinel is ASOS. As one of the world’s largest online fashion retailers, ASOS knows they’re a prime target for cybercrime. The company has a large security function spread across five teams and two sites—but in the past, it was difficult for ASOS to gain a comprehensive view of cyberthreat activity. Now, using Azure Sentinel, ASOS has created a bird’s-eye view of everything it needs to spot threats early, allowing it to proactively safeguard its business and its customers. And as a result, it has cut issue resolution times in half.

“There are a lot of threats out there,” says Stuart Gregg, Cyber Security Operations Lead at ASOS. “You’ve got insider threats, account compromise, threats to our website and customer data, even physical security threats. We’re constantly trying to defend ourselves and be more proactive in everything we do.”

Already using a range of Azure services, ASOS identified Azure Sentinel as a platform that could help it quickly and easily unite its data. This includes security data from Azure Security Center and Azure Active Directory (Azure AD), along with data from Microsoft 365. The result is a comprehensive view of its entire threat landscape.

“We found Azure Sentinel easy to set up, and now we don’t have to move data across separate systems,” says Gregg. “We can literally click a few buttons and all our security solutions feed data into Azure Sentinel.”

Learn more about how ASOS has benefitted from Azure Sentinel.

RapidDeploy and ASOS are just two examples of how Azure Sentinel is helping businesses process data and telemetry into actionable security alerts for investigation and response. We have an active GitHub community of preview participants, partners, and even Microsoft’s own security experts who are sharing new connectors, detections, hunting queries, and automation playbooks.

With these design partners, we’ve continued our innovation in Azure Sentinel. It starts with the ability to connect to any data source, whether in Azure, on-premises, or even other clouds. We continue to add new connectors to different sources and more machine learning-based detections. Azure Sentinel will also integrate with the Azure Lighthouse service, which will give service providers and enterprise customers the ability to view Azure Sentinel instances across different tenants in Azure.

Secure your organization

Now that Azure Sentinel has moved out of public preview and is generally available, there’s never been a better time to see how it can help your business. Traditional on-premises SIEMs require a combination of infrastructure costs and software costs, all paired with annual commitments or inflexible contracts. We are removing those pain points, since Azure Sentinel is a cost-effective, cloud-native SIEM with predictable billing and flexible commitments.

Infrastructure costs are reduced since you automatically scale resources as you need, and you only pay for what you use. Or you can save up to 60 percent compared to pay-as-you-go pricing by taking advantage of capacity reservation tiers. You receive predictable monthly bills and the flexibility to change capacity tier commitments every 31 days. On top of that, bringing in data from Office 365 audit logs, Azure activity logs and alerts from Microsoft Threat Protection solutions doesn’t require any additional payments.

Please join me for the Azure Security Expert Series where we will focus on Azure Sentinel on Thursday, September 26, 2019, 10–11 AM Pacific Time. You’ll learn more about these innovations and see real use cases on how Azure Sentinel helped detect previously undiscovered threats. We’ll also discuss how Accenture and RapidDeploy are using Azure Sentinel to empower their security operations team.

Get started today with Azure Sentinel!

1 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019
2 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019
3 Source: ESG Research Survey, Security Analytics and Operations: Industry Trends in the Era of Cloud Computing, September 2019

Author: Microsoft News Center

IT pros look to VMware’s GPU acceleration projects to kick-start AI

SAN FRANCISCO — IT pros who need to support emerging AI and machine learning workloads see promise in a pair of developments VMware previewed this week to bolster support for GPU-accelerated computing in vSphere.

GPUs are uniquely suited to handle the massive processing demands of AI and machine learning workloads, and chipmakers like Nvidia Corp. are now developing and promoting GPUs specifically designed for this purpose.

A previous partnership with Nvidia introduced capabilities that allowed VMware customers to assign GPUs to VMs, but not more than one GPU per VM. The latest development, which Nvidia calls its Virtual Compute Server, allows customers to assign multiple virtual GPUs to a VM.

Nvidia’s Virtual Compute Server also works with VMware’s vMotion capability, allowing IT pros to live migrate a GPU-accelerated VM to another physical host. The companies have also extended this partnership to VMware Cloud on AWS, allowing customers to access Amazon Elastic Compute Cloud bare-metal instances with Nvidia T4 GPUs.

VMware gave the Nvidia partnership prime time this week at VMworld 2019, playing a prerecorded video of Nvidia CEO Jensen Huang talking up the companies’ combined efforts during Monday’s general session. However, another GPU acceleration project also caught the eye of some IT pros who came to learn more about VMware’s recent acquisition of Bitfusion.io Inc.

VMware acquired Bitfusion earlier this year and announced its intent to integrate the startup’s GPU virtualization capabilities into vSphere. Bitfusion’s FlexDirect connects GPU-accelerated servers over the network and provides the ability to assign GPUs to workloads in real time. The company compares its GPU virtualization approach to network-attached storage because it disaggregates GPU resources and makes them accessible to any server on the network as a pool of resources.

The software’s unique approach also allows customers to assign just portions of a GPU to different workloads. For example, an IT pro might assign 50% of a GPU’s capacity to one VM and 50% to another VM. This approach can allow companies to use their investments in expensive GPU hardware more efficiently, company executives said. FlexDirect also offers extensions to support field-programmable gate arrays and application-specific integrated circuits.

“I was really happy to see they’re doing this at the network level,” said Kevin Wilcox, principal virtualization architect at Fiserv, a financial services company. “We’ve struggled with figuring out how to handle the power and cooling requirements for GPUs. This looks like it’ll allow us to place our GPUs in a segmented section of our data center that can handle those power and cooling needs.”

AI demand surging

Many companies are only beginning to research and invest in AI capabilities, but interest is growing rapidly, said Gartner analyst Chirag Dekate.

“By end of this year, we anticipate that one in two organizations will have some sort of AI initiative, either in the [proof-of-concept] stage or the deployed stage,” Dekate said.

In many cases, IT operations professionals are being asked to move quickly on a variety of AI-focused projects, a trend echoed by multiple VMworld attendees this week.

“We’re just starting with AI, and looking at GPUs as an accelerator,” said Martin Lafontaine, a systems architect at Netgovern, a software company that helps customers comply with data locality compliance laws.

“When they get a subpoena and have to prove where [their data is located], our solution uses machine learning to find that data. We’re starting to look at what we can do with GPUs,” Lafontaine said.

Is GPU virtualization the answer?

Recent efforts to virtualize GPU resources could open the door to broader use of GPUs for AI workloads, but potential customers should pay close attention to how virtualized deployments benchmark against bare-metal deployments in the coming years, Gartner’s Dekate said.

So far, he has not encountered a customer using these GPU virtualization tactics for deep learning workloads at scale. Today, most organizations still run these deep learning workloads on bare-metal hardware.

 “The future of this technology that Bitfusion is bringing will be decided by the kind of overheads imposed on the workloads,” Dekate said, referring to the additional compute cycles often required to implement a virtualization layer. “The deep learning workloads we have run into are extremely compute-bound and memory-intensive, and in our prior experience, what we’ve seen is that any kind of virtualization tends to impose overheads. … If the overheads are within acceptable parameters, then this technology could very well be applied to AI.”


Why You Should Be Using VM Notes in PowerShell

One of the nicer Hyper-V features is the ability to maintain notes for each virtual machine. Most of my VMs are for testing, and I’m the only one who accesses them, so I often record items like an admin password or when the VM was last updated. Of course, you would never store passwords in a production environment, but you might like to record when a VM was last modified and by whom. For managing a single VM, it isn’t that big a deal to use Hyper-V Manager. But when it comes to managing notes for multiple VMs, PowerShell is a better solution.

In this post, we’ll show you how to manage VM Notes with PowerShell and I think you’ll get the answer to why you should be using VM Notes as well. Let’s take a look.

Using Set-VM

The Hyper-V module includes a command called Set-VM which has a parameter that allows you to set a note.
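As a quick sketch (the VM name SRV1 here is a placeholder), setting and then displaying a note looks like this:

```powershell
# Set a note on a single VM, then display it
Set-VM -VMName SRV1 -Notes "Last updated $(Get-Date) by $env:USERNAME"
Get-VM -VMName SRV1 | Select-Object Name, Notes
```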

Displaying a Hyper-V VM note

As you can see, it works just fine. Even at scale.
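Because Set-VM accepts pipeline input, the same idea scales to several machines at once; the VM names below are placeholders:

```powershell
# Stamp the same note on a group of VMs in one pipeline
Get-VM -Name SRV1, SRV2, DOM1 |
    Set-VM -Notes "Patched $(Get-Date -Format d) by $env:USERNAME"
```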

Setting notes on multiple VMs

But there are some limitations. First off, there is no way to append to existing notes. You could retrieve any existing notes, build a new value in your script, and then call Set-VM. To clear a note, you run Set-VM with a value of “” for -Notes. That’s not exactly intuitive. I decided to find a better way.
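To illustrate that workaround, this sketch (the VM name is a placeholder) reads the current note, concatenates, and writes the combined value back:

```powershell
# Manual "append": read the existing note, concatenate, write it back
$vm = Get-VM -VMName SRV1
Set-VM -VM $vm -Notes "$($vm.Notes)`nChecked $(Get-Date)"

# Clearing a note uses an empty string - not exactly intuitive
Set-VM -VMName SRV1 -Notes ""
```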

Diving Deep into WMI

Hyper-V stores much of its configuration in WMI (Windows Management Instrumentation). You’ll notice that many of the Hyper-V cmdlets have parameters for CimSessions. But you can also dive into these classes, which live in the root/virtualization/v2 namespace. Many of the classes are prefixed with Msvm_.

Getting Hyper-V CIM Classes with PowerShell

After a bit of research and digging around in these classes I learned that to update a virtual machine’s settings, you need to get an instance of msvm_VirtualSystemSettingData, update it and then invoke the ModifySystemSettings() method of the msvm_VirtualSystemManagementService class. Normally, I would do all of this with the CIM cmdlets like Get-CimInstance and Invoke-CimMethod. If I already have a CIMSession to a remote Hyper-V host why not re-use it?

But there was a challenge. The ModifySystemSettings() method needs a parameter: basically a text version of the Msvm_VirtualSystemSettingData object. However, the text needs to be in a specific format. WMI has a way to format the text, which you’ll see in a moment. Unfortunately, there is no technique using the CIM cmdlets to format the text. Whatever Set-VM is doing under the hood is above my pay grade. Let me walk you through this using Get-WmiObject.

First, I need to get the settings data for a given virtual machine.
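A sketch of that step, assuming a VM named SRV1 on the local host; filtering on VirtualSystemType keeps snapshot settings out of the results:

```powershell
# Get the realized (current) settings object for the VM
$data = Get-WmiObject -Namespace root/virtualization/v2 `
    -Class Msvm_VirtualSystemSettingData `
    -Filter "ElementName='SRV1' AND VirtualSystemType='Microsoft:Hyper-V:System:Realized'"
```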

This object has all of the virtual machine settings.

I can easily assign a new value to the Notes property.

$data.Notes = "Last updated $(Get-Date) by $env:USERNAME"

At this point, I’m not doing much else than what Set-VM does. But if I wanted to append, I could get the existing note, add my new value and set a new value.
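The append variant is a one-liner, since the existing note is already in $data:

```powershell
# Append to whatever is already in the Notes property
$data.Notes = "$($data.Notes)`nLast updated $(Get-Date) by $env:USERNAME"
```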

At this point, I need to turn this into the proper text format. This is the part that I can’t do with the CIM cmdlets.
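The WMI object can serialize itself into the format that ModifySystemSettings() expects, via its GetText() method:

```powershell
# Serialize the settings object to CIM DTD 2.0 formatted text
$text = $data.GetText([System.Management.TextFormat]::CimDtd20)
```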

To commit I need the system management service object.
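Getting it is a straightforward query; there is a single management service instance per host:

```powershell
# The host's virtual system management service
$vsms = Get-WmiObject -Namespace root/virtualization/v2 `
    -Class Msvm_VirtualSystemManagementService
```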

I need to invoke the ModifySystemSettings() method which requires a little fancy PowerShell work.
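Calling the method with the serialized settings text looks like this:

```powershell
# Commit the modified settings and check the result
$result = $vsms.ModifySystemSettings($text)
$result.ReturnValue
```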

Invoking the WMI method with PowerShell

A return value of 0 indicates success.

Verifying the change

The Network Matters

It isn’t especially difficult to wrap these steps into a PowerShell function. But here’s the challenge. Using Get-WmiObject with a remote server relies on legacy networking protocols. This is why Get-CimInstance is preferred and Get-WmiObject should be considered deprecated. So what to do? The answer is to run the WMI commands over a PowerShell remoting session. This means I can create a PSSession to the remote server, or use something like Invoke-Command. The connection uses WSMan and all the features of PowerShell remoting. In that session, on the remote machine, I can run all the WMI commands I want. The WMI calls themselves need no network connection because they run locally on the remote host.
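Putting it all together, the whole sequence can run inside Invoke-Command; the host name HV1 and VM name SRV1 here are placeholders:

```powershell
# All WMI calls execute locally on HV1 over a WSMan remoting session
Invoke-Command -ComputerName HV1 -ScriptBlock {
    $data = Get-WmiObject -Namespace root/virtualization/v2 `
        -Class Msvm_VirtualSystemSettingData `
        -Filter "ElementName='SRV1' AND VirtualSystemType='Microsoft:Hyper-V:System:Realized'"
    $data.Notes = "$($data.Notes)`nUpdated $(Get-Date) by $env:USERNAME"

    $vsms = Get-WmiObject -Namespace root/virtualization/v2 `
        -Class Msvm_VirtualSystemManagementService
    $vsms.ModifySystemSettings($data.GetText([System.Management.TextFormat]::CimDtd20))
}
```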

The end result is that I get the best of both worlds – WMI commands doing what I need over a PowerShell remoting session. By now, this might seem a bit daunting. Don’t worry. I made it easy.

Set-VMNote

In my new PSHyperVTools module, I added a command called Set-VMNote that does everything I’ve talked about. You can install the module from the PowerShell Gallery. If you are interested in the sausage-making, you can view the source code on GitHub at https://github.com/jdhitsolutions/PSHyperV/blob/master/functions/public.ps1. The function should make it easier to manage notes and supports alternate credentials.

Set-VMNote help

Now I can create new notes.

Creating new notes

Or easily append.

Appending notes

It might be hard to tell from this. Here’s what it looks like in the Hyper-V manager.

Verifying the notes

Most of the time the Hyper-V PowerShell cmdlets work just fine and meet my needs. But if they don’t, that’s a great thing about PowerShell – you can just create your own solution! And as you can probably guess, I will continue to create and share my own solutions right here.

Author: Jeffery Hicks

New machine learning model sifts through the good to unearth the bad in evasive malware – Microsoft Security

We continuously harden machine learning protections against evasion and adversarial attacks. One of the latest innovations in our protection technology is the addition of a class of hardened malware detection machine learning models called monotonic models to Microsoft Defender ATP‘s Antivirus.

Historically, detection evasion has followed a common pattern: attackers would build new versions of their malware and test them offline against antivirus solutions. They’d keep making adjustments until the malware can evade antivirus products. Attackers then carry out their campaign knowing that the malware won’t initially be blocked by AV solutions, which are then forced to catch up by adding detections for the malware. In the cybercriminal underground, antivirus evasion services are available to make this process easier for attackers.

Microsoft Defender ATP’s Antivirus has significantly advanced in becoming resistant to attacker tactics like this. A sizeable portion of the protection we deliver is powered by machine learning models hosted in the cloud. The cloud protection service breaks attackers’ ability to test and adapt to our defenses in an offline environment, because attackers must either forgo testing, or test against our defenses in the cloud, where we can observe them and react even before they begin.

Hardening our defenses against adversarial attacks doesn’t end there. In this blog we’ll discuss a new class of cloud-based ML models that further harden our protections against detection evasion.

Most machine learning models are trained on a mix of malicious and clean features. Attackers routinely try to throw these models off balance by stuffing clean features into malware.

Monotonic models are resistant against adversarial attacks because they are trained differently: they only look for malicious features. The magic is this: Attackers can’t evade a monotonic model by adding clean features. To evade a monotonic model, an attacker would have to remove malicious features.

Monotonic models explained

Last summer, researchers from UC Berkeley (Incer, Inigo, et al, “Adversarially robust malware detection using monotonic classification”, Proceedings of the Fourth ACM International Workshop on Security and Privacy Analytics, ACM, 2018) proposed adding monotonic constraints to malware detection machine learning models to make them robust against adversaries. Simply put, the technique only allows the machine learning model to leverage malicious features when considering a file; it’s not allowed to use any clean features.

Figure 1. Features used by a baseline versus a monotonic constrained logistic regression classifier. The monotonic classifier does not use cleanly-weighted features so that it’s more robust to adversaries.

Inspired by the academic research, we deployed our first monotonic logistic regression models to Microsoft Defender ATP cloud protection service in late 2018. Since then, they’ve played an important part in protecting against attacks.

Figure 2 below illustrates the production performance of the monotonic classifiers versus the baseline unconstrained model. As expected, monotonic-constrained models detect less malware overall than classic models. However, they can detect malware attacks that would otherwise have been missed because of clean features.

Figure 2. Malware detection machine learning classifiers comparing the unconstrained baseline classifier versus the monotonic constrained classifier in customer protection.

The monotonic classifiers don’t replace baseline classifiers; they run in addition to the baseline and add protection. We combine all our classifiers using stacked classifier ensembles, and monotonic classifiers add significant value because of the unique classification they provide.

How Microsoft Defender ATP uses monotonic models to stop adversarial attacks

One common way for attackers to add clean features to malware is to digitally code-sign malware with trusted certificates. Malware families like ShadowHammer, Kovter, and Balamid are known to abuse certificates to evade detection. In many of these cases, the attackers impersonate legitimate registered businesses to defraud certificate authorities into issuing them trusted code-signing certificates.

LockerGoga, a strain of ransomware that’s known for being used in targeted attacks, is another example of malware that uses digital certificates. LockerGoga emerged in early 2019 and has been used by attackers in high-profile campaigns that targeted organizations in the industrial sector. Once attackers are able to breach a target network, they use LockerGoga to encrypt enterprise data en masse and demand ransom.

Figure 3. LockerGoga variant digitally code-signed with a trusted CA

When Microsoft Defender ATP encounters a new threat like LockerGoga, the client sends a featurized description of the file to the cloud protection service for real-time classification. An array of machine learning classifiers processes the features describing the content, including whether attackers had digitally code-signed the malware with a trusted code-signing certificate that chains to a trusted CA. By ignoring certificates and other clean features, monotonic models in Microsoft Defender ATP can correctly identify attacks that otherwise would have slipped through defenses.

Very recently, researchers demonstrated an adversarial attack that appends a large volume of clean strings from a computer game executable to several well-known malware and credential dumping tools – essentially adding clean features to the malicious files – to evade detection. The researchers showed how this technique can successfully impact machine learning prediction scores so that the malware files are not classified as malware. The monotonic model hardening that we’ve deployed in Microsoft Defender ATP is key to preventing this type of attack, because, for a monotonic classifier, adding features to a file can only increase the malicious score.

Given how they significantly harden defenses, monotonic models are now standard components of machine learning protections in Microsoft Defender ATP‘s Antivirus. One of our monotonic models uniquely blocks malware on an average of 200,000 distinct devices every month. We now have three different monotonic classifiers deployed, protecting against different attack scenarios.

Monotonic models are just the latest enhancements to Microsoft Defender ATP’s Antivirus. We continue to evolve machine learning-based protections to be more resilient to adversarial attacks. More effective protection against malware and other threats on endpoints increases defense across the entire Microsoft Threat Protection stack. By unifying and enabling signal-sharing across Microsoft’s security services, Microsoft Threat Protection secures identities, endpoints, email and data, apps, and infrastructure.

Geoff McDonald (@glmcdona), Microsoft Defender ATP Research team
with Taylor Spangler, Windows Data Science team


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Defender ATP community.

Follow us on Twitter @MsftSecIntel.

Author: Microsoft News Center

For Sale – ASUS ROG ZEPHYRUS GTX 1070 Max-Q G-SYNC Gaming Laptop

Hi Guys

Selling my Asus ROG Zephyrus GTX 1070 Max-Q G-SYNC Gaming Laptop

Incredible machine; however, I’ve decided to sell since investing in a 4K TV & PS4 for comfier gaming!

Purchased in April of this year from Scan, comes with box and all that good stuff.

Full specs are listed below.

Pictures are included below; I will provide better images later today.

Model GX501VS-GZ058T Zephyrus
Processor
CPU Type Intel Core i7 7700HQ
CPU Cores Quad Core
CPU Threads 8
CPU Speed 2.8 GHz
CPU Boost 3.8 GHz
Unlocked/Overclockable CPU No
Display
Screen Size 15.6″
Screen Type LCD (LED Backlit), Wide Viewing Angle, Anti-Glare
Touchscreen No
Resolution 1920×1080 (Full HD)
Refresh Rate 120 Hz
Response Time
NVIDIA G-Sync Technology Yes
AMD FreeSync Technology No
RAM Memory
Memory 16GB (8GB On-Board + 1x 8GB In-Slot)
Memory Type On-Board + DDR4 SO-DIMM
Memory Speed DDR4 – 2400
Maximum Memory Supported 24GB (8GB On-Board + 1x 16GB In-Slot)
Storage Drives
HDD Capacity N/A
SSD Capacity 512 GB
Storage Type SSD (M.2 PCIe NVMe)
Graphics
Graphics Chipset NVIDIA GeForce GTX 1070
Graphics Memory 8GB
Graphics Memory Type GDDR5
GPU Cores/Streams/Execution Units 2048
Graphics Core Clock
Graphics Boost Clock
Graphics Memory Clock
Graphics Memory Bus 256 Bit
Multi-GPU NVIDIA SLI No
VR Ready Yes
NVIDIA Max-Q Design Yes
I/O
I/O

  • 1 x 2-in-1 Audio Jack
  • 1 x DC-In Jack
  • 1 x HDMI 2.0
  • 1 x Kensington Lock Slot
  • 4 x USB 3.1 Gen1 Type-A
  • 1 x USB 3.1 Type-C/Thunderbolt 3
  • 1 x WiFi 802.11ac/BT4.1 Module

Other Laptop Specifications
Optical Drive N/A
Card Reader No
Bluetooth Yes
Webcam Yes
Built-in microphone Yes
Keyboard Type/Language Chiclet / UK / RGB LED
Battery 4-cell
Security Features
Colour Black/ Metal
Operating System Windows 10 Home (64 bit)
Accessories Included in Box
Notebook Dimensions 379 x 16.9 ~ 17.8 x 262 mm (WxHxD)
Weight (Inc. Battery) 2.24 kg

IMG_6664.JPG

IMG_6667.JPG

IMG_6668.JPG

Price and currency: 1400
Delivery: Delivery cost is included within my country
Payment method: Paypal, Bank Transfer or Cash
Location: WIGAN
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

GPU-buffed servers advance Cisco’s AI agenda

Cisco Systems is the latest hardware vendor to offer gear tuned for AI and machine learning-based workloads.

Competition to support AI and machine learning workloads continues to heat up. Earlier this year, archrivals Dell Technologies Inc., Hewlett Packard Enterprise and IBM rolled out servers designed to optimize the performance of AI and machine learning workloads. Many smaller vendors are chasing this market as well.

“This is going to be a highly competitive field going forward with everyone having their own solution,” said Jean Bozman, vice president and principal analyst at Hurwitz & Associates. “IT organizations will have to figure out, with the help of third-party organizations, how to best take advantage of these new technologies.”

Cisco AI plan taps Nvidia GPUs

The Cisco UCS C480 ML M5 rack server, the company’s first tuned to run AI workloads, contains Nvidia Tesla V100 Tensor Core GPUs and NVLink to boost performance, and works with neural networks and large data sets to train computers to carry out complex tasks, according to the company. It works with Cisco Intersight, introduced last year, which allows IT professionals to automate policies and operations across their infrastructure from the cloud.

This Cisco AI server will ship sometime during this year’s fourth quarter. Cisco Services will offer technical support for a range of AI and machine learning capabilities.

Cisco intends to target several industries with the new system. Financial services companies can use it for fraud detection and algorithmic trading, while healthcare companies can enlist it to deliver insights and diagnostics, improve medical image classification, and speed drug discovery and research.

Server hardware makers place bets on AI

The market for AI and machine learning, particularly the former, represents a rich opportunity for systems vendors over the next year or two. Only 4% of CIOs said they have implemented AI projects, according to a Gartner study earlier this year. However, some 46% have blueprints in place to implement such projects, and many of them have kicked off pilot programs.

[AI and machine learning-based servers are] going to be a highly competitive field going forward with everyone having their own solution.
Jean Bozman, vice president and principal analyst, Hurwitz & Associates

AI and machine learning offer IT shops more efficient ways to address complex issues, but they will significantly affect underlying infrastructure and processes. Larger IT shops must invest heavily in training and educating existing employees in how to use the technologies, the Gartner report stated. They must also upgrade existing infrastructure before deploying production-ready AI and machine learning workloads. Enterprises will need to retool infrastructure to handle data more efficiently.

“All vendors will have the same story about data being your most valuable asset and how they can handle it efficiently,” Bozman said. “But to get at [the data] you first have to break down the data silos, label the data to get at it efficiently, and add data protection.”

Only after this prep work can IT shops take full advantage of AI-powered hardware-software tools.

“No matter how easy some of these vendors say it is to implement their integrated solutions, IT [shops] have more than a little homework to do to make it all work,” one industry analyst said. “Then you are ready to get the best results from any AI-based data analytics.”
