For Sale – Lenovo ThinkCentre M93p, i5-4690K 3.5-3.9 GHz, Blu-ray, Windows 10 Pro

The ThinkCentre M93 boasts the latest processors and innovative productivity features; the M93p adds Intel® vPro™ to optimize remote manageability.

Intel Core i5-4690K 3.5 GHz quad-core CPU (the "K" denotes an unlocked, overclockable part), max turbo frequency 3.9 GHz

Onboard HD Audio
Intel HD Graphics 4600 with up to 2 GB shared memory (integrated on the CPU)
Onboard 2 x DisplayPort

This is a very fast, capable PC. It is actually a business workstation, described by Lenovo as cutting-edge computing for large enterprises.

I bought this to keep for myself, hence some of the upgrades. I swapped out the PSU to accommodate a graphics card I already had, but I decided to get an SFF M93p instead, so this one is now surplus to requirements.


Data silos and culture lead to data transformation challenges

It’s not as easy as it should be for many users to make full use of data for analytics and business intelligence, due to a number of data transformation challenges.

Data challenges arise not only in the form of data transformation problems, but also with broader strategic concerns about how data is collected and used.

Culture and data strategy within organizations are key causal factors of data transformation challenges, said Gartner analyst Mike Rollings.

“Making data available in various forms and to the right people at the right time has always been a challenge,” Rollings said. “The bigger barrier to making data available is culture.”

The path to overcoming data challenges is to create a culture of data and fully embrace the idea of being a data-driven enterprise, according to Rollings.

Rollings has been busy recently talking about the challenges of data analytics, including taking part in a session at the Gartner IT Symposium/Xpo, held Oct. 20-24 in Orlando, where he also detailed some of the findings from the Gartner Chief Data Officer (CDO) survey.

Among the key points in the study is that most organizations have not included data and analytics as part of documented corporate strategies.

Making data available in various forms and to the right people at the right time has always been a challenge.
Mike Rollings, Analyst, Gartner

“The primary challenge is that data and data insights are not a central part of business strategy,” Rollings said.

Often, data and data analytics are actually just byproducts of other activities, rather than being the core focus of a formal data-driven architecture, he said. In Rollings’ view, data and analytics should be considered assets that can be measured, managed and monetized.

“When we talk about measuring and monetizing, we’re really saying, do you have an intentional process to even understand what you have,” he said. “And do you have an intentional process to start to evaluate the opportunities that may exist with data, or with analysis that could fundamentally change the business model, customer experience and the way decisions are made.”

Data transformation challenges

The struggle to make the data useful is a key challenge, said Hoshang Chenoy, senior manager of marketing analytics at San Francisco-based LiveRamp, an identity resolution software vendor.

Among other data transformation challenges is that many organizations still have siloed deployments, where data is collected and remains in isolated segments.

“In addition to having siloed data within an organization, I think the biggest challenge for enterprises to make their data ready for analytics are the attempts at pulling in data that has previously never been accessed, whether it’s because the data exists in too many different formats or for privacy and security reasons,” Chenoy said. “It can be a daunting task to start on a data management project but with the right tech, team and tools in place, enterprises should get started sooner rather than later.”

How to address the challenges

The early promise of data warehouse and data lake technologies was to make it easier to use data.

But despite technology advances, there’s still a long way to go to solving data transformation challenges, said Ed Thompson, CTO of Matillion, a London-based data integration vendor that recently commissioned a survey on data integration problems.

The survey of 200 IT professionals found that 90% of organizations see making data available for insights as a barrier. The study also found a rapid rate of data growth of up to 100% a month at some organizations.

When an executive team starts to get good quality data, what typically comes back is a lot of questions that require more data. The continuous need to ask and answer questions is the cycle that is driving data demand.

“The more data that organizations have, the more insight that they can gain from it, the more they want, and the more they need,” Thompson said.


Recovering from ransomware soars to the top of DR concerns

The rise of ransomware has had a significant effect on modern disaster recovery, shaping the way we protect data and plan a recovery. It does not bring the same physical destruction as a natural disaster, but the effects within an organization — and on its reputation — can be lasting.

It’s no wonder that recovering from ransomware has become such a priority in recent years.

It’s hard to imagine a time when ransomware wasn’t a threat, but while cyberattacks date back as far as the late 1980s, ransomware in particular has had a relatively recent rise in prominence. Ransomware is a type of malware attack that can be carried out in a number of ways, but generally the “ransom” part of the name comes from one of the ways attackers hope to profit from it. The victim’s data is locked, often behind encryption, and held for ransom until the attacker is paid. Assuming the attacker is telling the truth, the data will be decrypted and returned. Again, this assumes that the anonymous person or group that just stole your data is being honest.

“Just pay the ransom” is rarely the first piece of advice an expert will offer. Not only do you not know if payment will actually result in your computer being unlocked, but developments in backup and recovery have made recovering from ransomware without paying the attacker possible. While this method of cyberattack seems specially designed to make victims panic and pay up, doing so does not guarantee you’ll get your data back or won’t be asked for more money.

Disaster recovery has changed significantly in the 20 years TechTarget has been covering technology news, but the rapid rise of ransomware to the top of the potential disaster pyramid is one of the more remarkable changes to occur. According to a U.S. government report, by 2016 4,000 ransomware attacks were occurring daily. This was a 300% increase over the previous year. Ransomware recovery has changed the disaster recovery model, and it won’t be going away any time soon. In this brief retrospective, take a look back at the major attacks that made headlines, evolving advice and warnings regarding ransomware, and how organizations are fighting back.

In the news

The appropriately named WannaCry ransomware attack began spreading in May 2017, using a leaked National Security Agency exploit that targets Windows computers. WannaCry is a worm, which means it can spread without participation from the victims, unlike phishing attacks, which require action from the recipient to spread widely.

Ransomware recovery has changed the disaster recovery model, and it won’t be going away any time soon.

How big was the WannaCry attack? Affecting computers in as many as 150 countries, WannaCry is estimated to have caused hundreds of millions of dollars in damages. According to cyber risk modeling company Cyence, the total costs associated with the attack could be as high as $4 billion.

Rather than the price of the ransom itself, the biggest issue companies face is the cost of being down. Because so many organizations were infected with the WannaCry virus, news spread that those who paid the ransom were never given the decryption key, so most victims did not pay. However, many took a financial hit from the downtime the attack caused. Another major attack in 2017, NotPetya, cost Danish shipping giant A.P. Moller-Maersk hundreds of millions of dollars. And that’s just one victim.

In 2018, the city of Atlanta’s recovery from ransomware ended up costing more than $5 million, and the attack shut down several city departments for five days. In the Matanuska-Susitna borough of Alaska in 2018, 120 of 150 servers were affected by ransomware, and government workers resorted to using typewriters to stay operational. Whether it is on a global or local scale, the consequences of ransomware are clear.

Ransomware attacks
Ransomware attacks had a meteoric rise in 2016.

Taking center stage

Looking back, the massive increase in ransomware attacks between 2015 and 2016 signaled when ransomware really began to take its place at the head of the data threat pack. Experts not only began emphasizing the importance of backup and data protection against attacks, but planning for future potential recoveries. Depending on your DR strategy, recovering from ransomware could fit into your current plan, or you might have to start considering an overhaul.

By 2017, the ransomware threat was impossible to ignore. According to a 2018 Verizon Data Breach Report, 39% of malware attacks carried out in 2017 were ransomware, and ransomware had soared from being the fifth most common type of malware to number one.

Verizon malware report
According to the 2018 Verizon Data Breach Investigations Report, ransomware was the most prevalent type of malware attack in 2017.

Ransomware was not only becoming more prominent, but more sophisticated as well. Best practices for DR highlighted preparation for ransomware, and an emphasis on IT resiliency entered backup and recovery discussions. Protecting against ransomware became less about wondering what would happen if your organization was attacked, and more about what you would do when your organization was attacked. Ransomware recovery planning wasn’t just a good idea, it was a priority.

As a result of the recent epidemic, more organizations appear to be considering disaster recovery planning in general. As unthinkable as it may seem, many organizations have been reluctant to invest in disaster recovery, viewing it as something they might need eventually. This mindset is dangerous, and results in many companies not having a recovery plan in place until it’s too late.

Bouncing back

While ransomware attacks may feel like an inevitability — which is how companies should prepare — that doesn’t mean the end is nigh. Recovering from ransomware is possible, and with the right amount of preparation and help, it can be done.

The modern backup market is evolving in such a way that downtime is considered practically unacceptable, which bodes well for ransomware recovery. Having frequent backups available is a major element of recovering, and taking advantage of vendor offerings can give you a boost when it comes to frequent, secure backups.

Vendors such as Reduxio, Nasuni and Carbonite have developed tools aimed at ransomware recovery, and can have you back up and running without significant data loss within hours. Whether the trick is backdating, snapshots, cloud-based backup and recovery, or server-level restores, numerous tools out there can help with recovery efforts. Other vendors working in this space include Acronis, Asigra, Barracuda, Commvault, Datto, Infrascale, Quorum, Unitrends and Zerto.

Along with a wider array of tech options, more information about ransomware is available than in the past. This is particularly helpful with ransomware attacks, because the attacks in part rely on the victims unwittingly participating. Whether you’re looking for tips on protecting against attacks or recovering after the fact, a wealth of information is available.

The widespread nature of ransomware is alarming, but also provides first-hand accounts of what happened and what was done to recover after the attack. You may not know when ransomware is going to strike, but recovery is no longer a mystery.


How to manage Server Core with PowerShell

After you first install Windows Server 2019 and reboot, you might find something unexpected: a command prompt.

While you’re sure you didn’t select the Server Core option, Microsoft now makes it the default Windows Server OS deployment for its smaller attack surface and lower system requirements. While you might remember DOS commands, those are only going to get you so far. To deploy and manage Server Core, you need to build your familiarity with PowerShell to operate this headless flavor of Windows Server.

To help you on your way, you will want to build your knowledge of PowerShell, and you might start with the PowerShell Integrated Scripting Environment (ISE). PowerShell ISE offers a wealth of features for the novice PowerShell user, from auto-completion of commands to context-colored syntax that steps you through the scripting process. The problem is PowerShell ISE requires a GUI, or the “full” Windows Server. To manage Server Core, you have the command window and PowerShell in its raw form.

Start with the PowerShell basics

To start, type powershell to get into the environment, denoted by the PS before the C: prompt. A few basic DOS commands will work, but PowerShell is a different language. Before you can add features and roles, you need to set your IP and domain. It can be done in PowerShell, but this is laborious and requires a fair amount of typing. Instead, we can take a shortcut and use sconfig to complete the setup. After that, we can use PowerShell for additional administrative work.
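For those who do want to script the initial setup instead of using sconfig, the steps can be done in raw PowerShell. The following is a minimal sketch; every address, interface name and domain name below is an example value to replace with your own:

```powershell
# Sketch: initial Server Core network and domain setup without sconfig.
# All addresses, names and the domain are placeholder values.
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.1.50 `
    -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 192.168.1.10
Rename-Computer -NewName 'CORE01' -Force
Add-Computer -DomainName 'corp.example.com' -Restart  # prompts for domain credentials
```

This is exactly the kind of typing sconfig saves you, which is why the shortcut is recommended for a one-off setup.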

PowerShell uses a verb-noun format, called cmdlets, for its commands, such as Install-WindowsFeature or Get-Help. The verbs have predefined categories that are generally clear on their function. Some examples of PowerShell cmdlets are:

  • Install: Use this PowerShell verb to install software or some resource to a location or initialize an install process. This would typically be done to install a Windows feature such as Dynamic Host Configuration Protocol (DHCP).
  • Set: This verb modifies existing settings in Windows resources, such as adjusting networking or other existing settings. It also works to create the resource if it did not already exist.
  • Add: Use this verb to add a resource or setting to an existing feature or role. For example, this could be used to add a scope onto the newly installed DHCP service.
  • Get: This is a resource retriever for data or contents of a resource. You could use Get to present the resolution of the display and then use Set to change it.

To install DHCP to a Server Core deployment with PowerShell, use the following commands.

Install the service:

Install-WindowsFeature -Name 'dhcp'

Add a scope for DHCP (the address range below is an example to substitute with your own):

Add-DhcpServerv4Scope -Name "Office" -StartRange 192.168.1.100 -EndRange 192.168.1.200 -SubnetMask 255.255.255.0

Set the lease time (again, the scope ID is an example):

Set-DhcpServerv4Scope -ScopeId 192.168.1.0 -LeaseDuration 1.00:00:00

Check the DHCP IPv4 scope:

Get-DhcpServerv4Scope

Additional pointers for PowerShell newcomers

Each command has a purpose, which means you have to know the syntax, and that is the hardest part of learning PowerShell. Not knowing what you’re looking for can be very frustrating, but there is help. The Get-Help cmdlet displays the related commands for use with that function or role.

Part of the trouble for new PowerShell users is that memorizing all the commands can be overwhelming, but there is a shortcut. As you start to type a command, the Tab key auto-completes the PowerShell commands. For example, if you type Get-Help R and press the Tab key, PowerShell will cycle through the matching commands, such as Remove-DHCPServerInDC (see Figure 1). When you find the command you want and press Enter, PowerShell presents additional information for using that command. Get-Help even supports wildcards, so you could type Get-Help *dhcp* to get results for commands that contain that phrase.

Get-Help command
Figure 1. Use the Get-Help command to see the syntax used with a particular PowerShell cmdlet.
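The discovery aids described above can be collected into a short set of commands for finding DHCP-related cmdlets (the DhcpServer module assumes the DHCP role is installed):

```powershell
# Ways to discover cmdlets without memorizing them
Get-Help *dhcp*                           # wildcard search across help topics
Get-Command -Module DhcpServer            # list every cmdlet the DHCP module provides
Get-Help Add-DhcpServerv4Scope -Examples  # worked examples for a single cmdlet
```

Running Get-Command against a module is often faster than cycling through Tab completion when you only know roughly what you are looking for.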

The tab function in PowerShell is a savior. While this approach is a little clumsy, it is a valuable asset in a pinch due to the sheer number of commands to remember. For example, a base install of Windows 10 includes Windows PowerShell 5.1, which features more than 1,500 cmdlets. As you install additional PowerShell modules, you make more cmdlets available.

There are many PowerShell books, but do you really need them? There are extensive libraries of PowerShell code that are free to manipulate and use. Even walking through a Microsoft wizard gives you the option to generate the PowerShell code for the wizard you just ran. As you learn where to find PowerShell code, writing a script becomes less a matter of starting from scratch and more one of modifying existing code. You don’t have to be an expert; you just need to know how to manipulate the proper fields and areas.

Outside of typos, the biggest stumbling block for most beginners is not reading the screen. PowerShell does a mixed job with its error messages. The type is red when something doesn’t work, and PowerShell will give the line and character where the error occurred.

In the example in Figure 2, PowerShell threw an error due to the extra letter s at the end of the command Get-WindowsFeature. The system didn’t recognize the command, so it tagged the entire command rather than the individual letter, which can be frustrating for beginners.

PowerShell error message
Figure 2. When working with PowerShell on the command line, you don’t get precise locations of where an error occurred if you have a typo in a cmdlet name.

The key is to review your code closely, then review it again. If the command doesn’t work, you have to fix it to move forward. It helps to stop and take a deep breath, then slowly reread the code. Copying and pasting a script from the web isn’t foolproof and can introduce an error. With some time and patience, and some fundamental PowerShell knowledge of the commands, you can get moving with it a lot quicker than you might have thought.


New telephony controls coming to Microsoft Teams admin center

Microsoft will add several telephony controls to the Microsoft Teams admin center in the coming months, a significant move in the vendor’s campaign to retire Skype for Business Online by mid-2021.

Admins will be able to build, test and manage custom dial plans through the Teams portal. Additionally, organizations that use Microsoft Calling Plan will be able to create and assign phone numbers and designate emergency addresses for users.

Currently, admins can only perform those tasks in Teams through the legacy admin center for Skype for Business Online. Microsoft has been gradually moving controls to the Teams admin center, with telephony controls among the last to switch over.

Microsoft plans to begin adding the new telephony controls to the Teams admin center in November, according to the vendor’s Office 365 Roadmap webpage. The company will also introduce some advanced features it didn’t support in Skype for Business Online, a cloud-based app within Office 365.

The update will let admins configure what’s known as dynamic emergency calling. The feature — supported only in the on-premises version of Skype for Business — automatically detects a user’s location when they place a 911 call. It then transmits that information to emergency officials.

The admin center for Skype for Business Online is “fairly rudimentary,” said Tom Arbuthnot, principal solutions architect at Modality Systems, a Microsoft-focused systems integrator. The new console for Teams provides advancements like the ability to sort and filter users and phone numbers.

“All of these little features add up to making a more friendly voice platform for an administrator,” Arbuthnot said. “They are getting closer and closer to everything being administered in the Teams admin center.”

Microsoft Teams still missing advanced calling controls, features

The superior design of the admin center notwithstanding, Teams still lacks crucial tools for organizations too large to use the management console.

For those enterprises, Teams PowerShell is the go-to tool for auto-configuring settings on a large scale using code-based commands. However, PowerShell cannot do everything that the Teams admin center can do. Microsoft has also yet to release APIs that would allow a third-party consultant to help manage a Fortune 500 company’s transition to Teams calling.

“When you’re up to hundreds of thousands of seats, you don’t really want to be going to an admin center and manually administrating,” Arbuthnot said. “The PowerShell and APIs tend to lag a little bit.”
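As a sketch of that kind of large-scale administration, bulk policy assignment with the MicrosoftTeams PowerShell module might look like the following; the department filter and policy name here are invented for illustration:

```powershell
# Sketch: bulk Teams calling-policy assignment via the MicrosoftTeams module.
# The department value and policy name are example placeholders.
Import-Module MicrosoftTeams
Connect-MicrosoftTeams    # interactive admin sign-in

# Find every user in one department and grant them the same calling policy
$users = Get-CsOnlineUser -Filter "Department -eq 'Sales'"
foreach ($user in $users) {
    Grant-CsTeamsCallingPolicy -Identity $user.UserPrincipalName -PolicyName 'SalesCalling'
}
```

A loop like this scales to thousands of seats in a way that clicking through the admin center cannot, which is why large tenants lean on PowerShell and APIs despite the lag Arbuthnot describes.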

A lack of parity between the telephony features of Skype for Business and Teams had been one of the biggest roadblocks preventing organizations from fully transitioning from the old to the new platform.

But at this point, Teams should be suitable for everyone except those with the most complex needs, such as receptionists, Arbuthnot said.

Other features that Microsoft is planning include compliance call recording, virtual desktop infrastructure support and contact center integrations.


Adsterra still connected to malvertising campaign, despite denials

Despite a public pledge of “zero tolerance” for malicious activity, a digital ad network previously tied to major malvertising campaigns was still connecting to a malicious IP address involved in traffic hijacking.

Adsterra, an ad network based in Cyprus, was implicated in an extensive malvertising campaign discovered by Check Point Software Technologies in 2018. Adsterra claimed to have blocked the malicious activity and improved its defenses, but a SearchSecurity investigation discovered the ad network continued connecting to a malicious server used in the campaign as recently as last month.

The campaign originally began with a party, dubbed “Master134” by Check Point researchers, posing as a legitimate publisher on Adsterra’s ad network platform. Master134 used more than 10,000 compromised WordPress sites to redirect visitors to a malicious server in Ukraine. The hijacked traffic was sold on Adsterra’s real-time bidding (RTB) platform to other ad networks, where it was resold again before ultimately reaching threat actors running several well-known malicious sites and exploit kits.

In Check Point’s report, researchers described Adsterra as “infamous” and said the ad network had a direct relationship with “Master134” by paying the threat actor for the hijacked traffic. Lotem Finkelsteen, Check Point’s threat intelligence analysis team leader and co-author of the report, told SearchSecurity that Adsterra either knew it was accepting hijacked traffic or chose to ignore the signs.

Adsterra responded to the report with a blog post titled “Zero Tolerance for Illegal Traffic Sources,” in which the company denied the allegations that it was knowingly involved with Master134. The company also blamed other third-party ad networks, even though Check Point reported Adsterra received the traffic directly from Master134’s IP address.

Adsterra Master134 malvertising campaign
The redirection/infection chain of the Master134 campaign.

“[W]e would like to emphasize that we do not accept traffic from hacked/hijacked sites. We have zero tolerance for illegal traffic sources,” the statement read. “All publishers’ accounts that were mentioned in that article have been suspended. Malware ads are prohibited in Adsterra Network and we have a monitor system that checks all campaigns and stops all suspicious activity.”

Despite the denials and the supposed actions taken by Adsterra, a SearchSecurity investigation found the ad network was still connecting to the IP address as recently as last month. When confronted with this information, Adsterra offered a series of explanations that called into question the company’s efforts to prevent malvertising and ad fraud.

Master134 connections

Open source intelligence tools revealed the IP address, which is still active, was connecting to a redirection domain owned and operated by Adsterra during July and August of this year.

SearchSecurity emailed Adsterra in August about the domain’s connection to the Master134 IP address and received a reply from the company’s support team, which said the Adsterra policy team would investigate the issue. The email also said the company “considers the [Master134] case closed.”

We sent a follow-up email to Adsterra asking for more information about how it bans malicious accounts and what steps the company takes to prevent repeat offenders from abusing Adsterra’s self-service platform.

We serve hundreds of millions of ad impressions per day and we don’t need any illegal traffic because our advertisers simply won’t accept it and pay for it.
Adsterra Support Team

“When we ‘ban an account’ in our system we block the account and all payments associated with that account. We also block all ads being displayed to that account,” the support team wrote. “We investigate all incoming reports on illegal activities on our network and do our best to prevent them from happening. We utilize special software (both in-house and 3rd party) to scan and monitor ads and traffic 24/7. Furthermore, after the incident with ‘Master134’ we have purchased additional 3rd party software to scan our feed, but you should understand that it is always a cat-mouse game when it comes to catching a ‘bad actor’.”

SearchSecurity also asked Adsterra about the allegations that the ad network was knowingly accepting traffic from malicious sources like Master134. “We serve hundreds of millions of ad impressions per day and we don’t need any illegal traffic because our advertisers simply won’t accept it and pay for it,” the support team wrote.

While the first domain’s connections to Master134 appeared to end following the conversation with Adsterra’s support team, SearchSecurity discovered a second domain owned by the company was also connecting to the malicious IP address. According to RiskIQ’s PassiveTotal Community Edition, the connections from Master134 to the second domain began in August, shortly after the connections to the first domain ceased.

Adsterra RiskIQ PassiveTotal
RiskIQ’s PassiveTotal shows Master134 connected to second Adsterra domain in August and September.

SearchSecurity emailed Adsterra again several times about the second domain, but the company did not respond initially. We then reached out to the ad network’s official Twitter account and asked why the Adsterra domains were still connecting to the Master134 server. In a Twitter exchange, Adsterra said the Master134 threat actors set up a new account, which was also banned. The ad network also said it “blacklisted all traffic with this IP in referrer header.”

“They’ll think twice before sending traffic to our network after no payment,” Adsterra said.

We asked why Adsterra hadn’t taken the step of banning the IP address last year following Check Point’s Master134 report and the resulting press coverage, especially since the company said it had “zero tolerance” for such activity.

“Since the publisher’s account was banned without a payout and they removed our link shortly after, we considered they understood their traffic is not welcome here. It took them a while to sign up again,” Adsterra tweeted. “Please also note that blacklisting this IP in a referrer header does not give 100% protection — a portion of traffic can be redirected with no referrer. However, we admit this could have been done before as a precaution. Thus, we have updated our internal policies accordingly.”

Adsterra said the malicious account didn’t receive its payout, but the company couldn’t say whether or not the fraudulent accounts operated by Master134 had ever received payment from the company.

SearchSecurity requested more information about the accounts and the steps Adsterra took to stop the malicious activity on its websites. The ad network responded with information similar to what it previously tweeted but did not address those questions directly.

“The executive team has been notified of this issue,” Adsterra support team wrote. “However, we find this case closed and the new account has been banned as well.”

According to RiskIQ’s PassiveTotal, the connections from Master134 to the second domain ended on Sept. 14, the same day as the above email. Adsterra hasn’t responded to further requests from SearchSecurity.

Adsterra’s prevention methods questioned

Security vendors in the ad fraud and malvertising prevention market said Adsterra’s method of blacklisting the IP address is a largely useless approach and that stronger measures are needed to stop threat actors like Master134.

Hagai Shechter, CEO of Fraudlogix, an ad fraud prevention vendor based in Hallandale Beach, Fla., said restricting IP addresses via HTTP headers isn’t effective because — as Adsterra itself pointed out — threat actors can remove malicious IP addresses from their headers and make HTTP requests with “no-referrer.” In addition, Shechter said public blacklists, even if implemented effectively at the firewall level, are often outdated.
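To illustrate the point, the Referer header is entirely client-controlled, so a referrer blacklist only filters traffic that chooses to identify itself. A sketch (PowerShell 7, with a placeholder destination URL):

```powershell
# Sketch: a client can set, spoof or omit the Referer header at will,
# so server-side referrer blacklists are easy to sidestep.
# The destination URL below is a placeholder.
Invoke-WebRequest -Uri 'https://example.com/landing' `
    -Headers @{ Referer = 'https://innocent-site.example' }  # spoofed referrer
Invoke-WebRequest -Uri 'https://example.com/landing'         # no referrer at all
```

Anything a blacklist keys on must be something the sender cannot forge, which is why header-based blocking is considered a weak control.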

“It’s rare to find a publicly available IP blacklist list that’s going to be recent and that will have the good stuff in there,” he said.

It’s also unclear why Adsterra’s additional investment in ad security and new scanners didn’t prevent the Master134 IP address from repeatedly connecting to the ad network’s domains, given the address was known to be malicious. According to a July blog post titled “We Keep You Safe,” Adsterra said it felt “bound to take action” and announced it had added a second ad security scanner from a vendor called AdSecure to further reduce fraud and malvertising.

However, AdSecure was launched in 2017 by a company called ExoGroup, based in Barcelona. ExoGroup is also the parent company of ad network ExoClick that, like Adsterra, was implicated in the Master134 campaign in 2018, as well as previous malvertising campaigns. According to AdSecure’s website, the company’s “partners” include several ad networks including ExoClick, Adsterra and AdKernel, which was also connected to the Master134 campaign.

SearchSecurity reached out to AdSecure to learn more about how its flagship product worked and its relationship to ExoClick and the other ad networks. The company did not respond. [UPDATE: Adsecure emailed a statement to SearchSecurity the day after this article was published. The statement is contained below.]

SearchSecurity spoke with GeoEdge, the other ad security vendor used by Adsterra, which declined to address the ad network directly. GeoEdge CEO Amnon Siev said that in general, some ad network clients choose to essentially ignore the alerts that GeoEdge provides about malicious activity and allow suspicious traffic and IP addresses on their platforms.

Shechter agreed and said clients have full control over how they use Fraudlogix’s products, and some simply choose to look the other way when it comes to signs of click fraud and malvertising.

“That absolutely happens,” he said. “The fuel for the industry is volume. If Google blocks out 10% of their ad traffic, they can still survive, but when you’re a smaller network, that 10% could be the difference between staying in business or not.”

Siev added that he believes AdSecure isn’t an effective solution for preventing ad fraud and malvertising. “I’ve never tested their solution,” he said, “but I know from talking to customers that have switched from them to us what gaps are there.”

He also criticized AdSecure’s connection to ExoClick. “We continue to flag many of [ExoClick’s] campaigns,” Siev said. “They’ve pushed back on us and say there’s no malicious activity in their campaigns.”

In a statement sent to SearchSecurity on Nov. 1, AdSecure sales manager Bryan Taylor wrote: “AdSecure is a reporting tool; what clients do with those reports and the measures they implement to prevent fraudulent actors are their decision.

“AdSecure is part of Exogroup and is born out of the experience that ExoClick has dealing with advertising fraud. ExoClick has been fighting advertising fraud since 2006 and has used the services of GeoEdge and others over the years. Unfortunately, most of these companies rely on outdated technology and they have proven inefficient at detecting many types of fraud, especially the most recent ones, such as push lockers. This triggered Exogroup to invest in the development of a new technology that would address the wide scope of issues that plague the online advertising ecosystem today,” Taylor wrote.

“There is no silver bullet to address the issue of malvertising. And there is no such thing as 100% safe. There is a very good reason why people set up an alarm system in their home. But even then, some more ambitious criminals might still break a window and give it a go. Do platforms and networks have issues with malicious activity? Yes, absolutely. And GeoEdge, RiskIQ, AdSecure or any others would not exist if that was not the case,” Taylor added. “If we refer to your quote from Amnon Siev, he admits himself, ‘I’ve never tested their solution,’ so we don’t think this even deserves a response. What matters to us are the results that the partners get from AdSecure, and the hundreds of malvertising issues that we prevent on a daily basis. And all of the companies fighting this fight are good companies to have on the market.”

It’s unclear if other Adsterra domains are connecting to Master134; the IP address connects to thousands of domains, including a litany of WordPress sites as well as several ad network platforms, and Adsterra owns and operates a significant number of domains. For example, an online database of websites and IP addresses shows more than 400 domains owned by Ad Market Limited, the corporate name of Adsterra.


Windows expands support for robots – Windows Developer Blog

Robotics technology is moving fast. A lot has happened since Microsoft announced an experimental release of Robot Operating System (ROS™)[1] for Windows at last year’s ROSCON in Madrid. ROS support became generally available in May 2019, which enabled robots to take advantage of the worldwide Windows ecosystem—a rich device platform, world-class developer tools, integrated security, long-term support and a global partner network. In addition, we gave access to advanced Windows features like Windows Machine Learning and Vision Skills and provided connectivity to Microsoft Azure IoT cloud services.

At this year’s ROSCON event in Macau, we are happy to announce that we’ve continued advancing our ROS capabilities with ROS/ROS2 support, a Visual Studio Code extension for ROS and Azure VM ROS template support for testing and simulation. This makes it easier and faster for developers to create ROS solutions to keep up with current technology and customer needs. We look forward to adding robots to the 900 million devices running Windows 10 worldwide.

In July, Microsoft published a preview of the VS Code extension for ROS based on a community-implemented release. Since then we’ve been expanding its functionality—adding support for Windows, debugging and visualization to enable easier development for ROS solutions. The extension supports:
Automatic environment configuration for ROS development
Starting, stopping and monitoring of ROS runtime status
Automatic discovery of build tasks
One-click ROS package creation
Shortcuts for rosrun and roslaunch
Linux ROS development
In addition, the extension adds support for debugging a ROS node leveraging the C++ and Python extensions. Currently in VS Code, developers can create a debug configuration for ROS to attach to a ROS node for debugging. In the October release, we are pleased to announce that the extension supports debugging ROS nodes launched from roslaunch at ROS startup.
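As a sketch of what such a debug configuration looks like, a `launch.json` entry for launching a ROS target from VS Code might resemble the following. The attribute shape follows the extension’s published examples but may differ across versions, and the workspace path here is hypothetical:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "ROS: Launch my_robot",
      "type": "ros",
      "request": "launch",
      "target": "/home/dev/catkin_ws/src/my_robot/launch/my_robot.launch"
    }
  ]
}
```

With a configuration like this in place, starting the debugger attaches to the nodes brought up by the referenced launch file, so breakpoints in their C++ or Python sources are hit as they start.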

Visual Studio Code extension for ROS showing ROS core status and debugging experience for roslaunch.
Unified Robot Description Format (URDF) is an XML format for representing a robot model, and Xacro is an XML macro language to simplify URDF files. The extension integrates support to preview a URDF/Xacro file leveraging the Robot Web Tools, which helps ROS developers easily make edits and instantly visualize the changes in VS Code.
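For instance, a minimal URDF model of the kind the preview can render might look like this; the robot and link names are made up for illustration:

```xml
<?xml version="1.0"?>
<!-- A single box-shaped link: about the smallest valid URDF model. -->
<robot name="example_bot">
  <link name="base_link">
    <visual>
      <geometry>
        <!-- Box dimensions in meters: length, width, height. -->
        <box size="0.4 0.3 0.1"/>
      </geometry>
    </visual>
  </link>
</robot>
```

Saving an edit to a file like this immediately refreshes the rendered model in the preview pane, which is what makes the iterate-and-visualize loop fast.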

Visual Studio Code extension for ROS showing a preview of URDF.
For developers who are building ROS2 applications, the extension introduces ROS2 support, including workspace discovery, a runtime status monitor and build tool integration. We’d like to provide a consistent developer experience for both ROS and ROS2 and will continue to expand support based on community feedback.

With the move to the cloud, many developers have adopted agile development methods. They often want to deploy their applications to the cloud for testing and simulation scenarios when their development is complete. They iterate quickly and repeatedly deploy their solutions to the cloud. An Azure Resource Manager template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for a project. To facilitate the cloud-based testing and deployment flow, we publish a ROS on Windows VM template that creates a Windows VM and installs the latest ROS on Windows build into the VM using the CustomScript extension. You can try it out here.
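To give a feel for how a template wires in the CustomScript extension, here is an illustrative excerpt; the VM name, API version and script path are hypothetical, not taken from the published template:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines/extensions",
      "name": "rosvm/installROS",
      "apiVersion": "2019-03-01",
      "location": "[resourceGroup().location]",
      "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.9",
        "settings": {
          "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install-ros.ps1"
        }
      }
    }
  ]
}
```

The extension resource runs its script after the VM provisions, which is how the template can leave a freshly created Windows VM with ROS already installed.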

Microsoft is expanding support for ROS and ROS2, including creating Microsoft-supported ROS nodes and building and providing Chocolatey packages for the next releases of ROS (Noetic Ninjemys) and ROS2 (Eloquent Elusor).
Azure Kinect ROS Driver

Internal visualization of the Azure Kinect.
The Azure Kinect Developer Kit is the latest Kinect sensor from Microsoft. The Azure Kinect contains the same depth sensor used in the HoloLens 2, as well as a 4K camera, a hardware-synchronized accelerometer & gyroscope (IMU), and a 7-element microphone array. Along with the hardware release, Microsoft made available a ROS node for driving the Azure Kinect and will soon support ROS2.
The Azure Kinect ROS Node emits a PointCloud2 stream, which includes depth and color information, along with depth images, the raw image data from both the IR & RGB cameras, and high-rate IMU data.
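To illustrate what a colorized PointCloud2 stream carries, the sketch below packs and unpacks a single point. The 20-byte x/y/z-plus-RGB layout used here is a common convention, not necessarily the exact field layout the Azure Kinect node publishes; a real consumer should read the layout from the message’s `fields` array.

```python
import struct

# A PointCloud2 message carries points as one packed binary blob.
# Assumed layout per point (a common convention, 20 bytes total):
#   offset 0/4/8 : x, y, z as little-endian float32
#   offset 12-15 : padding
#   offset 16    : packed 0x00RRGGBB color as uint32
POINT_STEP = 20
FMT = "<fffxxxxI"  # 3 floats, 4 pad bytes, 1 uint32

def pack_point(x, y, z, r, g, b):
    """Pack one colorized point into the assumed 20-byte layout."""
    rgb = (r << 16) | (g << 8) | b
    return struct.pack(FMT, x, y, z, rgb)

def unpack_point(data, index):
    """Read point `index` back out of a packed point buffer."""
    x, y, z, rgb = struct.unpack_from(FMT, data, index * POINT_STEP)
    return x, y, z, (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF

buf = pack_point(1.0, 2.0, 3.0, 255, 128, 0)
print(unpack_point(buf, 0))  # → (1.0, 2.0, 3.0, 255, 128, 0)
```

Tools like rViz do essentially this decoding for every point in the cloud, which is how the combined depth-plus-color stream becomes the colorized rendering shown below.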

Colorized Pointcloud output of Azure Kinect in the tool rViz.
A community contribution has also enabled body tracking! This links to the Azure Kinect Body Tracking SDK and outputs image masks of each tracked individual and poses of body tracking joints as markers.

A visualization of Skeletal Tracking in rViz.
You can order an Azure Kinect DK at the Microsoft Store, then get started using the Azure Kinect ROS node here.
Windows ML Tracking ROS Node
The Windows Machine Learning API enables developers to use pre-trained machine learning models in their apps on Windows 10 devices. This offers developers several benefits:
Low latency, real-time results: Windows can perform AI evaluation tasks using the local processing capabilities of the PC with hardware acceleration using any DirectX 12 GPU. This enables real-time analysis of large local data, such as images and video. Results can be delivered quickly and efficiently for use in performance intensive workloads like game engines, or background tasks such as indexing for search.
Reduced operational costs: Together with the Microsoft Cloud AI platform, developers can build affordable, end-to-end AI solutions that combine training models in Azure with deployment to Windows devices for evaluation. Significant savings can be realized by reducing or eliminating costs associated with bandwidth due to ingestion of large data sets, such as camera footage or sensor telemetry. Complex workloads can be processed in real-time on the edge with minimal sample data sent to the cloud for improved training on observations.
Flexibility: Developers can choose to perform AI tasks on device or in the cloud based on what their customers and scenarios need. AI processing can happen on the device if it becomes disconnected, or in scenarios where data cannot be sent to the cloud due to cost, size, policy or customer preference.
The Windows Machine Learning ROS node will hardware-accelerate the inferencing of your machine learning models, publishing a visualization marker relative to the frame of the image publisher. The output of Windows ML can be used for obstacle avoidance, docking or manipulation.

Visualizing the output of a model with Windows ML. Model used with permission:
Azure IoT Hub ROS Node
Azure IoT Hub enables highly secure and reliable communication between your IoT application and the devices it manages, providing a cloud-hosted solution backend to connect virtually any device. It extends your solution from the cloud to the edge with per-device authentication, built-in device management and scaled provisioning.
The Azure IoT Hub ROS Node allows you to stream ROS Messages through Azure IoT Hub. These messages can be processed with an Azure Function, streamed to a Blob Store or processed through Azure stream analytics for anomaly detection. Additionally, the Azure IoT Hub ROS Node allows you to change properties in the ROS Parameter server using Dynamic Reconfigure with properties set on the Azure IoT Hub Device Twin.
Come learn more and see some of these technologies in action at ROSCON 2019 in Macau. We’re hosting a booth throughout the event (October 31 – November 1), as well as a talk on Friday afternoon. You can get started with ROS on Windows here.
[1] ROS is a trademark of Open Robotics

ConnectWise-Continuum buyout shakes up MSP software market

ConnectWise, a provider of software for managed services providers, has acquired its competitor Continuum.

The Continuum acquisition was announced today by ConnectWise CEO Jason Magee at his company’s annual user conference, IT Nation Connect, running from Oct. 30 to Nov. 1 in Orlando, Fla. The buyout, which is poised to shake up the MSP software market, accompanies the acquisition of ITBoost, an IT documentation vendor. ConnectWise also revealed a strategic partnership with partner relationship management software provider Webinfinity to help ConnectWise partners manage their vendor alliances.

“[The Continuum acquisition] allows ConnectWise to address the growing pains of our partners and some of those pains around talent and skills shortages … [and] continues to accelerate ConnectWise in the cybersecurity area,” Magee said in a press briefing.

ConnectWise and Continuum are owned by private equity investment firm Thoma Bravo. Thoma Bravo purchased ConnectWise in February. The private equity firm also owns MSP software players (and ConnectWise-Continuum competitors) SolarWinds and Barracuda Networks.

ConnectWise’s platform spans professional services automation, remote monitoring and management (RMM), and ‘configure, price and quote’ software. Continuum’s development of a global security operations center (SOC), network operations center and help desk technologies will be “complementary” to what ConnectWise does today, Magee said.

Jason Magee, CEO of ConnectWise

The future of ConnectWise and Continuum’s RMM platforms, ConnectWise Automate and Continuum Command, remains in question. Magee said the respective RMM platforms “will be maintained [separately] at this point.” After the IT Nation Connect 2019 event, the companies will begin working on their overall business plan and joint roadmaps, “which to this point we have not been able to dig into much due to regulatory restraints around getting government approval of making the deal happen and so on,” he said.

Magee suggested that in the short term ConnectWise-Continuum partners could see some innovations introduced to the Automate and Command platforms. He pointed to a few potential examples, such as making Command’s LogMeIn remote control available to ConnectWise partners and adding features of Command’s automation and patching capabilities to the Automate platform. He didn’t specify the timing around implementing any changes but said partners could expect to see some in early 2020.

Although post-acquisition planning is still underway, Magee said Continuum CFO Geoffrey Willison will be brought on as COO at ConnectWise, and Continuum’s senior vice president of global service delivery, Tasos Tsolakis, will join as senior vice president of service delivery “over all ConnectWise going forward.” Additionally, Magee said ConnectWise will hire a new CFO for the combined business.

“Until we have the rest of the best of the business plan done, it is business as usual,” Magee said.

Addressing two types of MSPs

Magee said that the ConnectWise-Continuum acquisition also serves to benefit “two mindsets” that have emerged among MSPs.

The first mindset is of the do-it-yourself MSPs that build their practices by partnering, buying platforms and tools, and hiring teams to manage and service their customers. The second mindset is of “the companies and people [that] just want to go hire the general contractor, and those people are asking for someone else to manage [their customers] for them, take the hassle out of having to do all that stuff within their company or themselves.”

“This opens up a whole new world from a ConnectWise standpoint,” Magee said.

For a few years, ConnectWise has been establishing a ‘connected ecosystem’ of third-party software integrations around its platform, and the company will remain committed to that strategy. “We are still committed to the power of choice for our partners and will continue with our API-first mindset, which allows for continued partnership with the 260 and growing vendor partnerships that we have out there,” Magee said. “These are all great options for those [MSPs] that like to do it themselves.”

When asked if Magee anticipated challenges in merging the ConnectWise and Continuum communities of MSP partners, he said he didn’t expect any problems but would address any issues that may crop up to ensure “we are doing right by the communities.”

“At the end of the day, there is so much good and greatness that comes from bringing these two together that the partner communities are going to benefit tremendously.”

ITBoost, Webinfinity and cybersecurity initiative

In a move similar to MSP software vendor Kaseya’s buyout of IT Glue, ConnectWise is purchasing documentation provider ITBoost. ConnectWise said the IT document tool will be integrated with its product suite.

Magee said the Webinfinity partnership will help ConnectWise launch ConnectWise Engage, a tool for channel firms for simplifying vendor relationship management. ConnectWise Engage aims to give partners “the ability to receive enablement content and material or solution stack information” from their supplier partners, he noted. Additionally, ConnectWise said the Webinfinity alliance will help centralize vendor-partner touch points for areas such as deal registration, multivendor support issues, co-marketing and SKU management.

ConnectWise today also revealed a cybersecurity initiative, which Magee is calling ‘Fight Back,’ to encourage vendors, platform providers, MSPs and MSP customers to up their security awareness and capabilities.

Magee noted that ConnectWise recently achieved SOC 2 Type 2 certification and will mandate multifactor and two-factor authentication across its platforms by early 2020. The company in August rolled out its Technology Solution Provider Information Sharing and Analysis Organization, a forum for MSPs to share threat intelligence and best practices. “This is an area that ConnectWise for years has strived to be better. We are not perfect by any means, but we strive to get better,” he said.


Salesforce Trailhead to roll out live training videos

Salesforce is promoting customer success by rolling out two new Trailhead features that will be available by the end of this year.

Salesforce will introduce live video trainings on Trailhead Live and new features to its online resume tool, designed to help job-seekers show off their skills and accomplishments using Trailhead. The resume feature already includes badges and certifications earned through Trailhead. The new version will also highlight a person’s activity throughout the Salesforce ecosystem, such as contributions to user groups, apps downloaded from the Salesforce AppExchange and reviews that users have posted. The feature should help employers that want to quantify whether job applicants have the skills they say they have, said Maribel Lopez, founder and principal analyst at Lopez Research.

“People used to be able to just say, ‘I know Salesforce,’ on their resume,” Lopez said. “I think one of the hardest things for employers is to understand whether anyone they hire is actually qualified in the things they say they are qualified in.”

Trailhead Live brings video instruction

Trailhead Live offers a new way for Salesforce users to learn with additional elements of community. Like other Trailhead courses, Trailhead Live courses are free.

The initial set of courses will include live coding and Salesforce certification preparation for administrators and others. Within two months of launch later this year, Salesforce said it expects Trailhead Live to offer more than 100 live and on-demand training courses. This will also include courses in so-called “soft skills,” such as how to interview for a job and public speaking.

Salesforce Trailhead screenshot
Salesforce plans to roll out live video training on Trailhead Live by year’s end.

Salesforce plans to have a big Trailhead presence at Dreamforce in San Francisco from Nov. 19 to 22, where the new Trailhead features will be on display.

Salesforce is doing this as an acknowledgment that people learn differently, Lopez said.

“There are multiple ways people like to engage,” Lopez said. “It used to be you had a whiteboard and people took notes, but now we’re in a much more visual era and you want to be sure you’re reaching everyone.”

Inspired by Peloton

Salesforce said the design of Trailhead Live was inspired in part by Peloton, the company that offers live and on-demand fitness classes via an internet-connected bicycle.

“We definitely looked at consumer applications like Peloton,” said Kris Lande, vice president of marketing at Salesforce. “Seeing how people can engage with others without having to go to a classroom was an inspiration.”

There is a community aspect to Trailhead Live, as users will be able to see who else is taking the class with them, Lande said. It’s also more personalized, as the instructor verbally welcomes each participant by name.

Like Peloton, which features certified trainers, Trailhead Live will feature experts in different topic areas from the Salesforce community. Users who miss a class or need more time to complete skills tests will also find each class available on demand. If there are 15 people taking an hour-long course on how to create Lightning Web Components, the instructor will give a set period of time for users to complete tasks in their own virtual workspace. Users can return to an on-demand replay of the course if they need to finish any parts of it for certification.

Earlier this year, there were 1.2 million people using the Trailhead platform, according to Salesforce. That number has grown to 1.7 million and is expected to grow to 1.8 million by Dreamforce, with a total of 17 million badges earned since its launch. Trailhead users earn badges each time they show mastery of specific skills.

New Salesforce Trailhead trainings introduced this past year include cybersecurity and Apple iOS.


SAP Embrace gets new love from SAP-Microsoft partnership

SAP and Microsoft are making their cloud relationship almost exclusive with a new program.

SAP Embrace was announced in May as a program to help SAP customers move workloads to public cloud hyperscalers Microsoft Azure, AWS and Google Cloud Platform (GCP). Earlier this month, SAP and Microsoft announced a new development in the program with a three-year agreement to use Microsoft Azure as the preferred hyperscaler infrastructure provider for SAP systems. The deal is intended to address SAP’s issues in moving customers both to the cloud and to S/4HANA by providing a simpler, more cost-effective and risk-mitigated path.

SAP Embrace is intended to provide SAP customers a path to the cloud on Microsoft Azure infrastructure, according to the companies. SAP and Microsoft, along with systems integrators, including Deloitte, Accenture and IBM, will offer SAP customers bundles of cloud services, including unified reference architectures, road maps and market-specific information to help mitigate costs and risks of moving to the cloud. Microsoft field sales teams will sell the SAP Embrace bundles directly to customers and Microsoft will also embed and resell components of SAP Cloud Platform in Azure.

SAP worked with Microsoft, AWS and Google to develop the initial phase of SAP Embrace, but subsequent development over the summer led to the partnership agreement with Microsoft to use Azure as its preferred public cloud provider, said David Robinson, SAP senior vice president and managing director of the cloud business group.

SAP customers are in large measure satisfied that any of the three public cloud providers can handle SAP HANA database workloads and run HANA-based applications, Robinson said, but they are looking for a clear and simple path to the cloud, which was the main goal of SAP Embrace.  

“[Customers] would like to understand that, as they migrate to S/4HANA in conjunction with the lift to the cloud, they can follow a path that leads to the intelligent enterprise and will give the most cost-effective and risk managed journey,” he said.

SAP customers lean toward Azure

David Robinson, senior vice president and managing director of the cloud business group, SAP

The SAP-Microsoft partnership came about mainly because the majority of customers that SAP worked with to validate the SAP Embrace model were already leaning toward Azure, Robinson said.

This was primarily because Microsoft had demonstrated that Azure provided a consistent enterprise degree of engagement and support beyond just the compute network and data store, such as support services and lifecycle management, according to Robinson.

“Microsoft understands the enterprise to speak the enterprise language, and has processes wrapped around their compute network and storage around Azure that are more aligned with what SAP customers need to be able to consume and drive their S/4HANA environment,” he said.

SAP and Microsoft relationship may get cozier

It’s unusual that SAP would go with something that’s relatively exclusive, said Joshua Greenbaum, principal at Enterprise Applications Consulting, but it may be a sign of more to come from SAP and Microsoft.

Joshua Greenbaum, principal, Enterprise Applications Consulting

“We know that Microsoft’s SuccessFactors implementation runs on Azure and they’re moving their ERP to Azure, so we know Microsoft wants as many workloads as it can get on Azure and they’re willing to incent SAP to do it,” Greenbaum said. “But I think there’s more to this. There will be another component of this deal coming, because I’m pretty sure that the numbers don’t add up for just this much exclusivity.”

Although the Microsoft Azure deal is not exclusive, the other two hyperscalers were not pleased at the recent addendum to SAP Embrace, Greenbaum said.

He pointed out that Robert Enslin, Google president of cloud sales and former SAP executive, and Thomas Kurian, Google Cloud CEO, are likely tapping their considerable experience to develop enterprise applications.

“It’s pretty clear that they’re going for an apps play that can compete with SAP,” Greenbaum said.

For AWS, on the other hand, Amazon’s relentless expansion into virtually every business may give potential customers pause before they entrust their systems to AWS.

“On the Amazon side there’s a lot of customers — retail, logistics, you name it — where Amazon the mothership is encroaching into a lot of core business areas, so a lot of folks are getting nervous about putting their enterprise software on the Amazon cloud,” he said. “In a way, Amazon sort of backed itself into this position.”

Other public cloud options still available

SAP customers still have the option to deploy S/4HANA and SAP HANA-based applications in any public cloud provider they want, Robinson said.

AWS recently announced new memory upgrades to the EC2 cloud infrastructure designed to manage S/4HANA-sized workloads.

“If a customer still wants to run it on AWS or Google, they can still do it; the support is the same and we continue to certify these workloads on AWS and GCP,” Robinson said. “The difference now is not the support or the ability to run, certify and upgrade — we will always certify that these infrastructures perform on database workloads as we design. But with Microsoft, we’re adding an additional degree of abstraction on top of that around the harmonization of the cloud platform services.”
