Tag Archives: Information

BigID: New privacy regulations have ended ‘the data party’

The ‘data party’ era of enterprises indiscriminately collecting, storing and selling users’ personal information is coming to an end, according to BigID.

A New York-based startup, BigID was formed in 2015 with the goal of improving enterprise data management and protection in the age of GDPR and the California Consumer Privacy Act (CCPA). The company, which won the 2018 Innovation Sandbox Contest at RSA Conference, recently raised $50 million in Series C funding. Now BigID is expanding its mission to help enterprises better understand and control their data amid new privacy regulations.

BigID co-founder and chief product officer Nimrod Vax talks with SearchSecurity about how new regulations have effectively ended the data party. He also discusses BigID’s launch, its future and whether data protection is getting easier or harder.

Editor’s note: This interview has been edited for length and clarity.

How was BigID founded?

Nimrod Vax: Dimitri [Sirota, CEO] and I were the company’s two founders. At my last kind-of real job I was head of the identity product line at CA, and at the time CA acquired Dimitri’s company, Layer 7 Technologies. That’s how we met, and we got to work together on customers’ challenges around identity management and security. After we left CA, there was a big surge of breaches of personal information through incidents like the Ashley Madison scandal, LinkedIn and Twitter. And what was really surprising about those breaches was that they were breaches of what you wouldn’t normally think of as very sensitive information. It wasn’t nuclear plans or anything; it was really just lists of names and addresses and phone numbers, but it was millions and billions of them. The following year, there were four billion personal records stolen. And the question we asked ourselves was: with all of these security tools that are out there, why are these breaches still happening? And we learned that the data protection tools that were available at the time, and even today, were not purposely built to discover, manage and protect personal information. They were really very generic and were not built for that. And also, these scandals raised the visibility and awareness of privacy. Legislation picked up, with GDPR coming and later CCPA, so we identified the opportunity to help organizations address those needs and meet the requirements of these regulations.

What does BigID do?

Vax: BigID’s aim is to help organizations better understand what data they store about their customers and in general, and then allow them to take action on top of that: comply with regulations, better protect the data and better manage it to get more value out of it. In order to do that, BigID is able to connect to all data sources. We have over 60 different connectors to all the things you could even think about that you may have in an IT organization: all of the relational databases, all of the unstructured data sources, semistructured data, big data repositories, anything in AWS, business applications like SAP, Salesforce, Workspace, you name it. We connect to anything, and then search for and classify the data. We first and foremost catalog everything so you have a full catalog of all the data that you have. We classify that data and tell you what type of data it is — where do you have user IDs? Where do you have phone numbers? We help to cluster it, so we can find similar types of data without knowing anything about the data; just knowing that the content is similar to other data helps cluster it. Our claim to fame is our ability to correlate it. We can find Social Security numbers and whose Social Security numbers they are, and that allows you to distinguish between customer data, American data, European resident data, and children’s or adults’ information, and also to know whose data it is for access rights and who to notify regarding a breach.

The solution is specifically built to run on premises, but it’s modern enterprise software. It’s completely containerized and documented for containers. It automatically scales up and down and doesn’t require any agents on the endpoint; it connects using open APIs, and we don’t copy or house the data. That’s important because we don’t want to create a security problem, and we also don’t want to incur a lot of additional storage.

And lastly, and I think this is very important, the discovery layer is all exposed to a well-documented set of APIs so that you can query that information and make it accessible to applications, and we build applications on top of that.

We’re obviously generating more and more user data every single day. Does data protection and data governance become exponentially harder as time goes on? And if so, how do you keep up with that explosion of user data?

Vax: One of the problems that led to BigID was the fact that organizations now have the knowledge and technology that allow them to store unlimited amounts of data. If you look at big data repositories, it’s all about storing truckloads of data; organizations are collecting as much as they can and they’re never deleting the data. That is a big challenge for them, not only to protect the data but even to gain value from the data. Information flows into the organization through so many different channels — from applications, from websites and from partners. Different business units are collecting data and they’re not consolidating it, so all the goodness of the ability to process all that data comes with a burden. How do I make more use of that data? How do I consolidate the data? How do I gain visibility into the data I own and have access to? That complexity requires a different approach to data discovery and data management, and that approach first requires you to be big data native; you need to be able to run in those big data repositories natively and not have to stream the data outside like the old legacy tools; you need to be able to scan data at the source, at the ingestion point, as data flows into these warehouses. What we recently introduced [with Data Pipeline Discovery] is the ability to scan data streams in services like Kafka or [AWS] Kinesis so as the data flows into those data lakes, we’re able to classify that data and understand it.

Regarding the CCPA, how much impact do you think it will have on how enterprise data is governed?

Nobody wants to be on the board of shame of the CCPA.
Nimrod Vax, co-founder, BigID

Vax: We’re seeing that effect already, and it goes back to the data party that’s been happening in the past five years. There’s been a party of data where organizations have collected as much data as they wanted without any liabilities or guardrails around them. Now the CCPA and GDPR bring that additional layer of governance. You can still collect as much information as you want, but you need to protect it. You have obligations to the people from whom you are collecting the data, and that brings more governance to the data process. Now organizations need to be much more careful about that. The organization needs to have more visibility into the data not because it’s nice to have but because the regulations require it; you can’t protect, you can’t govern, and you can’t control what you don’t know, so that’s the big shift in the approach that CCPA brings to the table. Organizations are already getting prepared for that. We’re already seeing that organizations are taking it very seriously, and they don’t want to be the first ones to be dinged by the regulation. It’s not even the financial impact; it’s the reputational impact they are concerned about. Nobody wants to be on the board of shame of the CCPA. They want to send a message to their customers that they care about privacy — not that they’re careless about it. I think that’s the big impact that we’re seeing.

What do the next 12 months look like for the company?

Vax: We’re growing rapidly both in product and in staff — I think we’re about 150 people now. Last year, I think we were fewer than 30. We’re continuing to grow, and that growth is in two areas: the product side and extending to additional audiences. We are continuing to invest in our core discovery capabilities. We’re also building more apps, and we’re going to solve more difficult problems in privacy, security and governance. We’re also extending to new audiences. Today, we are primarily focused on building solutions and offerings for developers so that they can leverage our API in their building process. Next, we are focusing on putting built-in privacy into applications seamlessly, with zero friction.


CMS takes Blue Button 2.0 API offline due to coding error

A bug in the Blue Button 2.0 API codebase potentially exposed the protected health information of fewer than 10,000 beneficiaries and caused the Centers for Medicare & Medicaid Services to pull the service offline.

Blue Button 2.0 is a standards-based API that gives Medicare beneficiaries the ability to connect their claims data to apps and services they trust.

In a blog post, CMS said a third-party application partner reported a data anomaly with the Blue Button 2.0 API on Dec. 4. CMS verified the anomaly and immediately suspended API access. The bug could cause beneficiary PHI to be shared with another beneficiary, or the wrong Blue Button 2.0 application, according to the post.

CMS said access to the API will remain closed while the agency conducts a full review, and restoration of the service is pending. The agency has not detected intrusion by unauthorized users or an outside source.

The incident is playing out against a backdrop of federal regulators like CMS pushing for healthcare organizations to use APIs that would give patients greater access to their health data. Yet a concern among healthcare CIOs is that the drive toward interoperability is ahead of app developers’ technical ability to safely facilitate that sharing of health data, said Clyde Hewitt, executive advisor for healthcare cybersecurity firm CynergisTek Inc., in Austin, Texas.


“There is a massive push for data interoperability, and organizations that spend a lot of time looking at the security and privacy issues around this realize that the need to share data is probably outrunning the technical savvy of the developers to get solid interface specification,” Hewitt said.

The issue

Medicare beneficiaries authorize third-party apps to use their Medicare claims data through Blue Button 2.0, and the Blue Button 2.0 system verifies users through a CMS identity management system. The identity management system uses a code to provide randomly generated, unique user IDs, which Blue Button 2.0 uses to identify each beneficiary.

The data anomaly was “truncating” user IDs from a 128-bit user ID to a 96-bit user ID, which was too short to be sufficiently random to “uniquely identify a single user,” according to the blog post. As a result, Blue Button 2.0 began assigning the same user IDs to different beneficiaries.

The root cause of the problem is unclear. CMS said the code causing the bug was implemented Jan. 11, 2018, and that a comprehensive review of the code, which might have caught the error, was not completed at the time.

CMS also said the identity management system code was not tested, stating that “assumptions were made” by the Blue Button 2.0 team that the identity management system code worked, but those assumptions were not validated.

The coding error should be a warning to healthcare organizations as they march toward interoperability and the use of APIs, according to Hewitt. They should, for example, put greater emphasis on regression testing, which is used to make sure a recent code change hasn’t negatively impacted existing software. CMS failed to do just that.

“You can’t make changes to your system without looking at how it’s going to impact other systems,” Hewitt said. “As this spider web continues to grow, doing an end-to-end test becomes more and more complicated.”
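
To make the regression-testing point concrete, the check below is a minimal sketch rather than anything CMS runs: it uses the Pester testing framework for PowerShell, and New-UserId is a hypothetical stand-in for whatever routine generates identifiers. A test like this would flag a change that shortens the IDs or starts handing the same ID to different users before that change ships.

# Minimal regression-test sketch using Pester; New-UserId is a hypothetical ID generator.
Describe 'User ID generation' {
    It 'returns full-length IDs' {
        # 128 bits rendered as 32 hexadecimal characters
        (New-UserId).Length | Should -Be 32
    }
    It 'does not assign the same ID twice' {
        $ids = 1..1000 | ForEach-Object { New-UserId }
        ($ids | Select-Object -Unique).Count | Should -Be $ids.Count
    }
}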

What CMS is doing now

The Blue Button 2.0 team has implemented a new review and validation process to make sure coding errors are caught before being implemented within Blue Button 2.0 or other CMS APIs, according to the blog post.

The team is also adding monitoring and alerting for Blue Button 2.0, and CMS is updating the Blue Button 2.0 code to store full user IDs instead of shortened versions, which means all users will be asked to re-authenticate with Blue Button 2.0 so the system can generate new user IDs.

Fewer than 10,000 beneficiaries and 30 apps were affected by the issue, CMS said, and it was contained to Blue Button 2.0 users and developers. The issue didn’t affect Medicare beneficiaries who do not use the API.

Before bringing the API back online, CMS said the Blue Button 2.0 team will add auditing layers at the API database level, as well as at the API level, to give more detail on user activity and provide greater traceability for the actions the API takes. Monitoring and alerting capabilities are also being enhanced to notify CMS of unexpected changes in data.


David Chou, vice president and principal analyst at Constellation Research in Cupertino, Calif., said while the PHI exposure from this incident may not be as damaging as in other incidents, if CMS discovers more security issues after it conducts its review, it will cause alarm in the industry.

“This is a learning experience and I am optimistic that CMS will get past this with a new and improved Blue Button,” he said.

Yet Chou believes the Blue Button 2.0 initiative has been a good thing overall, and said CMS should be recognized for its effort to improve interoperability in healthcare.


Qualtrics XM adds mobile, AI, information governance

Qualtrics XM added AI and information governance tools to its customer and employee experience measurement platform this week and gave its year-old mobile app an infusion of dashboards to put data into the hands of front-line workers on the go.

In some ways, the new features reflect the influence of SAP, which acquired Qualtrics for $8 billion a year ago. The new features, such as mobile dashboarding, likely reflect a step toward making Qualtrics data relevant and available to customer-facing employees who use other SAP applications, in addition to marketing and research teams, Constellation Research principal analyst Nicole France said.

Getting such data into the hands of front-line employees makes the data more likely to be effectively used.

“Simply making these tools more widely available gets people more used to seeing this type of information, and it changes behaviors,” France said, adding that new features like mobile dashboards subtly get more people involved in using real-time performance metrics. “It’s doing it in almost a subliminal way, rather than trying to make it a quick-change program.” 

A number of Qualtrics competitors have also slowly added mobile dashboarding so employees can monitor reaction to a product, customer service or employee initiatives. But they’re all trying to find the right balance, lest it degrade employee experience or cause knee-jerk reactions to real-time fluctuations in customer response, Forrester Research senior analyst Faith Adams said.

Qualtrics XM mobile-app upgrades include dashboards to convey real-time customer response data to front-line employees responsible for product and service performance.

“It can be great — but it is also one that you need to be really careful with, too,” Adams said. “Some firms have noted that when they adopt mobile, it sometimes sets an expectation to employees of all levels that they are ‘always on.'”

Both France and Adams noted that the mobile app will help sales teams keep more plugged in to customer sentiment in their territories by getting data to them more quickly.

BMW, an early adopter of the new mobile app, uses it in dealerships to keep salespeople apprised of how individual customers feel about the purchasing process during the sale, and to prevent sales from falling through, according to Kelly Waldher, Qualtrics executive vice president and general manager.

AI and information governance tools debut

Qualtrics XM also added Smart Conversations, an AI-assisted tool to automate customer dialog around feedback. Two other AI features comb unstructured data for insights; one graphically visualizes customer sentiment and the other more precisely measures customer sentiment.

Prior to being acquired by SAP, Qualtrics had built its own AI and machine learning tools, Waldher said, and will continue to invest in them strategically. That said, Qualtrics will likely add features based on SAP’s Leonardo AI toolbox down the road.

“We have that opportunity to work more closely with SAP engineers to leverage Leonardo,” Waldher said. “We’re still in the early stages of trying to tap into the broader SAP AI capabilities, but we’re excited to have that stack available to us.”

Also new to Qualtrics XM is a set of information governance features, which Waldher said will enable customers to better comply with privacy rules in both the U.S. and Europe. Qualtrics users will be able to monitor who is using data within their organizations, and how.

“Chief compliance officers and those within the IT group can make sure that the tools that are being deployed across the organization have advanced security and governance capabilities,” Waldher said. “SAP’s global strength, their presence in Western Europe and beyond, has strongly reinforced the path [of building compliance tools] we were already on.”

The new features are included in most paid Qualtrics plans at no extra charge, with a few of the AI tools requiring different licensing plans to use.


New capabilities added to Alfresco Governance Services

Alfresco Software introduced new information governance capabilities this week to its Digital Business Platform through updates to Alfresco Governance Services.

The updates include new desktop synchronization, federation services and AI-assisted legal holds features.

“In the coming year, we expect many organizations to be hit with large fines as a result of not meeting regulatory standards for data privacy, e.g., the European GDPR and California’s CCPA. We introduced these capabilities to help our customers guarantee their content security and circumvent those fines,” said Tara Combs, information governance specialist at Alfresco.

Federation Services enables cross-database search

Federation Services is a new addition to Alfresco Governance Services. Users can search, view and manage content from Alfresco and other repositories, such as network file shares, OpenText, Documentum, Microsoft SharePoint and Dropbox.

Users can also search across different databases with the application without having to migrate content. Federation Services provides one user interface for users to manage all the information resources in an organization, according to the company.

Organizations can also store content in locations outside of the Alfresco platform.

Legal holds feature provides AI-assisted search for legal teams

The legal holds feature provides document search and management capabilities that help legal teams identify relevant content for litigation purposes. Alfresco’s tool now uses AI to discover relevant content and metadata, according to the company.

“AI is offered in some legal discovery software systems, and over time all these specialized vendors will leverage AI and machine learning,” said Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis. He added that the AI-powered feature of Alfresco Governance Services is one of the first such offerings from a more general information management vendor.

“It is positioned to augment the specialized vendors’ work, essentially curating and capturing relevant bodies of information for deeper analysis.”

Desktop synchronization maintains record management policies

Another new feature added to Alfresco Governance Services synchronizes content between a repository and a desktop, along with the records management policies associated with that content, according to the company.

With the desktop synchronization feature, users can expect the same record management policies whether they access a document on their desktop computer or view it from the source repository, according to the company.

When evaluating a product like this in the market, Pelz-Sharpe said the most important feature a buyer should look for is usability. “AI is very powerful, but less than useless in the wrong hands. Many AI tools expect too much of the customer — usability and recognizable, preconfigured features that the customer can use with little to no training are essential.”

The new updates are available as of Dec. 3. There is no price difference between the updated version of Alfresco Governance Services and the previous version. Customers who already had a subscription can upgrade as part of their subscription, according to the company.

According to Pelz-Sharpe, Alfresco has traditionally competed against enterprise content management and business process management vendors. It has pivoted during recent years to compete more directly with PaaS competitors, offering a content- and process-centric platform upon which its customer can build their own applications. In the future, the company is likely to compete against the likes of Oracle and IBM, he said.


Atlassian CISO Adrian Ludwig shares DevOps security outlook

BOSTON — Atlassian chief information security officer and IT industry veteran Adrian Ludwig is well aware of a heightened emphasis on DevOps security among enterprises heading into 2020 and beyond, and he believes that massive consolidation between DevOps and cybersecurity toolsets is nigh.

Ludwig, who joined Atlassian in May 2018, previously worked at Nest, Macromedia, Adobe and Google’s Android, as well as the U.S. Department of Defense. Now, he supervises Atlassian’s corporate security, including its cloud platforms, and works with the company’s product development teams on security feature improvements.

Atlassian has also begun to build DevOps security features into its Agile collaboration and DevOps tools for customers who want to build their own apps with security in mind. Integrations between Jira Service Desk and Jira issue tracking tools, for example, automatically notify development teams when security issues are detected, and the roadmap for Jira Align (formerly AgileCraft) includes the ability to track code quality, privacy and security on a story and feature level.

However, according to Ludwig, the melding of DevOps and IT security tooling, along with their disciplines, must be much broader and deeper in the long run. SearchSoftwareQuality caught up with him at the Atlassian Open event here to talk about his vision for the future of DevOps security, how it will affect Atlassian, and the IT software market at large.

SearchSoftwareQuality: We’re hearing more about security by design and applications security built into the DevOps process. What might we expect to see from Atlassian along those lines?

Ludwig: As a security practitioner, probably the most alarming factoid about security — and it gets more alarming every year — is the number of open roles for security professionals. I remember hearing at one point it was a million, and somebody else was telling me that they had found 3 million. So there’s this myth that people are going to be able to solve security problems by having more people in that space.

And an area that has sort of played into that myth is around tooling for the creation of secure applications. And a huge percentage of the current security skills gap is because we’re expecting security practitioners to find those tools, integrate those tools and monitor those tools when they weren’t designed to work well together.


It’s currently ridiculously difficult to build software securely. Just to think about what it means in the context of Atlassian, we have to license tools from half a dozen different vendors and integrate them into our environment. We have to think about how results from those tools flow into the [issue] resolution process. How do you bind it into Jira, so you can see the tickets, so you can get it into the hands of the developer? How do you make sure that test cases associated with fixing those issues are incorporated into your development pipeline? It’s a mess.

My expectation is that the only way we’ll ever get to a point where software can be built securely is if those capabilities are incorporated directly into the tools that are used to deliver it, as opposed to being add-ons that come from third parties.

SSQ: So does that include Atlassian?

Ludwig: I think it has to.

SSQ: What would that look like?

Ludwig: One of the areas where my team has been building something like that is the way we monitor our security investigations. We’ve actually released some open source projects in this area, where the way that we create alerts for Splunk, which we use as our SIEM, is tied into Jira tickets and Confluence pages. When we create alerts, a Confluence page is automatically generated, and it generates Jira tickets that then flow to our analysts to follow up on. And that’s actually tied in more broadly to our overall risk management system.

We are also working on some internal tools to make it easier for us to connect the third-party products that look for security vulnerabilities directly into Bitbucket. Every single time we do a pull request, source code analysis runs. And it’s not just a single piece of source code analysis; it’s a wide range of them. Is that particular pull request referencing any out-of-date libraries? And dependencies that need to be updated? And then those become comments that get added into the peer review process.

My job is to make sure that we ship the most secure software that we possibly can, and if there are commercial opportunities, which I think there are, then it seems natural that we might do those as well.
Adrian Ludwig, CISO, Atlassian

It’s not something that we’re currently making commercially available, nor do we have specific plans at this point to do that, so I’m not announcing anything. But that’s the kind of thing that we are doing. My job is to make sure that we ship the most secure software that we possibly can, and if there are commercial opportunities, which I think there are, then it seems natural that we might do those as well.

SSQ: What does that mean for the wider market as DevOps and security tools converge?

Ludwig: Over the next 10 years, there’s going to be massive consolidation in that space. That trend is one that we’ve seen other places in the security stack. For example, I came from Android. Android now has primary responsibility, as a core platform capability, for all of the security of that device. Your historical desktop operating systems? Encryption was an add-on. Sandboxing was an add-on. Monitoring for viruses was an add-on. Those are all now part of the mobile OS platform.

If you look at the antivirus vendors, you’ve seen them stagnate, and they didn’t have an off-road onto mobile. I think it’s going to be super interesting to watch a lot of the security investments made over the last 10 years, especially in developer space, and think through how that’s going to play out. I think there’s going to be consolidation there. It’s all converging, and as it converges, a lot of stuff’s going to die.


How to rebuild the SYSVOL tree using DFSR

Active Directory has a number of different components to keep track of user and resource information in an organization.

If one piece starts to fail and a recovery effort falters, it could mean it’s time for a rebuilding process.

The system volume (SYSVOL) is a shared folder found on domain controllers in an Active Directory domain that distributes the logon and policy scripts to users on the domain. Creating the first domain controller also produces SYSVOL and its initial contents. As you build domain controllers, the SYSVOL structure is created, and the contents are replicated from another domain controller. If this replication fails, it could leave the organization in a vulnerable position until it is corrected.

How the SYSVOL directory is organized

SYSVOL contains the following items:

  • group policy data;
  • logon scripts;
  • staging folders used to synchronize data and files between domain controllers; and
  • file system junctions.
Figure 1: Use the Get-SmbShare cmdlet to show the SYSVOL and NETLOGON shares on an Active Directory domain controller.

The Distributed File System Replication (DFSR) service replicates SYSVOL data on Windows 2008 and above when the domain functional level is Windows 2008 and above.
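
If you are not sure whether a given domain meets that bar, a quick check from PowerShell is shown below. This is a sketch that assumes the Active Directory module and the built-in dfsrmig utility are available on the domain controller; the first command reports the domain functional level, and the second reports whether SYSVOL replication has been migrated from FRS to DFSR (a global state of 'Eliminated' means DFSR is in use).

(Get-ADDomain).DomainMode

dfsrmig /getglobalstate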

Figure 2: The SYSVOL folder contains four folders: domain, staging, staging areas and sysvol.

The position of SYSVOL on disk is set when you promote a server to a domain controller. The default location is C:\Windows\SYSVOL\sysvol, as shown in Figure 1.

For this tutorial, we will use PowerShell Core v7 preview 3, because it fixes the .NET Core bug related to displaying certain properties, such as ProtectedFromAccidentalDeletion.

SYSVOL contains a number of folders, as shown in Figure 2.

How to protect SYSVOL before trouble strikes

As the administrator in charge of Active Directory, you need to consider how you’ll protect the data in SYSVOL in case of corruption or user error.

Windows backs up SYSVOL as part of the system state, but you should not restore from system state, as it might not result in a proper restoration of SYSVOL. If you’re working with the relative identifier (RID) master flexible single master operations role holder, you definitely don’t want to restore system state and risk having multiple objects with the same security identifier. You need a file-level backup of the SYSVOL area. Don’t forget you can use Windows Server Backup to protect SYSVOL on a domain controller if you can’t use your regular backup approach.

If you can’t use a backup, then login scripts can be copied to a backup folder. Keep the backup folder on the same volume so the permissions aren’t altered. You can back up group policy objects (GPOs) with PowerShell:

Import-Module GroupPolicy -SkipEditionCheck

The SkipEditionCheck parameter is required, because the GroupPolicy module hasn’t had CompatiblePSEditions in the module manifest set to include Core.

Create a folder for the backups:

New-Item -ItemType Directory -Path C:\ -Name GPObackup

Use the date to create a subfolder name and create the subfolder for the current backup:

$date = (Get-Date -Format 'yyyyMMdd').ToString()

New-Item -ItemType Directory -Path C:\GPObackup -Name $date

Run the backup:

Backup-GPO -All -Path (Join-Path -Path C:\GPObackup -ChildPath $date)

If you still use login scripts, rather than doing everything through GPOs, the system stores your scripts in the NETLOGON share in the C:\Windows\SYSVOL\domain\scripts folder.

Restore the SYSVOL folder

SYSVOL replication through DFSR usually works. However, as with any system, it’s possible for something to go wrong. There are two scenarios that should be covered:

  • Loss of SYSVOL information on a single domain controller. The risk is that the change that removed the data from SYSVOL has replicated across the domain.
  • Loss of SYSVOL on all domain controllers, which requires a complete rebuild.

The second case involving a complete rebuild of SYSVOL is somewhat more complicated, with the first case being a subset of the second. The following steps explain how to recover from a complete loss of SYSVOL, with added explainers to perform an authoritative replication of a lost file.

Preparing for a SYSVOL restore

To prepare to rebuild the SYSVOL tree, stop the DFSR service on all domain controllers:

Stop-Service DFSR
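
If you have more than a handful of domain controllers, one way to do this in a single pass is sketched below. It assumes PowerShell remoting is enabled on the domain controllers and that the Active Directory module is installed where you run it.

$dcs = (Get-ADDomainController -Filter *).HostName

Invoke-Command -ComputerName $dcs -ScriptBlock { Stop-Service -Name DFSR }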


On the domain controller with the SYSVOL you want to fix — or the one with the data you need to replicate — disable DFSR and make the server authoritative.

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC01,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * |
Set-ADObject -Replace @{'msDFSR-Enabled'=$false; 'msDFSR-options'=1}

Disable DFSR on the other domain controllers in the domain. The difference in the commands is you’re not setting the msDFSR-options property.

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC02,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * |
Set-ADObject -Replace @{'msDFSR-Enabled'=$false}

Rebuild the SYSVOL tree data

The next step is to restore the data. You can skip this if you’re just forcing replication of lost data.

On domain controllers where you can’t perform a restore, you’ll need to rebuild the SYSVOL tree folder structure and share structure. This tutorial assumes you’ve created SYSVOL in the default location with the following folder structure:

C:\Windows\SYSVOL

C:\Windows\SYSVOL\domain

C:\Windows\SYSVOL\domain\policies

C:\Windows\SYSVOL\domain\scripts

C:\Windows\SYSVOL\staging

C:\Windows\SYSVOL\staging\domain

C:\Windows\SYSVOL\staging areas

C:\Windows\SYSVOL\sysvol

You can use the following PowerShell commands to re-create the folders in the minimum number of steps. Be sure to change the nondefault location of the Stest folder used below to match your requirements.

New-Item -Path C:\Stest\SYSVOL\domain\scripts -ItemType Directory

New-Item -Path C:\Stest\SYSVOL\domain\policies -ItemType Directory

New-Item -Path C:\Stest\SYSVOL\staging\domain -ItemType Directory

New-Item -Path 'C:\Stest\SYSVOL\staging areas' -ItemType Directory

New-Item -Path C:\Stest\SYSVOL\sysvol -ItemType Directory

Re-create the directory junction points. Map SYSVOL\domain (source folder) to SYSVOL\sysvol and SYSVOL\staging\domain (source folder) to SYSVOL\staging areas.

You need to run mklink as administrator from a command prompt, rather than PowerShell:

C:\Windows>mklink /J C:\stest\SYSVOL\sysvol\sphinx.org C:\stest\SYSVOL\domain

Junction created for C:\stest\SYSVOL\sysvol\sphinx.org <<===>> C:\stest\SYSVOL\domain

C:\Windows>mklink /J "C:\stest\SYSVOL\staging areas\sphinx.org" C:\stest\sysvol\Staging\domain

Junction created for C:\stest\SYSVOL\staging areas\sphinx.org <<===>> C:\stest\sysvol\Staging\domain

Set the following permissions on the SYSVOL folder:

NT AUTHORITY\Authenticated Users: ReadAndExecute, Synchronize

NT AUTHORITY\SYSTEM: FullControl

BUILTIN\Administrators: Modify, ChangePermissions, TakeOwnership, Synchronize

BUILTIN\Server Operators: ReadAndExecute, Synchronize

Inheritance should be blocked.
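
If you would rather script the permissions than set them in the Security dialog, the following sketch applies the entries above and blocks inheritance. It assumes the default C:\Windows\SYSVOL location; adjust the path if your SYSVOL lives elsewhere.

$acl = Get-Acl -Path C:\Windows\SYSVOL

# Block inheritance and discard any inherited entries.
$acl.SetAccessRuleProtection($true, $false)

# Identity and rights pairs taken from the list above.
$entries = @(
    @('NT AUTHORITY\Authenticated Users', 'ReadAndExecute, Synchronize'),
    @('NT AUTHORITY\SYSTEM', 'FullControl'),
    @('BUILTIN\Administrators', 'Modify, ChangePermissions, TakeOwnership, Synchronize'),
    @('BUILTIN\Server Operators', 'ReadAndExecute, Synchronize')
)

foreach ($entry in $entries) {
    # Apply each entry to the folder, its subfolders and its files.
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule($entry[0], $entry[1], 'ContainerInherit, ObjectInherit', 'None', 'Allow')
    $acl.AddAccessRule($rule)
}

Set-Acl -Path C:\Windows\SYSVOL -AclObject $acl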

If you don’t have a backup of the GPOs, re-create the default GPOs with the DCGPOFIX utility, and then re-create your other GPOs.

You may need to re-create the SYSVOL share (See Figure 1). Set the share permissions to the following:

Everyone: Read

Authenticated Users: Full control

Administrators group: Full control

Set the share comment (description) to Logon server share.
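
To re-create the share from PowerShell rather than the management console, a sketch along the following lines works; it assumes the default path and mirrors the share permissions and comment listed above.

New-SmbShare -Name SYSVOL -Path C:\Windows\SYSVOL\sysvol -Description 'Logon server share' -ReadAccess Everyone -FullAccess 'Authenticated Users', Administrators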

Check that the NETLOGON share is available. It remained available during my testing process, but you may need to re-create it. 

Share permissions for NETLOGON are the following:

Everyone: Read

Administrators: Full control

You should be able to restart replication.

How to restart Active Directory replication

Start the DFSR service and reenable DFSR on the authoritative server:

Start-Service  -Name DFSR

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC01,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * | Set-ADObject -Replace @{'msDFSR-Enabled'=$true}

Run the following command to initialize SYSVOL:

DFSRDIAG POLLAD

If you don’t have the DFS management tools installed, run this command from a Windows PowerShell 5.1 console:

Install-WindowsFeature RSAT-DFS-Mgmt-Con

The ServerManager module cannot load into PowerShell Core at this time.

Start DFSR service on other domain controllers:

Start-Service -Name DFSR

Enable DFSR on the nonauthoritative domain controllers. Check that replication has occurred.

Get-ADObject -Identity "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=TTSDC02,OU=Domain Controllers,DC=Sphinx,DC=org" -Properties * | Set-ADObject -Replace @{'msDFSR-Enabled'=$true}

Run DFSRDIAG on the nonauthoritative domain controllers:

DFSRDIAG POLLAD

The results might not be immediate, but replication should restart, and then SYSVOL should be available.
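
One simple way to confirm that replication has caught up is to compare the policy folders each domain controller is sharing. The command below is a sketch that reuses the server and domain names from the earlier examples; no output from Compare-Object means the two domain controllers hold the same set of policy folders.

Compare-Object (Get-ChildItem \\TTSDC01\SYSVOL\Sphinx.org\Policies).Name (Get-ChildItem \\TTSDC02\SYSVOL\Sphinx.org\Policies).Name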

The process of rebuilding the SYSVOL tree is not something that occurs every day. With any luck, you won’t ever have to do it, but it’s a skill worth developing to ensure you can protect and recover your Active Directory domain.


Data integration problems a hurdle companies must overcome

As organizations try to analyze the vast amounts of information they’ve collected, they need to overcome data integration problems before they can extract meaningful insights.

Decades of data exist for enterprises that have stood the test of time, and it’s often housed in different locales and spread across disparate systems.

The scope of the business intelligence that enterprises glean from it in that haphazard form is limited. Attempting to standardize the data, meanwhile, can be overwhelming.

Enter vendors that specialize in solving data integration issues, whose service is helping other companies curate the vast amounts of information they possess and put it in a place — and in a format — where it can be accessed and used to produce meaningful BI.

Cloud data integration provider Talend, along with others such as Informatica and MuleSoft, recently acquired by Salesforce, is one such vendor.

In the second part of a two-part Q&A, Talend CEO Mike Tuchen discusses the different data integration problems large enterprises face compared with their small and midsize brethren, as well as Talend’s strategy for helping companies address their sudden abundance of data.

In part one, Tuchen talks about the massive challenges that have developed over the last 10 to 15 years as organizations have begun to digitize and pool their data.

Are there different data integration problems a small- to medium-sized business might face compared to a large organization in terms of extracting data from a vast pool of information it has collected over the years?


Mike Tuchen: For a small or medium-sized company, for the most part they know where their systems are. There’s a much more humanly understandable set of sources where you’re going to get your data from, so for the most part cataloging for them isn’t required upfront. It’s something you can choose to do later, optionally. They can say, ‘I’m going to pull data from Salesforce and NetSuite, and HubSpot, and Zendesk for support.’ They can pull data from all those systems, make sure they have a consistent definition of who’s a customer and what they’re doing, and then start analyzing what the most effective campaigns are, who the most likely customers to convert are, who the most likely customers to retain or upsell are, or whatever they’re trying to do with the core analytics. Since you have a small number of systems — a small number of sources — you can go directly there, and it turns more into ‘let’s drive the integration process, let’s drive the cleaning process,’ and the initial cleaning process is a simpler problem.

So in essence, even though they may not have the financial wherewithal to invest in a team of data scientists, is the process of solving data integration issues actually easier for them?

Tuchen: For sure. Size creates complexity. It creates an opportunity as well, but the bigger you get, the more sources you have. Think about one end of the spectrum: you’ve got a large multinational company that has a whole bunch of different divisions spread out across the world, some of them brought in through acquisitions. Think about the plethora of different sources you have. We’re working with a customer that has a dozen different ERP systems that they’re now trying to bring data together from, and that’s just in one type of data — transactional data around financial transactions. Think about that kind of complexity versus a small company.

What is the core service Talend provides?

Tuchen: Talend is a data integration company, and our core approach is to help companies collect, govern, transform and share their data. What we’re seeing is that data, more and more, is becoming a critical strategic asset. Worldwide, as companies become more and more digitized, they’re seeing that data managed correctly is a competitive advantage, and at the heart of every single industry is a strategic data battle; if you solve that well, you have an advantage and you’ll out-execute your competitors. With that recognition, the importance of the problem that we’re solving is going up in our customers’ minds, and that creates an opportunity for us.

How does what Talend does help customers overcome data integration problems?

Tuchen: We have a cloud-based offering called Talend Data Fabric that includes a number of different components, including a lot of the different capabilities we talked about. There’s a data catalog that solves that discovery process and the data definition issue, making sure that we have a consistent definition, lineage of where does data start and where does it end, what happens to it along the way so you can understand impact analysis, and so on. That’s one part of our offering. And we have an [application programming interface] offering that allows you to share that with customers or partners or suppliers.

As you look at where data integration and mining are headed, what is Talend’s roadmap for the next one to three years?

Tuchen: Right now we’re doubling and tripling down on the cloud. Our cloud business is exploding. It’s growing well over 100% a year. What we’re seeing is the entire IT landscape is moving to the cloud. In particular in the data analytics, data warehouses, just over the last couple of years we’ve reached the tipping point. Now we’re at the point where cloud data warehouses are significantly better than anything you can get on premises — they’re higher performance, more flexible, more scalable, you can plug in machine learning, you can plug in real-time flows to them, there’s no upfront commitment, they’re always up to date. It’s now at the point where the benefits are so dramatic that every company in the world has either moved or is planning to move and do most of their analytical processing in the cloud. That creates an enormous opportunity for us, and one that we’re maniacally focused on. We’re putting an enormous amount of effort into maintaining and extending our leadership in cloud-based data integration and governance.

Editor’s note: This interview has been edited for clarity and conciseness.


Construct a solid Active Directory password policy

The information technology landscape offers many different methods to authenticate users, including digital certificates, one-time password tokens and biometrics.

However, there is no escaping the ubiquity of the password. The best Active Directory password policy for your organization should meet the threshold for high security and end-user satisfaction while minimizing the amount of maintenance effort.

Password needs adjust over time

Before the release of Windows Server 2008, Active Directory (AD) password policies were scoped exclusively at the domain level. The AD domain represented the fundamental security and administrative boundary within an AD forest.

The guidance at the time was to give all users within a domain the same security requirements. If a business needed more than one password policy, then your only choice was to break the forest into one or more child domains or separate domain trees.

Windows Server 2008 introduced fine-grained password policies, which allow administrators to assign different password settings objects to different AD groups. Your domain users would have one password policy while you would have different policies for domain administrators and your service accounts.
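
For example, the following sketch creates a fine-grained password policy and assigns it to the Domain Admins group with the Active Directory PowerShell module. The policy name and the values shown are illustrative only; choose settings that match your own compliance requirements.

Import-Module ActiveDirectory

New-ADFineGrainedPasswordPolicy -Name 'AdminsPSO' -Precedence 10 -MinPasswordLength 15 -ComplexityEnabled $true -PasswordHistoryCount 24 -LockoutThreshold 5 -LockoutDuration '00:30:00' -LockoutObservationWindow '00:30:00' -MaxPasswordAge '90.00:00:00' -MinPasswordAge '1.00:00:00'

Add-ADFineGrainedPasswordPolicySubject -Identity 'AdminsPSO' -Subjects 'Domain Admins'

# Confirm which policy actually applies to a given account.
Get-ADUserResultantPasswordPolicy -Identity SomeAdminAccount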

More security policies mean more administrative work

Deploying multiple password policies within a single AD domain allows you to check your compliance boxes and have additional flexibility, but there are trade-offs. First, increasing the complexity of your Active Directory password policy infrastructure results in greater administrative burden and increased troubleshooting effort.

Second, the more intricate the password policy, the unhappier your users will be. This speaks to the information security counterbalance between security strength on one side and user convenience on the other.

What makes a quality password? For the longest time, we had the following recommendations:

  • minimum length of 8 characters;
  • a mixture of uppercase and lowercase letters;
  • inclusion of at least one number;
  • inclusion of at least one non-alphanumeric character; and
  • no fragments of a username.

Ideally, the password should not correspond to any word in any dictionary to thwart dictionary-based brute force attacks. One way to develop a strong password is to create a passphrase and “salt” the passphrase with numbers and/or non-alphanumeric characters.


The key to remembering a passphrase is to make it as personal as possible. For example, take the following phrase: The hot dog vendor sold me 18 cold dogs.

That phrase may have some private meaning, which makes it nearly impossible to forget. Next, we take the first letter of each word and the numbers to obtain the following string: Thdvsm18cd.

If we switch the letter s with a dollar sign, then we’ve built a solid passphrase of Thdv$m18cd.
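
If you want to script the transformation, here is a small PowerShell sketch of the same trick: keep whole numbers, take the first letter of every other word, then swap the letter s for a dollar sign.

$phrase = 'The hot dog vendor sold me 18 cold dogs'

# Keep numeric words whole; otherwise take the first letter of each word.
$password = -join ($phrase -split ' ' | ForEach-Object { if ($_ -match '^\d+$') { $_ } else { $_.Substring(0, 1) } })

# Salt it with a case-sensitive substitution: s becomes $.
$password = $password.Replace('s', '$')    # Thdv$m18cd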

Striking the right balance

One piece of advice I nearly always offer to my consulting clients is to keep your infrastructure as simple as possible, but not too simple. What that means related to your Active Directory password policy is:

  • keep your domains to a minimum in your AD forest;
  • minimize your password policies while staying in compliance with your organizational/security requirements;
  • relax the password policy restrictions; and
  • encourage users to create a single passphrase that is both easy to remember but hard to guess.

Password guidelines adjust over time

Relax the password policy? Yes, that’s correct. In June 2017, the National Institute of Standards and Technology (NIST) released Special Publication 800-63B, which presented a more balanced approach between usability and security.

When you force your domain users to change their passwords regularly, they are likely to reuse some portion of their previous passwords, such as password, password1, password2, and so forth.

The new NIST guidance suggests that user passwords:

  • range between 8 and 64 characters in length;
  • have the ability to use non-alphanumerics, but do not make it a requirement;
  • prevent sequential or repeating characters;
  • prevent context-specific passwords such as user name and company name;
  • prevent commonly used passwords; and
  • prevent passwords from known public data breaches.

Boost password quality with help from tools

These are great suggestions, but they are difficult to implement with native Active Directory password policy tools. For this reason, many businesses purchase a third-party password management tool, such as Anixis Password Policy Enforcer, ManageEngine ADSelfService Plus, nFront Password Filter, Specops Password Policy, Thycotic Secret Server and Tools4ever Password Complexity Manager, to name a few.

Third-party password policy tools tap into the cloud to take advantage of public identity breach databases, lists of the most common passwords and other sources to make your domain password policy much more contemporary and organic. It’s worth considering the cost of these products when you consider the potential loss from a data breach that happened because of a weak password.


Deep Learning Indaba 2018: Strengthening African machine learning

Images ©2018 Deep Learning Indaba.

At the 30th Conference on Neural Information Processing Systems in 2016, one of the world’s foremost gatherings on machine learning, there was not a single accepted paper from a researcher at an African institution. In fact, for the last decade, the entire African continent has been absent from the contemporary machine learning landscape. The following year, a group of researchers set out to change this, founding a world-class machine learning conference that would strengthen African machine learning – the Deep Learning Indaba.

The first Deep Learning Indaba took place at Wits University in South Africa. The indaba (a Zulu word for a gathering or meeting) was a runaway success, with almost 300 participants representing 22 African countries and 33 African research institutes. It was a week-long event of teaching, sharing and debate around the state of the art in machine learning and artificial intelligence that aimed to be a catalyst for strengthening machine learning in Africa.

Attendees at Deep Learning Indaba 2017, held at Wits University, South Africa.

Now in its second year, the Deep Learning Indaba will be held September 9-14 at Stellenbosch University in South Africa, and Microsoft is proud to sponsor the 2018 event.

The conference offers an exciting line-up of talks, hands-on workshops, poster sessions and networking/mentoring events. Once again it has attracted a star-studded guest speaker list – Google Brain lead and Tensorflow co-creator Jeff Dean; DeepMind lead Nando de Freitas; and AlphaGo lead, David Silver. Microsoft is flying in top researchers as well; Katja Hofmann will speak about reinforcement learning and Project Malmo (check out her recent podcast episode). Konstantina Palla will present on generative models and healthcare. And Timnit Gebru will talk about fairness and ethics in AI.

The missing continent

The motivation behind this conference really resonated with me. When I heard about it, I knew I wanted to contribute to the 2018 Indaba, and I was excited that Microsoft was already signed-up as a headline sponsor, and had our own Danielle Belgrave on the advisory board.

African countries represented at the 2017 Deep Learning Indaba.

Dr. Tempest van Schaik, Software Engineer, AI & Data Science

I graduated from University of the Witwatersrand (“Wits”) in Johannesburg, South Africa, with a degree in biomedical engineering, and a degree in electrical engineering, not unlike some of the conference organizers. In 2010, I came to the United Kingdom to pursue my PhD at Imperial College London and stayed on to work in the UK, joining Microsoft in 2017 as a software engineer in machine learning.

In my eight years working in the UK in the tech community, I have seldom come across African scientists, engineers and researchers sharing their work on the international stage. During my PhD studies, I was acutely aware of the Randlord monuments flanking my department’s building, despite the absence of any South Africans inside the department. At scientific conferences in Asia, Europe and the USA, I scanned the schedule for African institutions but seldom found them. Fellow Africans that I do find are usually working abroad. I have come to learn that Africa, a continent bigger than the USA, China, India, and Europe put together, has little visible global participation in science and technology. The reasons are numerous, with affordability being just one factor. I have felt the disappointment of trying to get a Tanzanian panelist to a tech conference in the USA. We realized that even if we could raise sufficient funds for his participation, the money would have achieved so much more in his home country that he couldn’t justify spending it on a conference.

Of all tech areas, perhaps it is artificial intelligence in particular that needs African participation. Countries such as China and the UK are gearing up for the next industrial revolution, creating plans for retraining and increasing digital skills. Those who are left behind could face disruption due to AI and automation and might not be able to benefit from the fruits of AI. Another reason to increase African participation in AI is to reduce algorithmic bias that can arise when a narrow section of society develops technology.

A quote from the Indaba 2017 report perhaps says it best: “The solutions of contemporary AI and machine learning have been developed for the most part in the developed world. As Africans, we continue to be receivers of the current advances in machine learning. To address the challenges facing our societies and countries, Africans must be owners, shapers and contributors of the advances in machine learning and artificial intelligence.”

Attendees at Deep Learning Indaba 2017.

Diversity

One of the goals of the conference is to increase diversity in the field. To quote the organizers, “It is critical for Africans, and women and black people in particular, to be appropriately represented in the advances that are to be made.” The make-up of the Indaba in its first two years is already impressive and leads by example to show how to organize a diverse and inclusive conference. From the Code of Conduct to the organizing committee, the advisory board, the speakers and attendees, you see a group of brilliant and diverse people in every sense.

The 2018 Women in Machine Learning lineup.

The Indaba’s quest for diversity aligns with another passion of mine, that of increasing women’s participation in STEM. Since my days of being the lonely woman in electrical engineering lectures, things have been improving. There seems to be more awareness today about attracting and retaining women in STEM, by improving workplace culture. However, there’s still a long way to go, and in the UK where I work, only 11% of the engineering workforce is female according to a 2017 survey. I have found great support and encouragement from women-in-tech communities and events such as PyLadies/RLadies London and AI Club For Gender Minorities, and saw the Indaba as an opportunity to pay it forward and link up with like-minded women globally. So, I’m very pleased to say that on the evening of September 10 at the Indaba, Microsoft is hosting a Women in Machine Learning event.

Indaba – a gathering.

The aim of our evening is to encourage, support and unite women in machine learning. Our panelists each will describe her personal career journey and her experiences as a woman in machine learning. As there will be a high number of students in attendance, our panel also highlights diverse career paths, from academia to industrial research, to applied machine learning, to start-ups. Our panel consists of Sarah Brown (Brown University, USA), Konstantina Palla (Microsoft Research, UK), Muthoni Wanyoike (InstaDeep, Kenya), Kathleen Siminyu (Africa’s Talking, Kenya) and myself from Microsoft Commercial Software Engineering (UK). We look forward to seeing you there!

SIEM evaluation criteria: Choosing the right SIEM products

Security information and event management (SIEM) products and services collect, analyze and report on security log data from a large number of enterprise security controls, host operating systems, enterprise applications and other software used by an organization. Some SIEMs also attempt to stop attacks in progress that they detect, potentially preventing compromises or limiting the damage that successful compromises could cause.

There are many SIEM systems available today, including light SIEM products designed for organizations that cannot afford or do not feel they need a fully featured SIEM added to their current security operations.

Because light SIEM products offer fewer capabilities and are much easier to evaluate, they are outside the scope of this article. Instead, this feature focuses on fully featured SIEMs and serves as a guide for creating SIEM evaluation criteria, highlighting the capabilities that merit particularly close attention compared with other security technologies.

It can be quite a challenge to figure out which products to evaluate, let alone to choose the one that’s best for a particular organization or team. Part of the evaluation process involves creating a list of SIEM evaluation criteria potential buyers can use to highlight important capabilities.

1. How much native support does the SIEM provide for relevant log sources?

A SIEM’s value is diminished if it cannot receive and understand log data from all of the log-generating sources in the organization. Most obvious are the organization’s enterprise security controls, such as firewalls, virtual private networks, intrusion prevention systems, email and web security gateways, and antimalware products.

It is reasonable to expect a SIEM to natively understand log files created by any major product or cloud-based service in these categories. If a tool cannot, its value to your security operations will be significantly limited.

In addition, a SIEM should provide native support for log files from the organization’s operating systems. An exception is mobile device operating systems, which often do not provide any security logging capabilities.

SIEMs should also natively support the organization’s major database platforms, as well as any enterprise applications that enable users to interact with sensitive data. Native SIEM support for other software is generally nice to have, but it is not mandatory.

If a SIEM does not natively support a log source, then the organization can either develop customized code to provide the necessary support or use the SIEM without the log source’s data.
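
In practice, the custom-code route usually means writing a small parser that turns the unsupported source’s log lines into the normalized event format the SIEM already ingests. The following is a minimal sketch of that idea in Python, assuming a hypothetical application log format and a generic JSON event schema; the field names and the forwarding step are illustrative rather than tied to any particular SIEM.

    # Minimal sketch of a custom log-source parser (hypothetical log format).
    # Assumes application lines such as:
    #   2018-09-10 14:02:11 LOGIN_FAILED user=alice src=10.0.0.5
    import json
    import re
    from datetime import datetime

    LINE_RE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
        r"(?P<event>\w+) user=(?P<user>\S+) src=(?P<src>\S+)"
    )

    def parse_line(line):
        """Convert one raw log line into a normalized event dict, or None."""
        m = LINE_RE.match(line)
        if not m:
            return None
        return {
            "timestamp": datetime.strptime(m.group("ts"), "%Y-%m-%d %H:%M:%S").isoformat(),
            "event_type": m.group("event"),
            "user": m.group("user"),
            "source_ip": m.group("src"),
            "log_source": "custom-app",  # label so analysts know where the event came from
        }

    if __name__ == "__main__":
        sample = "2018-09-10 14:02:11 LOGIN_FAILED user=alice src=10.0.0.5"
        event = parse_line(sample)
        # In practice the event would be forwarded to the SIEM's ingestion API
        # or written to a file the SIEM already collects.
        print(json.dumps(event, indent=2))

The real work in such a parser is agreeing on the normalized field names, so the SIEM can correlate these events with everything else it collects.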

2. Can the SIEM supplement existing logging capabilities?

An organization’s particular applications and software may lack robust logging capabilities. Some SIEM systems and services can supplement these by performing their own monitoring in addition to their regular job of log management.

In essence, this extends the SIEM from being strictly a centralized log collection, analysis and reporting tool to also generating raw log data on behalf of other hosts.
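
To make that concrete, the sketch below shows one way a lightweight agent might supplement an application that has no audit logging of its own: it polls a directory and emits file-change events in a form a SIEM could collect. The watched path, polling interval and event fields are assumptions for illustration, not features of any specific product.

    # Minimal sketch of an agent that generates log data on behalf of a host
    # whose application lacks audit logging (illustrative, not SIEM-specific).
    import json
    import os
    import time

    WATCH_DIR = "/var/data/reports"  # assumed path; point it at a directory that matters
    POLL_SECONDS = 30

    def snapshot(path):
        """Return {filename: mtime} for every regular file directly under path."""
        return {
            name: os.stat(os.path.join(path, name)).st_mtime
            for name in os.listdir(path)
            if os.path.isfile(os.path.join(path, name))
        }

    def emit(event_type, filename):
        """Print a normalized event; a real agent would forward it to the SIEM."""
        print(json.dumps({
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "event_type": event_type,
            "file": filename,
            "log_source": "file-watch-agent",
        }))

    if __name__ == "__main__":
        previous = snapshot(WATCH_DIR)
        while True:
            time.sleep(POLL_SECONDS)
            current = snapshot(WATCH_DIR)
            for name in current.keys() - previous.keys():
                emit("FILE_CREATED", name)
            for name in previous.keys() - current.keys():
                emit("FILE_DELETED", name)
            for name in current.keys() & previous.keys():
                if current[name] != previous[name]:
                    emit("FILE_MODIFIED", name)
            previous = current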

3. How effectively can the SIEM make use of threat intelligence?

Most SIEMs are capable of ingesting threat intelligence feeds. These feeds, which are often acquired from separate subscriptions, contain up-to-date information on threat activity observed all over the world, including which hosts are being used to stage or launch attacks and what the characteristics of these attacks are. The greatest value in using these feeds is enabling the SIEM to identify attacks more accurately and to make more informed decisions, often automatically, about which attacks need to be stopped and what the best method is to stop them.

Of course, the quality of threat intelligence varies between vendors. Factors to consider when evaluating it include how often the feed is updated and how the vendor indicates its confidence in the malicious nature of each threat.
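
As a rough illustration of how a SIEM puts a feed to work, the sketch below matches normalized events against a small indicator list and attaches the feed’s confidence score to any hit. The feed format, the field names and the documentation IP addresses are all assumptions made for the example.

    # Minimal sketch of matching events against a threat intelligence feed.
    # The feed format (indicator, type, confidence) is a simplifying assumption.
    import json

    THREAT_FEED = [
        {"indicator": "203.0.113.7", "type": "ip", "confidence": 90},
        {"indicator": "198.51.100.23", "type": "ip", "confidence": 60},
    ]

    # Index the indicators once for constant-time lookups during enrichment.
    IOC_INDEX = {entry["indicator"]: entry for entry in THREAT_FEED}

    def enrich(event):
        """Flag an event whose source IP matches a known indicator."""
        hit = IOC_INDEX.get(event.get("source_ip"))
        if hit:
            event["threat_match"] = True
            event["threat_confidence"] = hit["confidence"]
        return event

    if __name__ == "__main__":
        sample = {"event_type": "LOGIN_FAILED", "source_ip": "203.0.113.7"}
        print(json.dumps(enrich(sample), indent=2))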

4. What forensic capabilities can SIEM products provide?

Forensic capabilities are an evolving SIEM evaluation criterion. Traditionally, SIEMs have only collected data provided by other log sources.

However, recently some SIEM systems have added various forensic capabilities that can collect their own data regarding suspicious activity. A common example is the ability to do full packet captures for a network connection associated with malicious activity. Assuming that these packets are unencrypted, a SIEM analyst can then review their contents more closely to better understand the nature of the packets.

Another aspect of forensics is host activity logging; the SIEM product can perform such logging at all times, or the logging could be triggered when the SIEM tool suspects suspicious activity involving a particular host.
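
As a concrete example of triggered collection, the sketch below starts a time-boxed packet capture for a suspect host when an alert fires. It assumes tcpdump is installed and that the script has capture privileges; the interface name, output path and capture window are illustrative, and a commercial SIEM would perform this through its own sensors rather than a shell command.

    # Minimal sketch of alert-triggered packet capture (illustrative only).
    # Assumes tcpdump is installed and the script runs with capture privileges.
    import subprocess
    import time

    def capture_host_traffic(suspect_ip, seconds=60, iface="eth0"):
        """Capture traffic to/from suspect_ip for a fixed window; return the pcap path."""
        pcap_path = "/tmp/capture_%s_%d.pcap" % (suspect_ip.replace(".", "_"), int(time.time()))
        proc = subprocess.Popen(
            ["tcpdump", "-i", iface, "-w", pcap_path, "host", suspect_ip]
        )
        time.sleep(seconds)  # capture window
        proc.terminate()
        proc.wait()
        return pcap_path

    if __name__ == "__main__":
        # Would normally be invoked by an alert handler, not run by hand.
        print(capture_host_traffic("203.0.113.7", seconds=10))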

5. What features do SIEM products provide to assist with performing data analysis?

SIEM products that are used for incident detection and handling should provide features that help users to review and analyze the log data for themselves, as well as the SIEM’s own alerts and other findings. One reason for this is that even a highly accurate SIEM will occasionally misinterpret events and generate false positives, so people need to have a way to validate the SIEM’s results.

Another reason for this is that the users involved in security analytics need helpful interfaces to facilitate their investigations. Examples of such interfaces include sophisticated search capabilities and data visualization capabilities.
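
The sketch below is a toy stand-in for that kind of search interface: it filters a set of normalized events by arbitrary field values and a time range. Real SIEM query languages are far richer, but the example shows the basic operation analysts rely on when validating alerts.

    # Minimal sketch of an event search helper, a toy stand-in for a SIEM query language.
    from datetime import datetime

    def search(events, start=None, end=None, **field_filters):
        """Return events within [start, end] whose fields match all filters."""
        results = []
        for event in events:
            ts = datetime.fromisoformat(event["timestamp"])
            if start and ts < start:
                continue
            if end and ts > end:
                continue
            if all(event.get(key) == value for key, value in field_filters.items()):
                results.append(event)
        return results

    if __name__ == "__main__":
        events = [
            {"timestamp": "2018-09-10T14:02:11", "event_type": "LOGIN_FAILED", "user": "alice"},
            {"timestamp": "2018-09-10T14:05:40", "event_type": "LOGIN_FAILED", "user": "bob"},
            {"timestamp": "2018-09-10T15:00:00", "event_type": "LOGIN_OK", "user": "alice"},
        ]
        hits = search(events,
                      start=datetime(2018, 9, 10, 14, 0),
                      end=datetime(2018, 9, 10, 14, 30),
                      event_type="LOGIN_FAILED")
        print(len(hits), "matching events")  # expected: 2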

6. How timely, secure and effective are the SIEM’s automated response capabilities?

Another SIEM evaluation criterion is the product’s automated response capabilities. Evaluating them is often an organization-specific exercise because automated response depends heavily on the organization’s network architecture, network security controls and other aspects of security management.

For example, a particular SIEM product may not have the ability to direct an organization’s firewall or other network security controls to terminate a malicious connection.

Besides ensuring the SIEM product can communicate its needs to the organization’s other major security controls, it is also important to consider the following characteristics (a rough sketch of this kind of automated response follows the list):

  • How long does it take the SIEM to detect an attack and direct the appropriate security controls to stop it?
  • How are the communications between the SIEM and the other security controls protected so as to prevent eavesdropping and alteration?
  • How effective is the SIEM product at stopping attacks before damage occurs?
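
The sketch below illustrates the first two questions in miniature: it sends a block request to a hypothetical firewall REST API over TLS and measures how long the round trip takes. The endpoint, token and payload format are assumptions for the example, not a real firewall’s API.

    # Minimal sketch of an automated response path, assuming a hypothetical
    # firewall REST API; the endpoint, token and payload are illustrative.
    import json
    import time
    import urllib.request

    FIREWALL_API = "https://firewall.example.internal/api/block"  # hypothetical endpoint
    API_TOKEN = "REPLACE_ME"                                      # hypothetical credential

    def block_ip(ip_address):
        """Ask the firewall to block an IP and return the round-trip latency in seconds."""
        payload = json.dumps({"action": "block", "ip": ip_address}).encode()
        request = urllib.request.Request(
            FIREWALL_API,
            data=payload,
            headers={
                "Content-Type": "application/json",
                "Authorization": "Bearer " + API_TOKEN,  # TLS protects this channel in transit
            },
        )
        started = time.monotonic()
        with urllib.request.urlopen(request) as response:
            response.read()
        return time.monotonic() - started

    def handle_alert(alert):
        """Called by the SIEM pipeline when a high-confidence alert fires."""
        latency = block_ip(alert["source_ip"])
        print("Blocked %s in %.2f seconds" % (alert["source_ip"], latency))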

7. Which security compliance initiatives does the SIEM support with built-in reporting?

Most SIEMs offer highly customizable reporting capabilities. Many of these products also offer built-in support to generate reports that meet the requirements of various security compliance initiatives. Each organization should identify which initiatives are applicable and then ensure that the SIEM product supports as many of these initiatives as possible.

For any initiative the SIEM does not support out of the box, make sure the product’s customizable reporting options are flexible enough to meet your requirements.
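
When a needed report is not built in, customization typically comes down to querying the event store and aggregating the results into the layout an auditor expects. The sketch below counts failed logins per system over a reporting period, a simplified stand-in for such a custom report; the event fields follow the earlier examples and are assumptions.

    # Minimal sketch of a custom compliance-style report: failed logins per system
    # over a reporting period (event field names are illustrative assumptions).
    from collections import Counter
    from datetime import datetime

    def failed_logins_by_system(events, period_start, period_end):
        """Count LOGIN_FAILED events per system within the reporting period."""
        counts = Counter()
        for event in events:
            ts = datetime.fromisoformat(event["timestamp"])
            if event["event_type"] == "LOGIN_FAILED" and period_start <= ts <= period_end:
                counts[event.get("system", "unknown")] += 1
        return counts

    if __name__ == "__main__":
        events = [
            {"timestamp": "2018-09-01T08:00:00", "event_type": "LOGIN_FAILED", "system": "hr-db"},
            {"timestamp": "2018-09-02T09:30:00", "event_type": "LOGIN_FAILED", "system": "hr-db"},
            {"timestamp": "2018-09-02T10:00:00", "event_type": "LOGIN_OK", "system": "crm"},
        ]
        report = failed_logins_by_system(events, datetime(2018, 9, 1), datetime(2018, 9, 30))
        for system, count in report.items():
            print(system, count)  # expected: hr-db 2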

Do your homework and evaluate

SIEMs are complex technologies that require extensive integration with enterprise security controls and numerous hosts throughout an organization. To evaluate which tool is best for your organization, it may be helpful to define basic SIEM evaluation criteria. There is not a single SIEM product that is the best system for all organizations; every environment has its own combination of IT characteristics and security needs.

Even the main reason for having a SIEM, such as meeting compliance reporting requirements or aiding in incident detection and handling, may vary widely between organizations. Therefore, each organization should do its own evaluation before acquiring a SIEM product or service. Examine the offerings from several SIEM vendors before even considering deployment.

This article presents several SIEM evaluation criteria that organizations should consider, but other criteria may also be necessary. Think of these as a starting point for the organization to customize and build upon to develop its own list of SIEM evaluation criteria. This will help ensure the organization chooses the best possible SIEM product.