FBI hacking may have crossed international borders

An FBI hacking operation may have crossed international borders, according to new court documents, but experts are unsure what the consequences may be.

The details come as part of court filings in the appeal of David Tippens in connection with the Playpen dark web child pornography website. During the investigation, the FBI reportedly seized the Playpen server but kept it running for 13 days. During this time, the FBI's hacking operation deployed a network investigative technique, malware designed by the bureau to gather information on users of the website.

According to the court filing first obtained by The Daily Beast, the FBI hacking operation “ultimately seized 8,713 IP addresses and other identifying data from computers located throughout the United States and in 120 other countries, including Russia, Iran and China, as well as data from an entity the government described as ‘a satellite provider.'”

Philip Lieberman, president of Los Angeles-based Lieberman Software, told SearchSecurity this was an especially tricky scenario because “the FBI is not authorized for foreign operations” under its federal mandate.

Nicholas Weaver, computer security researcher at the International Computer Science Institute in Berkeley, Calif., said the anonymity of the dark web makes this FBI hacking case more complicated.

Robert Cattanach, a partner at Dorsey & Whitney LLP, based in Minneapolis, agreed that the FBI might not “know the physical location of the computer until it accesses the computer,” but said international cooperation in these investigations might be possible even with countries like Russia, China and Iran.

“In an area like child pornography one would not expect a lot of friction, even from these kinds of countries, but in the delicate world of international relations the best result one could often hope for is for the foreign government to ‘look the other way’ as we accessed computers to build a case against a U.S. resident,” Cattanach told SearchSecurity. “Of course, we might also be willing to ‘look the other way’ in similar situations. If a foreign power is pressing the envelope to gain access to national security or confidential business information however that would be an entirely different story.”

Network redundancy design does not always equal resiliency

Network redundancy design isn’t everything, according to Ivan Pepelnjak, who tackles the subject of whether redundancy equals resiliency in an IPSpace post. His conclusion: Full redundancy doesn’t necessarily result in greater resiliency, but network redundancy design can help decrease the probability of a failure occurring.

Many companies have adopted site reliability engineers, a term that Pepelnjak suggests is becoming watered down. These engineers sometimes trigger unanticipated failures, whether manually or automatically, through mistaken actions intended to shore up redundancy. What’s more, statistics suggest that added redundancy decreases availability during “gray failure” events, when components’ performance may only be subtly degraded.
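
Pepelnjak’s point about gray failures can be illustrated with a rough, hypothetical calculation. The Python sketch below, using made-up probabilities that are not from his post, shows why classic redundancy math improves with every replica you add, while the chance that at least one replica is silently degraded yet still taking traffic gets worse.

```python
# Illustrative probabilities only; these numbers are assumptions, not data
# from Pepelnjak's post.

def hard_failure_outage(p_hard: float, n: int) -> float:
    """Probability that ALL n replicas have failed outright (the classic
    case where redundancy helps)."""
    return p_hard ** n

def gray_failure_exposure(p_gray: float, n: int) -> float:
    """Probability that AT LEAST ONE replica is subtly degraded while still
    passing health checks and receiving traffic (gets worse as n grows)."""
    return 1 - (1 - p_gray) ** n

if __name__ == "__main__":
    p_hard, p_gray = 0.01, 0.01
    for n in (1, 2, 4, 8):
        print(f"n={n}: full outage={hard_failure_outage(p_hard, n):.2e}, "
              f"gray-failure exposure={gray_failure_exposure(p_gray, n):.1%}")
```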

“In reality, we keep heaping layers of leaky abstractions and ever-more-convoluted kludges on top of each other until the whole thing comes crashing down resulting in days of downtime,” Pepelnjak said, adding that Vint Cerf may have said it best in a recent article, when he wrote that, when it comes to network redundancy design, “We’re facing a brittle and fragile future.”

Read more of Pepelnjak’s thoughts on network redundancy design. 

WLAN design with iBwave

Lee Badman, blogging in Wirednot, shared his assessment of the new iBwave R9 software for WLAN design. Badman identified features carried over from earlier versions of the software that he liked, among them 3D modeling of the WLAN environment, modeling for inclined surfaces and cloud synchronization for survey projects. The software package also included a mobile app and a viewer that lets customers gain insight into the design team’s viewpoint without requiring the purchase of the iBwave software.

In the new version of the iBwave design suite, the software offers an improved user interface, the ability to institute coverage exclusion zones and interoperability with software-defined radios. Badman praised the software’s inclusion of smart antenna contouring, which allows users to manipulate simulated access points to determine signal strength once a floor plan is known. Additionally, iBwave includes auto cable routing, a feature that maps cables virtually after cable tray and router locations are placed.

Dig deeper into Badman’s thoughts on iBwave for WLAN design.

Adding a secure management plane in the cloud

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., sees many cases of cybersecurity professionals installing management servers on their networks to avoid disruptive change. “Given the history of cybersecurity, this behavior is certainly understandable — I control what happens on my own network but have almost no oversight what takes place on Amazon Web Services, Azure, or Google Cloud Platform. Yup, there’s a lot of history and dogma here, but I believe it’s time for CISOs to reconsider,” he said.

Oltsik recommends a secure cloud-based management plane because of reduced costs, faster product upgrades, and more rapid evolution and rollout of products. He also sees security operations and analytics platform architecture (SOAPA) being deployed more quickly through cloud-based management planes. To retain control, Oltsik recommends that buyers request standard, documented APIs from vendors so that users have a say over when and how much data to ingest. “The benefits of moving to a cloud-based security management model speak for themselves. Given this, old school CISOs should think long and hard about maintaining the status quo,” Oltsik added.
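
To make the API recommendation concrete, here is a minimal sketch of how a security team might pull alerts from a vendor’s documented REST API on its own schedule. The endpoint, parameters and response shape below are assumptions for illustration, not any specific vendor’s interface.

```python
# Hypothetical example: pulling alerts from a cloud security management API.
# The base URL, query parameters and JSON fields are assumptions, not a real
# vendor's documented interface.
import datetime as dt

import requests

API_BASE = "https://api.example-vendor.com/v1"  # hypothetical endpoint
API_TOKEN = "REPLACE_ME"                        # issued by the vendor console


def fetch_alerts(since: dt.datetime, max_pages: int = 5) -> list:
    """Pull alerts created after 'since', one page at a time, so the buyer
    controls when data is ingested and how much is pulled per run."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    params = {"created_after": since.isoformat(), "page_size": 100}
    alerts, url = [], f"{API_BASE}/alerts"
    for _ in range(max_pages):
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        alerts.extend(body.get("alerts", []))
        url = body.get("next_page")  # assumed cursor-style pagination
        params = {}                  # the cursor URL already encodes the query
        if not url:
            break
    return alerts


if __name__ == "__main__":
    yesterday = dt.datetime.utcnow() - dt.timedelta(days=1)
    print(f"Pulled {len(fetch_alerts(yesterday))} alerts")
```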

Explore more of Oltsik’s thoughts on a secure cloud-based management plane.

How to avoid common challenges when migrating to microservices

SAN JOSE, Calif. — Migrating to microservices calls for a significant technology transformation, according to speakers at this week’s API World 2017. In this article, three microservices experts — Irakli Nadareishvili, Eric Roch and Chris Tozzi — identify two common technical microservices challenges and offer advice for addressing them.

“Understanding how to build a microservice architecture can be painful, just like any other significant transformation can,” said Nadareishvili, senior director of technology at Capital One, based in McLean, Va., in a preconference interview. He elaborated during his API World 2017 session, “Implementing microservices at Capital One.”

Nadareishvili, Roch and Tozzi offer alternatives for the two most common technical mistakes in migrating to microservices: not properly decoupling a distributed system and failing to right-size microservices. Roch, whose API World 2017 session was titled “Overcoming microservices challenges,” is CTO of enterprise architecture for cloud provider Perficient, based in St. Louis. Tozzi is a DevOps analyst for Fixate IO in Livermore, Calif.

Technical challenge No. 1: Not decoupling properly

Nadareishvili has seen companies that set out to build a microservices architecture end up creating a distributed monolith instead, in which the organization can’t change one service without affecting another. This goes against the grain of the microservices architecture, which should reduce coordination between systems.

In this worst-case scenario, Tozzi noted, the organization must run and manage applications that are deployed as distinct services, but in which those services remain interdependent. “For example, you might have a front-end service that only works if it can connect to a database that is configured in a certain way,” he explained. That database, in turn, expects the front-end service to be configured in a particular way. “If you’re not distributing apps into independent services, you might as well still be running a monolith.”

Nadareishvili recommended Ben Christensen’s Rule of Twos as one way to avoid this kind of coupling and the distributed monoliths it creates. This practice calls for making sure the infrastructure supports at least two alternatives for any critical component.
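
A minimal sketch of the Rule of Twos idea follows: services code against an abstract contract for a critical component, and the infrastructure keeps at least two interchangeable implementations behind it. The CacheBackend interface and both implementations are hypothetical stand-ins, not code from Capital One or Christensen.

```python
# Hypothetical illustration of the "at least two alternatives" idea: any
# critical dependency is reached through an abstract contract with two or
# more swappable implementations.
import json
import os
from abc import ABC, abstractmethod


class CacheBackend(ABC):
    """The contract services depend on, rather than a concrete product."""

    @abstractmethod
    def get(self, key: str):
        ...

    @abstractmethod
    def set(self, key: str, value: str) -> None:
        ...


class InMemoryCache(CacheBackend):
    """First alternative: a process-local dictionary."""

    def __init__(self):
        self._data = {}

    def get(self, key: str):
        return self._data.get(key)

    def set(self, key: str, value: str) -> None:
        self._data[key] = value


class FileCache(CacheBackend):
    """Second alternative: a JSON file, fulfilling the same contract."""

    def __init__(self, path: str):
        self._path = path

    def _load(self) -> dict:
        if os.path.exists(self._path):
            with open(self._path) as f:
                return json.load(f)
        return {}

    def get(self, key: str):
        return self._load().get(key)

    def set(self, key: str, value: str) -> None:
        data = self._load()
        data[key] = value
        with open(self._path, "w") as f:
            json.dump(data, f)


def make_cache(kind: str) -> CacheBackend:
    """Because callers only see CacheBackend, either alternative can be
    swapped in without coordinated changes across services."""
    return InMemoryCache() if kind == "memory" else FileCache("cache.json")
```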

Always think “loose coupling” when migrating to microservices, and don’t cling to the SOA model, Tozzi said. “The best way to avoid the distributed monolith anti-pattern is to ensure that each microservice in your application can start and run independently of others,” he said. “The configuration for each service should also be self-contained and able to be modified without affecting other services or apps.”
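
As a rough illustration of Tozzi’s advice, the sketch below shows a service that owns its configuration through environment variables and degrades gracefully when a dependency is unreachable, so it can still start and run on its own. The service names, variables and endpoint are illustrative assumptions, not examples from the article.

```python
# Hypothetical catalog service that starts and runs independently of its
# neighbours; every name and URL here is an assumption for illustration.
import json
import os
import urllib.request

# Each service owns its configuration and supplies safe defaults, so it can
# start even when other services are absent or configured differently.
CATALOG_DB_URL = os.getenv("CATALOG_DB_URL", "sqlite:///catalog.db")
RECS_URL = os.getenv("RECS_URL", "http://localhost:8081/recommendations")


def get_recommendations(product_id: str) -> list:
    """Call an optional downstream service, but degrade gracefully instead of
    failing to start or crashing when it is unreachable."""
    try:
        with urllib.request.urlopen(f"{RECS_URL}?product={product_id}",
                                    timeout=2) as resp:
            return json.load(resp)
    except OSError:
        return []  # degraded but alive: the catalog service keeps working


if __name__ == "__main__":
    print(get_recommendations("sku-123"))
```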

Microservices challenge No. 2: Right-sizing

In his work with DevOps teams, Roch said he sees them struggle with how to right-size microservices. They’re often not sure how micro a microservice should be.

One option, Roch said, is using the domain-driven design (DDD) bounded contexts as the sizing guide. There are, however, two major problems with this approach:

  • Sizing microservices with DDD bounded contexts only works at the early stages, because there is no single perfect size for a microservice. The right size is a function of time, and it tends to shrink as the system evolves. “As teams mature in their practice of microservices and the understanding of their problem domain, larger microservices usually split and overall granularity increases,” Roch explained.
  • Even at the early stages, using formal DDD is hard. It requires some expertise and experience. “I’ve met many more teams that were aware of DDD than teams that were actually practicing it,” Roch said. “And I believe the reason is complexity of formal [and] conventional DDD analysis.”

Roch recommended using bounded contexts for the initial sizing of microservices. Beyond that, he said, use event storming, a group modeling technique, to discover those bounded contexts instead of more traditional DDD approaches. “I am a big fan of event storming both for microservices, as well as in general for understanding a domain,” Roch said. He recommended it for any style of architecture. “It brings unparalleled understanding of the problem domain shared between product owners and tech teams.”

One tricky part of using event storming, Roch said, is that the final artifact, typically very long strips of paper covered with hundreds of “stickies,” is hard to capture in any reasonable way. When his team goes through an event-storming exercise, he uses the Microservices Design Canvas from API Academy to capture the final design. The canvas identifies the desired attributes of a service before it is developed. “These two tools in combination work really well for us,” he said.

Microservices challenges bottom line: Reduce coordination

Always pay attention to what causes coordination in your system, the experts said. Finding and removing those couplings will speed the migration to microservices. “Microservice architecture is primarily about reducing coordination needs,” Roch concluded. “Everything else is largely secondary.”

Calling ‘all aboard’ on the six-month Java release train

For more than 20 years, according to Mark Reinhold, “the Java SE Platform and the JDK have evolved in large, irregular, and somewhat unpredictable steps.” But in a blog post on Sept. 6, Reinhold, chief architect of the Java platform group at Oracle, proposed changes to the way future Java Development Kit releases will happen.

Historically, every new Java release has been feature-driven. Each release proposes milestones and Java release dates, but those dates tend to get pushed back if there are issues incorporating certain features. Security issues and the incorporation of Project Lambda delayed the version 8 release, and ongoing disagreements about Project Jigsaw will mean a delay in the version 9 Java release.

Java release cynicism

Given that history of slipping release dates, some sarcastically said the most impressive feature of Java 7 was the fact that it was released on time. With features driving the release, dates become somewhat malleable. But all of that is about to change.

“Taking inspiration from the release models used by other platforms and by various operating-system distributions, I propose that after Java 9 we adopt a strict, time-based model with a new feature release every six months, update releases every quarter, and a long-term support release every three years,” Reinhold wrote in his Moving Java Forward Faster blog post.

The Java release train

While this approach is new to Java, it’s a fairly common release management model in other areas of the software development world. Known as a release train, the idea is that the train always leaves on time, regardless of whether a planned feature is on board. If the feature makes it, that’s great. If not, it will have to wait for the next train in another six months.

So how feasible is regularly scheduling a Java release train? “I do think there’s a challenge here,” said Gil Tene, CTO at Azul Systems Inc., based in Sunnyvale, Calif. “But I think the biggest part of the challenge is sticking to the plan. It would require the discipline to say, ‘We are not holding back a release for any feature.’ Then, you can stick to it.”

The next long-term support Java release?

As a whole, the community seems to be receiving the proposed changes to the Java release cycle positively, but there are concerns — one of which pertains to whether or not Java 9 will be a long-term support release. That puts organizations looking to go into production with a Java 9 JDK in a bit of a predicament.

To learn more about the announcement, along with some of the potential challenges organizations might face if the Java release train becomes the new normal, listen to the accompanying podcast in which Cameron McKenzie speaks with Azul’s Gil Tene about the effect a six-month Java release cycle will have on organizations.

For Sale – Seagate external 500gb hard drive

Seagate External Hard Drive 500gb
Very good condition with no errors according to my iMac!

Open to sensible offers as I have no need for it!

Thanks

Price and currency: 35
Delivery: Delivery cost is included within my country
Payment method: BACS PPG
Location: Bolton
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference


Cybersecurity machine learning moves ahead with vendor push

Cybersecurity machine learning is growing in popularity, according to Jon Oltsik, an analyst with Enterprise Strategy Group Inc. in Milford, Mass. Oltsik attended the recent Black Hat conference, where technology vendors were abuzz with talk of cybersecurity machine learning.

ESG research surveyed 412 respondents about their understanding of artificial intelligence (AI) and cybersecurity machine learning; only 30% said they were very knowledgeable on the subject, and only 12% said their organizations had deployed these systems widely.

According to Oltsik, the cybersecurity industry sees an opportunity, because only 6% of surveyed respondents said their organizations were not considering AI or machine learning deployments. He said companies will need to educate the market, identify use cases, work with existing technologies and provide good support.

“I find machine learning [and] AI technology extremely cool but no one is buying technology for technology sake. The best tools will help CISOs improve security efficacy, operational efficiency, and business enablement,” Oltsik wrote.

Read more of Oltsik’s thoughts on cybersecurity machine learning.

Microsoft leverages Kubernetes backing for containers

Microsoft is positioning itself to fight back against the success of Amazon Web Services, according to Charlotte Dunlap, an analyst with Current Analysis in Sterling, Va.

The company launched a new container service and joined the Cloud Native Computing Foundation (CNCF) amidst earnings reports indicating that its Azure platform is outcompeting Salesforce and other providers. Microsoft unveiled a preview of its Azure Container Instances service in a bid to support developers who want to avoid the complexities of virtual machine management.

Dunlap said the announcement is significant because companies are still reluctant to deploy next-generation technologies incorporating containers and microservices, despite their advantages. In particular, Dunlap said providers should focus on explaining the cost-benefit ratios associated with refactoring departmental apps into containers.

By joining CNCF, meanwhile, Microsoft is “shunning” Amazon in the enterprise cloud market. “Expect to see a lot more platform service rollouts involving containers, microservices, etc., later this year during fall conferences in which cloud rivals continue to attempt to one-up one another,” Dunlap wrote.

Dig deeper into Dunlap’s thoughts on Microsoft’s support for containers.

SIEM for threat detection

Anton Chuvakin, an analyst with Gartner, said security information and event management, or SIEM, is not the best threat detection technology on its own. Based on conversations through Twitter, Chuvakin learned that many network professionals view SIEM as a compliance technology. Chuvakin said he sees these individuals as taking a viewpoint nearly 10 years out of date or perhaps struggling with bad experiences from failed SIEM implementations in the past.

Chuvakin said he uses SIEM for many of his threat detection tasks, but also uses log and traffic analysis, as well as endpoint visibility tools, almost equally. In his view, threat detection that focuses too heavily on the network and endpoints suffers serious security challenges unless it is coupled with log monitoring.

“Based on this logic, log analysis (perhaps using SIEM … or not) is indeed ‘best’ beginner threat detection. On top of this, SIEM will help you centralize and organize your other alerts,” Chuvakin wrote.
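
As a toy illustration of the kind of log analysis Chuvakin describes, the Python sketch below flags source IPs with repeated failed SSH logins. The log format, file path and alert threshold are assumptions for demonstration, not features of any SIEM product.

```python
# Toy log-analysis example: count failed SSH logins per source IP and flag
# likely brute forcing. Format, path and threshold are illustrative assumptions.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 10  # alert once a single source exceeds this many failures


def detect_brute_force(log_lines) -> dict:
    """Return a mapping of source IP -> failure count for noisy sources."""
    failures = Counter()
    for line in log_lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip: n for ip, n in failures.items() if n >= THRESHOLD}


if __name__ == "__main__":
    with open("/var/log/auth.log") as f:  # path varies by distribution
        for ip, count in detect_brute_force(f).items():
            print(f"ALERT: {count} failed logins from {ip}")
```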

Explore more of Chuvakin’s thoughts on SIEM.

Attala Systems shows off ‘CPU-less’ FPGA storage gear

No, the entire storage world has not gone software-defined. According to Attala Systems Inc., hardware-defined storage technology represents the next stage of evolution in flash storage.

The startup has emerged from stealth to preview a “CPU-less” storage appliance that consolidates processing, networking and storage functionality on field-programmable gate arrays (FPGAs) based on an Altera chipset. Attala and Intel, which owns Altera, this week will demonstrate the FPGA storage technology at the Flash Memory Summit in Santa Clara, Calif.

Attala calls its system the Attala High Performance Composable Storage Infrastructure, and the vendor positions it mainly as storage for cloud providers and private clouds. It consists of FPGA-powered host interfaces and scale-out data nodes connected over an NVM Express fabric that uses Remote Direct Memory Access over Converged Ethernet version 2 (RoCEv2). Attala uses FPGA devices as NVMe storage targets; rather than relying on a motherboard and CPU, the FPGAs also handle processing intelligence and network connectivity.

“Our premise is based on one of the main tenets of computer science: You can implement functionality in hardware much more efficiently than you can in software,” said Taufik Ma, founder of Attala Systems, based in San Jose, Calif.

The trend in storage has been away from expensive custom FPGAs to systems built on common x86 servers, shifting the differentiating features to the software. But Ma said engineering advances have added value to FPGAs.

“We’ve reached the point in the industry where you can pack enough logic, enough gates and enough data paths into a single FPGA. An FPGA is [no longer] just a very expensive prototyping platform, but a perfectly affordable production platform,” Ma added.

Attala’s device tiers flash for price, performance

It remains to be seen whether enterprise storage administrators will adopt the same view and can be lured to Attala’s unusual storage configuration. The Attala compute layer consists of one or more x86 servers, each with multiple Ethernet links. The vendor said it plans to offer options for 25 Gigabit Ethernet (GbE), 40 GbE and 50 GbE connectivity.

Attala data nodes are based on standard storage enclosures, with an FPGA sitting between the network and the NVMe SSDs. The data layer consists of a system of nodes, each of which can scale to support eight Ethernet links in 40 GbE and 50 GbE options. The vendor rates each data node to deliver up to 400 Gbps of data. The data nodes support 2.5-inch U.2 drives and M.2 cards for tiered flash infrastructure.

“We call them CPU-less servers. They do have integrated portions of the FPGA fabric, but they’re not entirely proprietary hardware. We’ve got a handful of FPGAs that have network connections on one side and SSDs on the other side,” Ma said.

Attala claims its automated orchestration and provisioning engine allows NVMe SSDs to be mapped across data nodes to specific applications at full network speed. Product shipments are slated to begin later this year.

The Attala Systems storage appliance provides a redundant data path between the network ports and dual-ported flash drives. The appliances replicate data across SSDs within the enclosure as a hedge against drive failure. The FPGAs can be programmed for advanced data services culled from Intel’s Intelligent Storage Acceleration Library.

“There are a lot of legacy software layers that squander the performance of the underlying storage media. An alternative is to install SSDs in the same server that runs your applications, but then you end up with silos or islands of data. Our approach is to unlock the underlying performance of the SSDs, so the resources can be shared across multiple servers,” Ma said.

FPGA aimed at cloud providers, private cloud deployments

Ma most recently spent nine years as a senior executive at networking vendor Emulex Corp. He previously served as a general manager at Intel’s enterprise systems group. He launched Attala Systems in 2015 with Sujith Arramreddy and Sai Gadiraju, founders of ServerWorks and ServerEngines. Broadcom bought ServerWorks for $1 billion in 2001, and Emulex acquired ServerEngines in 2010.

FPGAs have been gaining steam in public cloud environments. Amazon Web Services in April launched the Amazon EC2 F1 compute instance, which is designed to help developers to quickly write custom hardware accelerators to boost application performance. Microsoft uses FPGAs to accelerate its Bing search engine and its networking infrastructure.

Although cloud services providers are the initial focus, Ma said Attala’s storage use cases extend to traditional data centers running big data, e-commerce, financial and other high-frequency applications. He said these types of customers are testing Attala in preproduction, although he declined to identify any of the companies. He said Attala expects to reveal pricing and channel partnerships in the fourth quarter.