
Allo Sparky Single Board Computer

New, boxed Allo Sparky. According to Allo, this sets the standard for SBCs used as audio players.
Lots of audio HATs available from Allo.
New and boxed. No PSU included, but it runs from any 5 V micro USB PSU, like the Pi.
Sparky SBC (Motherboard) – EU

Price and currency: £35
Delivery: Delivery cost is included within my country

Location: Leeds
Advertised elsewhere?: Not advertised elsewhere
Prefer goods…


Challenges in cloud data security lead to a lack of confidence

Enterprise cloud use is full of contradictions, according to new research.

The “2017 Global Cloud Data Security Study,” conducted by Ponemon Institute and sponsored by security vendor Gemalto, found that one reason enterprises use cloud services is for the security benefits, but respondents were divided on whether cloud data security is realistic, particularly for sensitive information.

“More companies are selecting cloud providers because they will improve security,” the report stated. “While cost and faster deployment time are the most important criteria for selecting a cloud provider, security has increased from 12% of respondents in 2015 to 26% in 2017.”

Although 74% of respondents said their organization is either a heavy or moderate user of the cloud, nearly half (43%) said they are not confident that their organization’s IT department knows about all the cloud computing services it currently uses.

In addition, less than half of respondents said their organization has defined roles and accountability for cloud data security. While this number (46%) was up in 2017 — from 43% in 2016 and from 38% in 2015 — it is still low, especially considering the type of information that is stored in the cloud the most.

Customer data is at the highest risk

According to the survey findings, the primary types of data stored in the cloud are customer information, email, consumer data, employee records and payment information. At the same time, the data considered to be most at risk, according to the report, is payment information and customer information.

“Regulated data such as payment and customer information continue to be most at risk,” the report stated. “Because of the sensitivity of the data and the need to comply with privacy and data protection regulations, companies worry most about payment and customer information.”

Having tracked the progress of cloud security over the years, when we say ‘confidence in the cloud is up,’ we mean that we’ve come a long way.
Jason Hart, vice president and CTO of data protection, Gemalto

One possible explanation for why respondents feel that sensitive data is at risk is that cloud data security is tough to actually achieve.

“The cloud is storing all types of information from personally identifiable information to passwords to credit cards,” said Jason Hart, vice president and CTO of data protection at Gemalto. “In some cases, people don’t know where data is stored, and more importantly, how easy it is to access by unauthorized people. Most organizations don’t have data classification policies for security or consider the security risks; instead, they’re worrying about the perimeter. From a risk point of view, all data has a risk value.”

The biggest reason it is so difficult to secure the cloud, according to the study, is that it’s more difficult to apply conventional infosec practices in the cloud. The next most cited reason is that it is more difficult for enterprises to assess the cloud provider for compliance with security best practices and standards. The majority of respondents (71% and 67%, respectively) feel those are the biggest challenges, but also note that it is more difficult to control or restrict end-user access to the cloud, which also provides some security challenges.

“To solve both of these challenges, enterprises should have control and visibility over their security throughout the cloud, and being able to enforce, develop and monitor security policies is key to ensuring its integrity,” Hart said. “People will apply the appropriate controls once they’re able to understand the risks towards their data.”

Despite the challenges in cloud data security and the perceived security risks to sensitive data stored in the cloud, all-around confidence in cloud computing is on the rise — slightly. The 25% of respondents who said they are “very confident” their organization knows about all the cloud computing services it currently uses is up from 19% in 2015. Fewer people (43%) said they were “not confident” in 2017 compared to 55% in 2015.

“Having tracked the progress of cloud security over the years, when we say ‘confidence in the cloud is up,’ we mean that we’ve come a long way,” Hart said. “After all, in the beginning, many companies were interested in leveraging the cloud but had significant concerns about security.”

Hart noted that, despite all the improvements to business workflows that the cloud has provided, security is still an issue. “Security has always been a concern and continues to be,” he said. “Security in the cloud can be improved if the security control is applied to the data itself.”

Ponemon sampled over 3,200 experienced IT and IT security practitioners in the U.S., the United Kingdom, Australia, Germany, France, Japan, India and Brazil who are actively involved in their organization’s use of cloud services.

News briefs: Mobile recruiting interfaces still painful

Mobile recruiting platforms aren’t getting enough attention from HR departments, according to a recent Glassdoor report. Mobile interfaces are clunky and hard to use. They impose required fields that duplicate data that’s already on the résumé.

“Mobile job application experiences remain painful for most job seekers,” said Andrew Chamberlain, Glassdoor’s chief economist, in a report on upcoming trends. This is a problem for employers. Many job seekers today are using mobile devices to reach employer job sites.

It is a consequence of legacy enterprise applicant tracking systems (ATSes) built before the mobile era. Firms are waking up to this fact, and Glassdoor believes improving mobile recruiting systems is on the verge of becoming a priority.

A lot of organizations have a hodgepodge of HR systems. Their primary goal is moving to cloud and to mobile more quickly, said Tony DiRomualdo, senior director of the HR executive advisory program at The Hackett Group, based in Miami.

But mobile is only “widely implemented” in 16% of organizations surveyed last fall by Hackett. DiRomualdo said he believes the percentage is higher for mobile recruiting platforms, because it’s easier to make a business case.  

Mobile recruiting implementation “has been slower than a lot of people in HR would like,” DiRomualdo said. “They have a hard time getting the funding and prioritization for it.”

A new recruiting platform with ATS-like systems

Mobile job application experiences remain painful for most job seekers.
Andrew Chamberlain, chief economist, Glassdoor

Recruiting platform vendors are taking on some of the work of internal applicant tracking systems and can give job seekers a better mobile experience. They are creating dashboards and intelligent ranking systems. JobzMall, the latest addition to this trend, is due to launch Jan. 15.

The site, which has about 250 participating organizations and is running in a closed beta, organizes itself around a “virtual shopping mall,” said Nathan Candaner, co-founder of JobzMall, based in Irvine, Calif.

Employers have virtual stores and can use video to create a personalized experience about their business. There are different buildings — such as the startup building, one for nonprofits, another for freelancers and one for larger firms. Job seekers fill out a template on the recruiting platform, which they can use to apply for multiple jobs. The system gives applicants a little more transparency into the progress of their application.

Candaner said he sees a need for this type of recruiting platform. Many job sites today want users to cut and paste their résumés for each job application. The systems give employers little help in managing the applications.

JobzMall gives employers a dashboard, which includes collaborative tools, for managing and viewing applicants in one spot. The system knows what the qualifications are and the skill sets of the applicants. It also learns the employer’s behavior in evaluating candidates. It uses that to help rank and select applicants. “Our system learns, and in time, we do give very pointed candidates to required jobs,” Candaner said.
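The skills-based ranking Candaner describes can be illustrated with a minimal sketch. The scoring rule, function names and sample data below are assumptions of mine, not JobzMall's actual system, which also learns from employer behavior:

```python
# Hypothetical sketch: rank applicants by the fraction of a job's
# required skills they cover (illustrative, not JobzMall's algorithm).

def rank_applicants(required_skills, applicants):
    """Return applicants sorted by required-skill coverage, best first."""
    required = {s.lower() for s in required_skills}

    def score(applicant):
        have = {s.lower() for s in applicant["skills"]}
        return len(required & have) / len(required) if required else 0.0

    return sorted(applicants, key=score, reverse=True)

applicants = [
    {"name": "Ada", "skills": ["Python", "SQL"]},
    {"name": "Grace", "skills": ["Python", "SQL", "Docker"]},
    {"name": "Linus", "skills": ["C"]},
]

ranked = rank_applicants(["Python", "SQL", "Docker"], applicants)
print([a["name"] for a in ranked])  # ['Grace', 'Ada', 'Linus']
```

A real platform would blend this overlap score with the learned signals from past hiring decisions; this sketch covers only the skill-matching component.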

FBI hacking may have crossed international borders

An FBI hacking operation may have crossed international borders, according to new court documents, but experts are unsure what the consequences may be.

The details come from court filings in the appeal of David Tippens in connection with the Playpen dark web child pornography website. During the investigation, the FBI reportedly seized the Playpen server but kept it running for 13 days. During this time, an FBI hacking operation deployed a network investigative technique, malware designed by the FBI to gather information on users of the website.

According to the court filing first obtained by The Daily Beast, the FBI hacking operation “ultimately seized 8,713 IP addresses and other identifying data from computers located throughout the United States and in 120 other countries, including Russia, Iran and China, as well as data from an entity the government described as ‘a satellite provider.'”

Experts like Philip Lieberman, president of Los Angeles-based Lieberman Software, told SearchSecurity this was an especially tricky scenario because “the FBI is not authorized for foreign operations” according to its federal mandate.

Nicholas Weaver, computer security researcher at the International Computer Science Institute in Berkeley, Calif., said the anonymity of the dark web makes this FBI hacking case more complicated.

Robert Cattanach, partner at Dorsey and Whitney LLP, based in Minneapolis, agreed that the FBI might not “know the physical location of the computer until it accesses the computer,” but said international cooperation in these investigations might be possible even with countries like Russia, China and Iran.

“In an area like child pornography one would not expect a lot of friction, even from these kinds of countries, but in the delicate world of international relations the best result one could often hope for is for the foreign government to ‘look the other way’ as we accessed computers to build a case against a U.S. resident,” Cattanach told SearchSecurity. “Of course, we might also be willing to ‘look the other way’ in similar situations. If a foreign power is pressing the envelope to gain access to national security or confidential business information, however, that would be an entirely different story.”

Network redundancy design does not always equal resiliency

Network redundancy design isn’t everything, according to Ivan Pepelnjak, who tackles the subject of whether redundancy equals resiliency in an IPSpace post. His conclusion: Full redundancy doesn’t necessarily result in greater resiliency, but network redundancy design can help decrease the probability of a failure occurring.

Many companies have adopted site reliability engineers, a term Pepelnjak suggests is becoming watered down. In some cases, these engineers trigger unanticipated failures — either manually or automatically — through mistaken actions intended to shore up redundancy. What’s more, statistics suggest that added redundancy decreases availability during “gray failure” events, when components’ performance may be only subtly degraded.
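The counterintuitive effect of redundancy during gray failures can be sketched with a toy probability model. The independence assumption and the numbers are mine, not from Pepelnjak's post:

```python
# Toy model: with clean, detectable failures, extra replicas raise the
# chance that at least one healthy copy is serving. With undetected
# "gray" degradation, extra replicas raise the chance that some traffic
# lands on a subtly broken copy that health checks still pass.

def clean_availability(p_up, replicas):
    """Probability at least one replica is up, assuming failures are
    detected and traffic is routed around them."""
    return 1 - (1 - p_up) ** replicas

def gray_exposure(p_gray, replicas):
    """Probability at least one replica is in an undetected degraded
    state; this grows with every replica added."""
    return 1 - (1 - p_gray) ** replicas

for n in (1, 2, 4):
    print(n, clean_availability(0.99, n), gray_exposure(0.01, n))
```

The same exponential that makes redundancy attractive for fail-stop faults works against you when degradation goes undetected, which is the asymmetry the post highlights.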

“In reality, we keep heaping layers of leaky abstractions and ever-more-convoluted kludges on top of each other until the whole thing comes crashing down resulting in days of downtime,” Pepelnjak said, adding that Vint Cerf may have said it best in a recent article, when he wrote that, when it comes to network redundancy design, “We’re facing a brittle and fragile future.”

Read more of Pepelnjak’s thoughts on network redundancy design. 

WLAN design with iBwave

Lee Badman, blogging in Wirednot, shared his assessment of the new iBwave R9 software for WLAN design. Badman identified pre-existing features from earlier versions of the software that he liked, among them 3D modeling of the WLAN environment, modeling for inclined surfaces and cloud synchronization for survey projects. The software package also includes a mobile app and a viewer that lets customers see the design team’s work without purchasing the iBwave software itself.

In the new version of the iBwave design suite, the software offers an improved user interface, the ability to define coverage exclusion zones and interoperability with software-defined radios. Badman praised the software’s smart antenna contouring, which lets users manipulate simulated access points to determine signal strength once a floor plan is known. Additionally, iBwave includes auto cable routing, a feature that maps cables virtually after a cable tray and router location are placed.

Dig deeper into Badman’s thoughts on iBwave for WLAN design.

Adding a secure management plane in the cloud

Jon Oltsik, an analyst at Enterprise Strategy Group in Milford, Mass., sees many cases of cybersecurity professionals installing management servers on their networks to avoid disruptive change. “Given the history of cybersecurity, this behavior is certainly understandable — I control what happens on my own network but have almost no oversight what takes place on Amazon Web Services, Azure, or Google Cloud Platform. Yup, there’s a lot of history and dogma here, but I believe it’s time for CISOs to reconsider,” he said.

Oltsik recommends a secure cloud-based management plane because of reduced costs, more rapid product upgrades and more rapid evolution and rollout of products. He also sees security operations and analytics platform architecture being deployed more rapidly through cloud-based management planes. To gain control, Oltsik recommends that buyers request standard documented APIs from vendors so that users have a say over when and how much data to ingest. “The benefits of moving to a cloud-based security management model speak for themselves. Given this, old school CISOs should think long and hard about maintaining the status quo,” Oltsik added.

Explore more of Oltsik’s thoughts on a secure cloud-based management plane.

How to avoid common challenges when migrating to microservices

SAN JOSE, Calif. — Migrating to microservices calls for a significant technology transformation, according to speakers at this week’s API World 2017. In this article, three microservices experts — Irakli Nadareishvili, Eric Roch and Chris Tozzi — expose and offer advice about two common technical microservices challenges.

“Understanding how to build a microservice architecture can be painful, just like any other significant transformation can,” said Nadareishvili, senior director of technology at Capital One, based in McLean, Va., in a preconference interview. He elaborated during his API World 2017 session, “Implementing microservices at Capital One.”

Nadareishvili, Roch and Tozzi offer alternatives for the two most common technical mistakes in migrating to microservices: not properly decoupling a distributed system and not right-sizing microservices. “Overcoming microservices challenges” was the title of Roch’s API World 2017 session. He is CTO of enterprise architecture for cloud provider Perficient, based in St. Louis. Tozzi is a DevOps analyst for Fixate IO in Livermore, Calif.

Technical challenge No. 1: Not loosely coupling

Irakli Nadareishvili, senior director of technology, Capital One

Rather than building a microservices architecture, Nadareishvili has seen companies creating a distributed monolith, in which the organization can’t change one service without affecting another. This goes against the grain of the microservices architecture, which should reduce coordination between systems.

In this worst-case scenario, Tozzi noted, the organization must run and manage applications that are deployed as distinct services, but in which those services remain interdependent. “For example, you might have a front-end service that only works if it can connect to a database that is configured in a certain way,” he explained. That database, in turn, expects the front-end service to be configured in a particular way. “If you’re not distributing apps into independent services, you might as well still be running a monolith.”

Chris Tozzi, DevOps analyst, Fixate IO

Nadareishvili recommended using Ben Christensen’s Rule of Twos to avoid coupling in certain scenarios as one way to avoid creating distributed monoliths. This practice calls for making sure the infrastructure supports at least two alternatives for a critical component.
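One way to read the Rule of Twos in code: hide each critical component behind an abstraction that at least two implementations satisfy, so no single provider's API leaks into callers. The class names and the message-queue example below are hypothetical, not from Christensen or Capital One:

```python
# Hedged sketch of the "Rule of Twos": two interchangeable backends
# behind one abstraction, so swapping a critical component never
# touches the calling code.

from abc import ABC, abstractmethod

class MessageQueue(ABC):
    @abstractmethod
    def publish(self, topic: str, payload: str) -> None: ...

class KafkaQueue(MessageQueue):
    def publish(self, topic, payload):
        print(f"kafka:{topic}:{payload}")  # stand-in for a real client call

class InMemoryQueue(MessageQueue):
    def __init__(self):
        self.messages = []
    def publish(self, topic, payload):
        self.messages.append((topic, payload))

def notify(queue: MessageQueue, order_id: str):
    # Callers depend only on the abstraction, so either backend works.
    queue.publish("orders", order_id)

q = InMemoryQueue()
notify(q, "order-42")
print(q.messages)  # [('orders', 'order-42')]
```

Maintaining the second implementation is the cost that keeps the abstraction honest: if only one backend ever exists, its quirks tend to leak into every caller.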

Always think “loose coupling” when migrating to microservices, and don’t cling to the SOA model, Tozzi said. “The best way to avoid the distributed monolith anti-pattern is to ensure that each microservice in your application can start and run independently of others,” he said. “The configuration for each service should also be self-contained and able to be modified without affecting other services or apps.”
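Tozzi's advice that each service own its configuration and start independently might look like the following minimal sketch; the service name, environment variables and defaults are illustrative:

```python
# Hedged sketch: a service whose configuration is self-contained and
# which starts even when an optional dependency is absent, degrading
# a feature instead of crashing at boot.

import os

class CatalogService:
    def __init__(self, env=os.environ):
        # Defaults live with the service, not in a shared config store
        # that another team controls.
        self.db_url = env.get("CATALOG_DB_URL", "sqlite:///catalog.db")
        self.cache_url = env.get("CATALOG_CACHE_URL")  # optional dependency

    def start(self):
        # A missing cache only disables a feature; the service still runs.
        return {
            "db": self.db_url,
            "cache": "enabled" if self.cache_url else "disabled",
        }

svc = CatalogService(env={})
print(svc.start())  # {'db': 'sqlite:///catalog.db', 'cache': 'disabled'}
```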

Microservices challenge No. 2: Right-sizing

In his work with DevOps teams, Roch said he sees them struggling with how to right-size microservices. They’re often not sure of how micro a microservice should be.

One option, Roch said, is using the domain-driven design (DDD) bounded contexts as the sizing guide. There are, however, two major problems with this approach:

  • Sizing microservices with DDD only works at the early stages, because there is no single perfect size: the right size of a microservice is a function of time, and it decreases as the system matures. “As teams mature in their practice of microservices and the understanding of their problem domain, larger microservices usually split and overall granularity increases,” Roch explained.
  • Even at the early stages, using formal DDD is hard. It requires some expertise and experience. “I’ve met many more teams that were aware of DDD than teams that were actually practicing it,” Roch said. “And I believe the reason is complexity of formal [and] conventional DDD analysis.”

If you’re not distributing apps into independent services, you might as well still be running a monolith.
Chris Tozzi, DevOps analyst, Fixate IO

Roch recommended using bounded contexts for the initial sizing of microservices. Beyond that, he said, use a group modeling technique, event storming, for discovering them instead of more traditional DDD approaches. “I am a big fan of event storming both for microservices, as well as in general for understanding a domain,” Roch said. He recommended it for any style of architecture. “It brings unparalleled understanding of the problem domain shared between product owners and tech teams.”

One tricky part of using event storming is that the final artifact — very long strips of paper with hundreds of “stickies” on them, Roch said — is impossible to capture in any reasonable way. When his team goes through an event-storming exercise, he uses the Microservices Design Canvas from API Academy to capture the final design. This tool identifies the desired attributes of a service before it is developed. “These two tools in combination work really well for us,” he said.

Microservices challenges bottom line: Reduce coordination

Always pay attention to what causes coordination in your system, the experts said. Finding and removing those couplings will speed the migration to microservices. “Microservice architecture is primarily about reducing coordination needs,” Roch concluded. “Everything else is largely secondary.”

Calling ‘all aboard’ on the six-month Java release train

For more than 20 years, according to Mark Reinhold, “the Java SE Platform and the JDK have evolved in large, irregular, and somewhat unpredictable steps.” But in a blog post on Sept. 6, Reinhold, chief architect of the Java platform group at Oracle, proposed changes to the way future Java Development Kit releases will happen.

Historically, every new Java release has been feature-driven. Each release proposes milestones and Java release dates, but those dates tend to get pushed back if there are issues incorporating certain features. Security issues and the incorporation of Project Lambda delayed the version 8 release, and ongoing disagreements about Project Jigsaw will mean a delay in the version 9 Java release.

Java release cynicism

Given a history of having release dates slip by, some sarcastically said the most impressive feature of Java 7 was the fact it was released on time. With features driving the release, dates become somewhat malleable. But all of that is about to change.

“Taking inspiration from the release models used by other platforms and by various operating-system distributions, I propose that after Java 9 we adopt a strict, time-based model with a new feature release every six months, update releases every quarter, and a long-term support release every three years,” Reinhold wrote in his Moving Java Forward Faster blog post.

The Java release train

I think the biggest part of the challenge is sticking to the plan.
Gil Tene, CTO, Azul Systems

While this approach is new to Java, it’s a fairly common release-management model elsewhere in the software development world. Known as a release train, the idea is that the train always leaves on time, regardless of whether a planned feature is onboard. If the feature makes it, that’s great; if not, it will have to wait for the next train in another six months.

So how feasible is regularly scheduling a Java release train? “I do think there’s a challenge here,” said Gil Tene, CTO at Azul Systems Inc., based in Sunnyvale, Calif. “But I think the biggest part of the challenge is sticking to the plan. It would require the discipline to say, ‘We are not holding back a release for any feature.’ Then, you can stick to it.”
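The discipline Tene describes is easy to express in code: under a strict time-based model, the next release date is a pure function of the calendar, never of feature readiness. In this small sketch, the anchor date and six-month interval are illustrative, not Oracle's published schedule:

```python
# Sketch of a release train: the next feature release is computed from
# a fixed cadence; no function here even accepts a feature list.

from datetime import date

def next_release(today, anchor=date(2018, 3, 20), interval_months=6):
    """Return the first cadence date on or after `today`."""
    y, m = anchor.year, anchor.month
    while date(y, m, anchor.day) < today:
        m += interval_months
        y, m = y + (m - 1) // 12, (m - 1) % 12 + 1
    return date(y, m, anchor.day)

print(next_release(date(2018, 4, 1)))  # 2018-09-20
```

An unfinished feature simply isn't an input: the only way to ship it is to catch a later train, which is exactly the discipline Tene says will be hardest to stick to.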

The next long-term support Java release?

As a whole, the community seems to be receiving the proposed changes to the Java release cycle positively, but there are concerns — one of which pertains to whether or not Java 9 will be a long-term support release. That puts organizations looking to go into production with a Java 9 JDK in a bit of a predicament.

To learn more about the announcement, along with some of the potential challenges organizations might face if the Java release train becomes the new normal, listen to the accompanying podcast in which Cameron McKenzie speaks with Azul’s Gil Tene about the effect a six-month Java release cycle will have on organizations.

For Sale – Seagate external 500 GB hard drive

Seagate External Hard Drive, 500 GB
Very good condition with no errors according to my iMac!

Open to sensible offers as I have no need for it!


Price and currency: 35
Delivery: Delivery cost is included within my country
Payment method: BACS PPG
Location: Bolton
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference


Cybersecurity machine learning moves ahead with vendor push

Cybersecurity machine learning is growing in popularity, according to Jon Oltsik, an analyst with Enterprise Strategy Group Inc. in Milford, Mass. Oltsik attended the recent Black Hat conference, where technology vendors were abuzz with talk of cybersecurity machine learning.

ESG research surveyed 412 respondents on their understanding of artificial intelligence (AI) and cybersecurity machine learning; only 30% said they were very knowledgeable on the subject, and only 12% said their organizations had deployed these systems widely.

According to Oltsik, the cybersecurity industry sees an opportunity, because only 6% of respondents said their organizations were not considering AI or machine learning deployments. He said companies will need to educate the market, identify use cases, work with existing technologies and provide good support.

“I find machine learning [and] AI technology extremely cool, but no one is buying technology for technology’s sake. The best tools will help CISOs improve security efficacy, operational efficiency, and business enablement,” Oltsik wrote.

Read more of Oltsik’s thoughts on cybersecurity machine learning.

Microsoft leverages Kubernetes backing for containers

Microsoft is positioning itself to fight back against the success of Amazon Web Services, according to Charlotte Dunlap, an analyst with Current Analysis in Sterling, Va.

The company launched a new container service and joined the Cloud Native Computing Foundation (CNCF) amidst earnings reports indicating that its Azure platform is outcompeting Salesforce and other providers. Microsoft unveiled a preview of its Azure Container Instances service in a bid to support developers who want to avoid the complexities of virtual machine management.

Dunlap said the announcement is significant because companies are still reluctant to deploy next-generation technologies incorporating containers and microservices, despite their advantages. In particular, Dunlap said providers should focus on explaining the cost-benefit ratios associated with refactoring departmental apps into containers.

Meanwhile, by joining the CNCF, Microsoft is “shunning” Amazon in the enterprise cloud market. “Expect to see a lot more platform service rollouts involving containers, microservices, etc., later this year during fall conferences in which cloud rivals continue to attempt to one-up one another,” Dunlap wrote.

Dig deeper into Dunlap’s thoughts on Microsoft’s support for containers.

SIEM for threat detection

Anton Chuvakin, an analyst with Gartner, said security information and event management, or SIEM, is not the best threat detection technology on its own. Based on conversations through Twitter, Chuvakin learned that many network professionals view SIEM as a compliance technology. Chuvakin said he sees these individuals as taking a viewpoint nearly 10 years out of date or perhaps struggling with bad experiences from failed SIEM implementations in the past.

Chuvakin said he uses SIEM for much of his threat detection work, but relies almost equally on log and traffic analysis, as well as endpoint visibility tools. In his view, threat detection that focuses too heavily on the network and endpoints suffers serious gaps unless it is coupled with log monitoring.

“Based on this logic, log analysis (perhaps using SIEM … or not) is indeed ‘best’ beginner threat detection. On top of this, SIEM will help you centralize and organize your other alerts,” Chuvakin wrote.
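Chuvakin's "beginner threat detection" via log analysis can be sketched in a few lines. The log format, regex and threshold below are illustrative assumptions, not a production SIEM rule:

```python
# Minimal log-analysis detection: flag source IPs with repeated failed
# logins, the kind of correlation a SIEM centralizes at scale.

import re
from collections import Counter

FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

def suspicious_ips(log_lines, threshold=3):
    """Return the set of IPs with at least `threshold` failed logins."""
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return {ip for ip, n in hits.items() if n >= threshold}

log = [
    "sshd: Failed password for root from 203.0.113.9 port 4242",
    "sshd: Failed password for admin from 203.0.113.9 port 4243",
    "sshd: Failed password for root from 203.0.113.9 port 4244",
    "sshd: Accepted password for alice from 198.51.100.7 port 5022",
]
print(suspicious_ips(log))  # {'203.0.113.9'}
```

A SIEM's value, per Chuvakin, is doing this kind of correlation across many log sources at once and centralizing the resulting alerts.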

Explore more of Chuvakin’s thoughts on SIEM.