Tag Archives: support

1150 Motherboard

Hi,
I was wondering if any of you had an LGA 1150 motherboard for sale that would support my i7-4770.

Budget: as cheap as possible

Location: Bournemouth

______________________________________________________
This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should…

CIOs need an AI infrastructure, but it won’t come easy

CIOs are starting to rethink the infrastructure stack required to support artificial intelligence technologies, according to experts at the Deep Learning Summit in San Francisco. In the past, enterprise architectures coalesced around efficient technology stacks for business processes supported by mainframes, then by minicomputers, client servers, the internet and now cloud computing. But every level of infrastructure is now up for grabs in the rush to take advantage of AI.

“There were well-defined winners that became the default stack around questions like how to run Oracle and what PDP was used for,” said Ashmeet Sidana, founder and managing partner of Engineering Capital, referring to the Programmed Data Processor, an older model of minicomputer.

“Now, for the first time, we are seeing that every layer of that stack is up for grabs, from the CPU and GPU all the way up to which frameworks should be used and where to get data from,” said Sidana, who serves as chief engineer of the venture capital firm, based in Menlo Park, Calif.

The stakes are high for building an AI infrastructure — startups, as well as legacy enterprises, could achieve huge advantages by innovating at every level of this emerging stack for AI, according to speakers at the conference.

But the job won’t be easy for CIOs faced with a fast-evolving field where the vendor pecking order is not yet settled, and their technology decisions will have a dramatic impact on software development. An AI infrastructure demands a new development model, one that is statistical rather than deterministic. On the vendor front, Google’s TensorFlow technology has emerged as an early winner, but it faces production and customization challenges. Making matters more complicated, CIOs also must decide whether to deploy AI infrastructure on private hardware or in the cloud.

New skills required for AI infrastructure

Ashmeet Sidana, chief engineer at Engineering Capital

Traditional application development approaches build deterministic apps with well-defined best practices. But AI involves an inherently statistical process. “There is a discomfort in moving from one realm to the other,” Sidana said. Acknowledging this shift and understanding its ramifications will be critical to bringing the enterprise into the machine learning and AI space, he said. 

The biggest ramification is also AI’s dirty little secret: The types of AI that will prove most useful to the enterprise, machine learning and especially deep learning approaches, work great only with great data — both quantity and quality. With algorithms becoming more commoditized, what used to be AI’s major rate-limiting feature — the complexity of developing the software algorithms — is being supplanted by a new hurdle: the complexity of data preparation. “When we have perfect AI algorithms, all the software engineers will become data-preparation engineers,” Sidana said.
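Sidana’s point can be made concrete with a toy pipeline: the “model” is a one-liner, while the data preparation dominates the code. This is a pure-Python sketch with invented records and cleaning rules, not anyone’s production pipeline:

```python
# Sketch: in many ML pipelines, data preparation dwarfs the modeling code.
# All records, field names, and rules here are invented for illustration.

RAW_RECORDS = [
    {"temp": "21.5C", "humidity": "40%", "label": "ok"},
    {"temp": "70.7F", "humidity": None, "label": "ok"},       # missing humidity
    {"temp": "bad",   "humidity": "55%", "label": "fault"},   # unparseable temp
    {"temp": "19C",   "humidity": "48 %", "label": "fault"},  # stray space
]

def parse_temp(raw):
    """Normalize temperature strings to Celsius; return None if unparseable."""
    raw = raw.strip().upper()
    try:
        if raw.endswith("C"):
            return float(raw[:-1])
        if raw.endswith("F"):
            return (float(raw[:-1]) - 32) * 5 / 9
    except ValueError:
        return None
    return None

def parse_humidity(raw):
    """Normalize humidity strings like '40%' to a float; None if missing."""
    if raw is None:
        return None
    return float(raw.replace("%", "").strip())

def prepare(records):
    """The 'data-preparation engineer' work: clean, normalize, drop bad rows."""
    cleaned = []
    for rec in records:
        temp = parse_temp(rec["temp"])
        hum = parse_humidity(rec["humidity"])
        if temp is None:          # drop rows we cannot repair
            continue
        if hum is None:           # impute missing humidity with a default
            hum = 50.0
        cleaned.append({"temp": temp, "humidity": hum, "label": rec["label"]})
    return cleaned

# The 'model' is a one-liner by comparison: a majority-class baseline.
def train_majority_label(rows):
    labels = [r["label"] for r in rows]
    return max(set(labels), key=labels.count)

clean = prepare(RAW_RECORDS)
model_label = train_majority_label(clean)
```

Even in this trivial case, three of four functions exist purely to wrangle the data into shape before a single line of modeling runs.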

Then, there are the all-important platform questions that need to be settled. In theory, CIOs can deploy AI workloads anywhere in the cloud, as providers like Amazon, Google and Microsoft, to name just a few, offer almost bare-metal GPU machines for the most demanding problems. But conference speakers stressed that, in reality, CIOs must carefully analyze their needs and objectives before making a decision.

TensorFlow examined

There are a number of deep learning frameworks, but most are focused on academic research. Google’s TensorFlow is perhaps the most mature framework from a production standpoint, but it still has limitations, AI experts noted at the conference.

Eli David, CTO at Deep Instinct

Eli David, CTO of Deep Instinct, a startup based in Tel Aviv that applies deep learning to cybersecurity, said TensorFlow is a good choice when implementing specific kinds of well-defined workloads like image recognition or speech recognition.

But he cautioned it requires heavy customization for seemingly simple changes like analyzing circular, rather than rectangular, images. “You can do high-level things with the building blocks, but the moment you want to do something a bit different, you cannot do that easily,” David said.

The machine learning platform Deep Instinct built to improve the detection of cyberthreats by analyzing infrastructure data, for example, was designed to ingest a number of data types that TensorFlow and existing cloud AI services are not well-suited to handle. As a result, the company built its own deep learning systems on private infrastructure rather than running them in the cloud.

“I talk to many CIOs that do machine learning in a lab, but have problems in production, because of the inherent inefficiencies in TensorFlow,” David said. His team also ran into production issues implementing TensorFlow-based deep learning inference on devices with limited memory, where the framework’s dependencies on external libraries become a burden. As more deep learning frameworks are designed for production, rather than just for research environments, he said he expects providers to address these issues.

Separate training from deployment

It is also important for CIOs to make a separation between training and deployment of deep learning algorithms, said Evan Sparks, CEO of San Francisco-based Determined AI, a service for training and deploying deep learning models. The training side often benefits from the latest and fastest GPUs.  Deployments are another matter. “I pushed back on the assumption that deep learning training has to happen in the cloud. A lot of people we talk to eventually realize that cloud GPUs are five to 10 times more expensive than buying them on premise,” Sparks said.
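Sparks’ cost claim is easy to sanity-check with back-of-envelope arithmetic. The prices and utilization figures below are invented placeholders, not real cloud or hardware quotes; plug in actual numbers to get a meaningful break-even point:

```python
# Back-of-envelope break-even for cloud vs. on-premises GPU training.
# All figures are hypothetical placeholders for illustration only.

onprem_gpu_cost = 9000.0            # one-time purchase price per GPU
cloud_gpu_hourly = 2.50             # cloud rate per GPU-hour
training_hours_per_month = 500      # expected utilization per GPU

monthly_cloud_cost = cloud_gpu_hourly * training_hours_per_month
breakeven_months = onprem_gpu_cost / monthly_cloud_cost

# At these (invented) rates, the purchase pays for itself in a few months
# of sustained training load, which is the dynamic Sparks describes.
```

The point of the exercise: sustained, heavy training workloads favor owned hardware, while bursty or occasional training favors the cloud.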

Deployment targets can include web services, mobile devices or autonomous cars. The latter can have critical power, processing-efficiency and latency constraints, and may not be able to depend on a network. “I think when you see friction when moving from research to deployment, it is as much about the researchers not designing for deployment as limitations in the tools,” Sparks said.

Private Slack shared channels look to boost security, admin controls

Slack is expanding support for external collaboration with a beta release of private shared channels, which should allow separate organizations to communicate more securely across Slack workspaces. Slack announced a beta of public shared channels in September, and earlier this month introduced private Slack shared channels for conversations that could include sensitive or classified information.

The shared channels feature will become more important as large enterprises look to improve the adoption of social tools, Constellation Research analyst Alan Lepofsky wrote in a blog post. Lepofsky said private shared channels will be a more common use case than public shared channels because most cross-organizational communications are better suited to a limited audience.

To access private Slack shared channels, users need to be invited to view or join the channel, and any content shared in the channel won’t appear in search results to non-members.
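That visibility rule can be sketched as a toy model: content in a private channel is excluded from search for anyone who isn’t a member. This illustrates the behavior described above and is not Slack’s implementation; all channel and user names are invented:

```python
# Toy model of private-shared-channel visibility: messages in a private
# channel are hidden from search for non-members. Not Slack's actual code.

channels = {
    "proj-roadmap": {"private": True,  "members": {"ana", "bo"}},
    "general":      {"private": False, "members": {"ana", "bo", "cy"}},
}

messages = [
    {"channel": "proj-roadmap", "text": "Q3 launch date moved"},
    {"channel": "general",      "text": "Q3 all-hands on Friday"},
]

def search(query, user):
    """Return matching message texts, skipping private channels
    the searching user is not a member of."""
    results = []
    for msg in messages:
        chan = channels[msg["channel"]]
        if chan["private"] and user not in chan["members"]:
            continue
        if query.lower() in msg["text"].lower():
            results.append(msg["text"])
    return results
```

Here a non-member searching “Q3” sees only the public channel’s message, while a member of the private channel sees both.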

Nemertes Research analyst Irwin Lazar said his firm’s upcoming unified communications and collaboration study found that nearly 20% of organizations plan to use or are already using team collaboration apps for external communication with partners, suppliers and customers — an increase from last year’s study.

“There are still issues to overcome, like whether or not the external participant needs to archive the conversations,” Lazar said.

Slack’s private shared channels allow organizations to collaborate with external users.

Management options for secure Slack shared channels

Private Slack shared channels offer IT management options to protect information. Admins can choose whether a specific shared channel is private or public in their respective workspace. Channels can also be designated private or public on both ends or private on one end and public on the other.

Admins can view the external workspaces their organization is connected to, create new shared channels, view pending shared channel invites and stop sharing any or all shared channels. However, admins cannot view names or content of any private shared channel of which they are not a member.

The private shared channel beta is currently available to teams on the Standard and Plus plans. Support for Enterprise Grid is expected soon, Slack said.

External collaboration still in a silo

While the beta boosts external collaboration for Slack users, it doesn’t address the need for interoperability among team collaboration apps.

“Until social networking supports cross-product communication, communication with people that use different products will remain a challenge,” Lepofsky said.

Lazar said IT leaders have expressed concern over app overload. Because of the lack of interoperability, users often juggle multiple team collaboration apps to meet their external collaboration needs.

“This is common in the consumer space, where people routinely use multiple text and social apps for communication, but it creates governance and compliance headaches within enterprises,” Lazar said.

Wanted – (Ryzen) B350 Motherboard & Mid-tower case

Looking for a Ryzen motherboard that supports overclocking, and a mid-tower case for the build.

Preferably the likes of NZXT and Fractal for the case, but open to offers!

Approx. Budget:

£40~ for mobo
£40~ for case

Location: North Lincolnshire


SAP defends S/4HANA HCM upgrade amid questions

SAP’s new plan to extend on-premises ERP human capital management support from 2025 to 2030 is raising eyebrows. That’s because maintaining on-premises systems means migrating to S/4HANA HCM, which complicates the decision-making for on-premises users.

The upgrade won’t be available until 2023, and the S/4HANA HCM license only runs until “at least” 2030. That wording leaves SAP room to extend support further, but for now, users don’t know whether it will.

SAP said it has about 14,000 on-premises HCM customers, with about 80% outside of North America.

SAP believes the S/4HANA HCM migration is best for these users, arguing that the upgrade cost will be offset by the benefits of the in-memory system.

But cloud-based SAP SuccessFactors remains the ultimate HCM upgrade path for SAP users, which leaves uncertainty about how long on-premises systems will continue.

“The important thing is that the intention for this is to make life easier for our customers so that they can run their business in the best possible way,” said Amy Wilson, an SAP product manager, in an interview. “It’s not something that we’ve concocted for any other purpose other than this is what our customers are asking for. We care about them, and we empathize with them,” she said.

But there are questions. “Simply extending support for the existing R/3 SAP HCM would probably be very difficult, especially as support has already been extended from 2020 to 2025,” said Paul Cooper, chairman of the U.K. and Ireland SAP User Group, in a statement in response to questions from TechTarget.

“At this stage, it is far enough out for SAP to be encouraging users to move to S/4, rather than extending the deadline further,” Cooper said.

User-group surveys “highlighted that many users have concerns regarding the integration of legacy [SAP HR] and new apps [SuccessFactors] with S/4HANA, so the S/4 HCM announcement does begin to help address this to an extent,” Cooper said.

But Cooper said the announcement also raises issues.

“However, without seeing the product and more information on its scope, it is difficult to judge the complexity of the migration,” Cooper said. “Projects of this nature are time-consuming, resource-hungry and, therefore, they can be disruptive to an organization. The question for a lot of organizations will be, are they prepared to buy and license ‘on-premises’ software that only has a potential life of seven years and won’t be available until 2023?”

Wilson said the upgrade to S/4HANA HCM should be “nondisruptive.”


“In the on-premises world, there’s technical upgrades and then there’s functional upgrades. When you’re talking about a technical upgrade, there is some work to do and some testing and that sort of thing, but it’s more similar to a cloud update,” Wilson said.

SAP’s plan has drawn sharp criticism from Jarret Pazahanick, an SAP HCM consultant who is managing partner of EIC Experts, based in Houston, and who has a large following in his Global SAP and SuccessFactors LinkedIn groups.

Pazahanick said he believes SAP should have extended its support beyond 2025 for its HCM customers.

SAP’s action is “not customer-centric,” Pazahanick said in an email responding to TechTarget questions.

“They are asking loyal maintenance paying customers to wait at least five years [2023] to upgrade to a new SAP HCM on-premises ‘sidecar’ offering and take full ownership of the migration cost and risk associated with that,” Pazahanick said.

There is no native HCM in S/4HANA, hence the “sidecar” label: SAP HCM for S/4HANA will run as a separate instance, tightly integrated with S/4HANA. SAP said it has the tools and services to support this approach.

“All SAP had to do was make an announcement of, ‘We will support SAP HCM and customers until 2030 on their current offering,’ which would have been simple, clear and, ultimately, what some portion of their customer base wants,” Pazahanick said.

Kubernetes networking expands its horizons with service mesh

Enterprise IT operations pros who support microservices face a thorny challenge with Kubernetes networking, but service mesh architectures could help address their concerns.

Kubernetes networking under traditional methods faces performance bottlenecks. Centralized network resources must handle an order of magnitude more connections once an organization migrates from VMs to containers. Because containers appear and disappear much more frequently, managing those connections at scale can quickly create confusion on the network, and stale information inside network management resources can even misdirect traffic.

IT pros at KubeCon this month got a glimpse at how early adopters of microservices have approached Kubernetes networking issues with service mesh architectures. These network setups are built around sidecar containers, which act as a proxy for application containers on internal networks. Such proxies offload networking functions from application containers and offer a reliable way to track and apply network security policies to ephemeral resources from a centralized management interface.

Proxies in a service mesh handle the one-off connections between microservices better than traditional networking models can. Service mesh proxies also tap telemetry information that IT admins can’t get from other Kubernetes networking approaches, such as transmission success rates, latencies and traffic volume on a container-by-container basis.
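The telemetry role described above can be sketched as a small class that records per-service success rates and latencies, in the spirit of what a sidecar proxy exposes. This is a simplified illustration, not Envoy or Linkerd code, and all service names and numbers are invented:

```python
# Simplified sketch of the per-service telemetry a sidecar proxy collects:
# success rate, average latency, call volume. Not Envoy or Linkerd code.

from collections import defaultdict

class SidecarTelemetry:
    def __init__(self):
        # service name -> list of (succeeded, latency_ms) samples
        self.calls = defaultdict(list)

    def record(self, service, ok, latency_ms):
        """Record one proxied call's outcome and latency."""
        self.calls[service].append((ok, latency_ms))

    def success_rate(self, service):
        samples = self.calls[service]
        return sum(1 for ok, _ in samples if ok) / len(samples)

    def avg_latency_ms(self, service):
        samples = self.calls[service]
        return sum(lat for _, lat in samples) / len(samples)

telemetry = SidecarTelemetry()
telemetry.record("payments", ok=True, latency_ms=12.0)
telemetry.record("payments", ok=True, latency_ms=18.0)
telemetry.record("payments", ok=False, latency_ms=250.0)
```

Because every call flows through the proxy, this data comes for free to the operator, with no instrumentation inside the application container itself.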

“The network should be transparent to the application,” said Matt Klein, a software engineer at San Francisco-based Lyft, which developed the Envoy proxy system to address networking obstacles as the ride-sharing company moved to a microservices architecture over the last five years.

“People didn’t trust those services, and there weren’t tools that would allow people to write their business logic and not focus on all the faults that were happening in the network,” Klein said.

With a sidecar proxy in Envoy, each of Lyft’s services only had to understand its local portion of the network, and the application language no longer factored into its function. At the time, only the most demanding web applications required proxy technology such as Envoy. But now, the complexity of microservices networking makes service mesh relevant to more mainstream IT shops.

The National Center for Biotechnology Information (NCBI) in Bethesda, Md., has laid the groundwork for microservices with a service mesh built around Linkerd, which was developed by Buoyant. The bioinformatics institute used Linkerd to modernize legacy applications, some of them as much as 30 years old, said Borys Pierov, a software developer at NCBI.

Any app that uses the HTTP protocol can point to the Linkerd proxy, which gives NCBI engineers improved visibility and control over advanced routing rules in the legacy infrastructure, Pierov said. While NCBI doesn’t use Kubernetes yet (it uses HashiCorp Consul and the CoreOS rkt container runtime instead of Kubernetes and Docker), service mesh will be key to container networking on any platform.

“Linkerd gave us a look behind the scenes of our apps and an idea of how to split them into microservices,” Pierov said. “Some things were deployed in strange ways, and microservices will change those deployments, including the service mesh that moves us to a more modern infrastructure.”

Matt Klein, software engineer at Lyft, presents the company’s experiences with service mesh architectures at KubeCon.

Kubernetes networking will cozy up with service mesh next year

Linkerd is one of the best-known and most widely used of the multiple open source service mesh projects in various stages of development. However, Envoy has gained prominence because it underpins Istio, a fresh approach to the centralized management layer. This month, Buoyant also introduced Conduit, a better-performing, more efficient successor to Linkerd.


It’s still too early for any of these projects to be declared the winner. The Cloud Native Computing Foundation (CNCF) invited Istio’s developers, which include IBM, Microsoft and Lyft, to make Istio a CNCF project, CNCF COO Chris Aniszczyk said at KubeCon. But Buoyant also will formally present Conduit to the CNCF next year, and multiple projects could coexist within the foundation, Aniszczyk said.

Kubernetes networking challenges led Gannett’s USA Today Network to create its own “terrible, over-orchestrated” service mesh-like system, in the words of Ronald Lipke, senior engineer on the USA Today platform-as-a-service team, who presented on the organization’s Kubernetes experience at KubeCon. HAProxy and the Calico network management system have supported Kubernetes networking in production so far, but under this system there have been problems with terminating nodes cleanly and removing them from Calico quickly enough that traffic isn’t misdirected.

Lipke likes the service mesh approach, but it’s not yet a top priority for his team at this early stage of Kubernetes deployment. “No one’s really asking for it yet, so it’s taken a back seat,” he said.

This will change in the new year. The company plans to rethink the HAProxy approach to reduce its cloud resource costs and improve network tracing for monitoring purposes. It has done proof-of-concept evaluations of Linkerd and plans to look at Conduit, Lipke said in an interview after his KubeCon session.

Beth Pariseau is senior news writer for TechTarget’s Data Center and Virtualization Media Group. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.

VMware NSX-T updates include new firewalls, load balancing

VMware made significant changes to NSX-T 2.0, released in November, adding native support for microsegmentation and containers, and will follow up shortly with NSX-T 2.1, expected by February 2018.

VMware brings many different but extremely useful tools to the table in these NSX-T releases. From a large-infrastructure point of view, they provide more features and flexibility to design the network the way administrators and security teams want it designed.

There are two versions of NSX: NSX for vSphere and NSX-T. NSX for vSphere is more widely deployed; NSX-T focuses on multi-hypervisor and cloud-native environments. In this article, we’ll look at what’s different between NSX-T 1.0 and 2.1.

NSX-T updates ease firewall management

The major news in VMware NSX-T’s latest versions is support for microsegmentation. Microsegmentation is a big deal because it provides a new security paradigm for the cloud and large-scale environments.

Historically, firewalls were mostly a north-south proposition, governing inbound and outbound traffic between the data center and the rest of the internet or network. With NSX, firewall rules can be applied to individual VMs, groups of VMs and many other scenarios that were once difficult or impossible.

These rules also follow the VM as it moves around. IT departments typically devote about 8% of their budgets to perimeter security, which produces a hard exterior shell around a soft interior, and that interior is usually the soft spot attackers exploit. Microsegmentation helps harden it, and that changes the game. No other cloud vendor has anything like this.
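The idea of rules that follow the VM can be illustrated with a toy model in which firewall rules match on VM tags rather than IP addresses, so a rule keeps applying wherever the VM lands. This is a sketch of the concept, not NSX’s actual rule model; all tags and ports are invented:

```python
# Toy illustration of microsegmentation: east-west firewall rules keyed to
# VM identity (tags), not IPs, so policy follows the VM as it moves.
# Not NSX's actual rule model; tags and ports are invented.

rules = [
    {"src_tag": "web", "dst_tag": "app", "port": 8443, "allow": True},
    {"src_tag": "web", "dst_tag": "db",  "port": 5432, "allow": False},
    {"src_tag": "app", "dst_tag": "db",  "port": 5432, "allow": True},
]

def allowed(src_tags, dst_tags, port):
    """East-west check: first matching rule wins; default deny."""
    for r in rules:
        if r["src_tag"] in src_tags and r["dst_tag"] in dst_tags and r["port"] == port:
            return r["allow"]
    return False
```

Because the check keys on tags, a VM that vMotions to another host or changes IP keeps exactly the same policy, which is the point of rules that “follow the VM.”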


Now that everything is automated, you can easily implement firewall rules and manage them all centrally. Alongside this new firewall is distributed network encryption between VMs and containers (when all items are within the same virtual domain, of course). Again, this functionality helps stop things like network eavesdropping by undesirables.

Complexity is without a doubt the overriding issue: manually managing firewall rules for VMs in a massive environment becomes complex, if not unfeasible. With NSX, east-west traffic and its associated management can be handled far more easily. Implementing firewalls at the VM level used to be quite complex, but not anymore.

VMware adds container support

Other big news in VMware NSX-T 2.x is native support for containers. This was a critical addition, given how thoroughly Docker-based infrastructure has come to dominate containerization.

Along with VMware doubling down on BOSH/Pivotal as an orchestration platform, version 2.1 supports both Pivotal Cloud Foundry and Pivotal Container Service.

Extend on premises to the cloud

These developments feed into NSX Cloud, one of the VMware Cloud Services the company rolled out at VMworld in August 2017. NSX Cloud provides consistent networking and security for applications running in multiple private and public clouds via a single management console and common API. This is a service no one else offers: it expands the NSX domain beyond the local network into major cloud providers, extending on-premises infrastructure into the cloud. AWS is already supported, and Azure support is on the roadmap. NSX Cloud also brings functionality such as discovery.

Added content packs ease troubleshooting

Alongside this is the inclusion of Log Insight, which, as the name suggests, collects and logs key information from the NSX environment. “Great,” you might say. “So what?” Content packs are the answer: add-ins for Log Insight that help drill down into and troubleshoot problems within the NSX environment. Don’t forget that we are talking about your network here; it may be virtual, but it’s still critical.

New VMware NSX-T load balancing feature

Finally, one major feature that arrived in 2.1 is NSX load balancing. Over time, many more features will clearly be added to help NSX reach or exceed feature parity with other software load balancers.
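At its simplest, a software load balancer just spreads requests across a pool of backends. The round-robin sketch below is a generic illustration of that baseline behavior, not NSX-T’s implementation; the addresses are invented:

```python
# Minimal round-robin load balancer sketch: the simplest policy a software
# load balancer offers. Generic illustration, not NSX-T code.

from itertools import cycle

class RoundRobinLB:
    def __init__(self, backends):
        self._pool = cycle(backends)  # endless rotation over the pool

    def pick(self):
        """Return the next backend in rotation."""
        return next(self._pool)

lb = RoundRobinLB(["10.0.0.11", "10.0.0.12", "10.0.0.13"])
first_four = [lb.pick() for _ in range(4)]  # wraps back to the first backend
```

Feature parity with mature software load balancers then means layering on health checks, weighted pools, session persistence and TLS termination on top of this core.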

What makes it even better is that VMware is very much pushing an API-first environment; infrastructure as code is where it’s at. The 2.0/2.1 API has been heavily reworked, making features easier to consume and access.