
For Sale – *NEW* ASUS P6X58D-E mobo, i7 960 (3.2GHz) CPU & 24GB Memory (6x4GB 1333)

Wow, this sold far quicker than I thought; something tells me I undervalued this lot.

Yes of course it includes the Intel CPU heatsink & fan along with the rear I/O shield.

It’s been a while since I sold anything here. Can I take tmknight’s offer for the asking price, or am I breaking any forum rules by doing this?

Thinking about this logically, I should be asking ‘gamesaregood’ if he wants to match tmknight’s offer, since he did make an offer first.

Of course I’m happy to receive any higher offers, but I am hedging my bets here.

Thanks.


How to fortify your virtualized Active Directory design

Active Directory is much more than a simple server role. It has become the single sign-on source for most, if not all, of your data center applications and services. This access control covers workstation logins and extends to clouds and cloud services.

Since AD is such a key part of many organizations, it is critical that it is always available and has the resiliency and durability to match business needs. Microsoft had enough foresight to set up AD as a distributed platform that can continue to function — with little or, in some cases, no interruption in services — even if parts of the system go offline. This was helpful when AD nodes were still physical servers that were often spread across multiple racks or data centers to avoid downtime. So, the question now becomes, what’s the right way to virtualize Active Directory design?

Don’t defeat the native AD distributed abilities

Active Directory is a distributed platform, and a careless virtualization design can hinder that native distributed functionality. AD nodes can be placed on different hosts, and failover software will restart VMs if a host crashes, but what if your primary storage goes down? It’s one scenario you should not discount.

When you undertake the Active Directory design process for a virtualization platform, you must go beyond just a host failure and look at common infrastructure outages that can take out critical systems. One of the advantages of separate physical servers was the level of resiliency the arrangement provided. While we don’t want to abandon virtual servers, we must understand the limits and concerns associated with them and consider additional areas such as management clusters.

Management clusters are often slightly lower tier platforms — normally still virtualized — that only contain management servers, applications and infrastructure. This is where you would want to place a few AD nodes, so they are outside of the production environment they manage. The caveat with a virtualized management cluster is that it should not sit on the same physical storage as production; sharing storage defeats the purpose of the separation of duties. You can use more cost-effective storage platforms such as a virtual storage area network for shared storage or even local storage.

Remember, this is infrastructure and not core production, so IOPS should not be as much of an issue because the goal is resiliency, not performance. This means local drives and RAID groups should be able to provide the IOPS required.

How to keep AD running like clockwork

One of the issues with AD controllers in a virtualized environment is time drift.

All computers have clocks and proper timekeeping is critical to both the performance and security of the entire network. Most servers and workstations get their time from AD, which helps to keep everything in sync and avoids Kerberos security login errors.

Physical AD servers usually get their time from an external time source, while virtualized AD servers typically get it from their hosts. Between checks, the AD servers keep time using the computer’s internal clock, which is based on CPU cycles.

When you virtualize a server, it no longer has a set number of CPU cycles to base its time on. That means time can drift until the server reaches out for an external time check to reset itself. But that check can be off as well, because the server cannot reliably track the passage of time between checks, which compounds the issue. Time drift can also become stuck in a nasty loop because the virtualization hosts often get their time from Active Directory.

Your environment needs an external time source that is not dependent on virtualization to keep things grounded. While internet time sources are tempting, having the infrastructure reach out for time checks might not be ideal. A core switch or other key piece of networking gear can offer a dependable time source that is unlikely to be affected by drift due to its hardware nature. You can then use this time source as the sync source for both the virtualization hosts and AD, so all systems are on the same time that comes from the same source.
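
To make the drift problem measurable, here is a minimal monitoring sketch in Python, assuming the ntplib package and a reachable internal, hardware-backed time source such as the core switch described above; the host name and warning threshold are placeholders rather than values from the article.

```python
# Minimal drift check: compare this machine's clock against an internal,
# hardware-backed NTP source (e.g., a core switch) and warn well before the
# offset approaches Kerberos' default 5-minute tolerance.
# Assumes: `pip install ntplib`; NTP_SOURCE is a hypothetical host name.

import ntplib

NTP_SOURCE = "ntp.core-switch.example.local"  # placeholder internal time source
WARN_SECONDS = 60  # warn long before the 5-minute Kerberos skew limit

def check_drift(source: str = NTP_SOURCE) -> float:
    """Return the offset in seconds between the local clock and the source."""
    client = ntplib.NTPClient()
    response = client.request(source, version=3, timeout=5)
    return response.offset

if __name__ == "__main__":
    offset = check_drift()
    print(f"Clock offset vs {NTP_SOURCE}: {offset:+.3f}s")
    if abs(offset) > WARN_SECONDS:
        print("WARNING: drift is large enough to risk Kerberos login failures")
```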

Some people will insist on a single physical server in a virtualized data center for this reason. That’s an option, but one that is not usually needed. Virtualization isn’t something to avoid in Active Directory design, but it needs to be done with thought and planning to ensure the infrastructure can support the AD configuration. Management clusters are key to the separation of AD nodes and roles.

This does not mean that high availability (HA) rules for Hyper-V or VMware environments are not required. Both production and management environments should have HA rules to prevent AD servers from running on the same hosts.

Rules should be in place to ensure these servers restart first and have reserved resources for proper operations. Smart HA rules are easy to overlook as more AD controllers are added and the rules configuration is forgotten.
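
One way to catch that oversight is a periodic sanity check. The sketch below is a rough illustration, assuming a VMware environment, the pyvmomi package and a hypothetical "DC-" naming convention for domain controller VMs; the vCenter address and credentials are placeholders. It simply reports any domain controllers that have ended up on the same host, which usually means an anti-affinity rule was never extended to a newer DC.

```python
# Rough sketch: flag AD controller VMs that share a virtualization host.
# Assumptions (not from the article): VMware/vCenter, `pip install pyvmomi`,
# DC VM names start with "DC-", and placeholder connection details below.

import ssl
from collections import defaultdict

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.local"   # placeholder
USER = "readonly@vsphere.local"     # placeholder
PWD = "changeme"                    # placeholder
DC_PREFIX = "DC-"                   # hypothetical naming convention

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    by_host = defaultdict(list)
    for vm in view.view:
        if vm.name.startswith(DC_PREFIX) and vm.runtime.host is not None:
            by_host[vm.runtime.host.name].append(vm.name)
    for host, dcs in by_host.items():
        if len(dcs) > 1:
            print(f"Anti-affinity gap: {', '.join(dcs)} are all on host {host}")
finally:
    Disconnect(si)
```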

The goal is not to prevent outages from happening — that’s not possible. It is to have enough replicas and roles of AD in the right places so users won’t notice. You might scramble a little behind the scenes if a disruption happens, but that’s part of the job. The key is to keep customers moving along without them knowing about any of the issues happening in the background.


For Sale – Custom loop water-cooled PC – i9 9900K, 2080 Ti, 32GB 3200MHz RAM, 2TB NVMe

Selling as it only seems to get used as my work machine rather than for playing games and creating content as intended.

Built by myself in November 2019, so the machine is only a few months old.

Only the best components were chosen when this was built.

Machine runs at 5GHz on all cores and the GPU never sees above 50°C.

Motherboard – ASUS Maximus Code

CPU – Intel i9 9900K with EK water block

GPU – MSI Ventus OC 2080 Ti with EK water block and nickel backplate

RAM – 32GB G.Skill Royal Silver 3200MHz

NVMe – 1TB WD Black

NVMe – 1TB Sabrent

PSU – Corsair 750 modular

EK nickel fittings

EK D5 standalone pump

Phanteks reservoir

6 Thermaltake Riing Plus fans with controllers

2 360mm x 45mm Alphacool radiators

Thermaltake acrylic tubes and liquid

Custom cables

I am based in Tadworth, Surrey, and the machine can be seen and inspected in person.


For Sale – Phanteks ITX Case & Thermaltake 730W Semi-Mod PSU – £59 delivered / 2 x Toshiba 1TB 7200 3.5″ SATA Drives – £25 delivered

The CPU is worth more than £70 IMHO. That puts it at less than the Ryzen 3 3200G, which is about £80 new, and the 2400G is a much better processor all round.

I have seen some go for that on eBay, but you take your chances on there and there is no warranty, whereas mine is covered until May 2020.

In the interest of striking up a deal I am willing to drop to £225 delivered, but delivery will cost me at least £15 insured, so that’s really all the wiggle room I have with it on this occasion.

Just to confirm, this is the same set of components I have been using for the last 8 months with no problems (I am currently using it to type this reply on), so there is no issue with compatibility.


For Sale – Phanteks Enthoo ITX (Black / Red) & Thermaltake Smart SE 730W Semi-Modular 80+ PSU – Reduced to £85 Delivered


Proofpoint: Ransomware payments made in half of U.S. attacks

Ransomware payments to cybercriminals could soon become the rule rather than the exception, according to new research from Proofpoint.

Proofpoint’s sixth annual “State of the Phish” report, released Thursday, surveyed 600 working infosec professionals across seven countries: the U.S., Australia, France, Japan, the U.K., Spain and Germany. The report showed that 33% of global organizations infected with ransomware in 2019 opted to pay the ransom. In the U.S. alone, 51% of organizations that experienced a ransomware attack decided to pay the ransom, which was the highest percentage among the seven countries surveyed.

Gretel Egan, security awareness and training strategist at Proofpoint, said she wasn’t surprised that a third of survey respondents had made ransomware payments after being attacked. While law enforcement agencies and infosec vendors have consistently urged victims not to pay ransoms, she said she understood “the lure” such payments represent, especially for healthcare or critical infrastructure organizations.

“Often you see a hospital or a medical center having to completely shut down and turn patients away because life-saving services are not available,” she said. “Those organizations, in that moment, can look at a $20,000 ransom [demand] and say ‘I can be completely back online and running my business again very quickly’ as opposed to going through a relatively lengthy process even if they’re restoring from backups, which can take weeks to be fully operational again.”

Egan said that even when organizations do make ransomware payments, there are no guarantees. According to the 2020 State of the Phish report, among the organizations that opted to pay the ransom, 22% never got access to their data and 9% were hit with additional ransomware attacks. Because this was the first time Proofpoint asked survey respondents about ransomware payments, the vendor couldn’t say whether the numbers represented an increase or decrease from 2018.

However, Egan said Proofpoint observed another concerning trend with ransomware attacks in which threat actors exfiltrate organizations’ data before encrypting it and then threaten to shame victims by making sensitive data public. “They’ll say ‘I’m going to share your information because you’re not going to pay me.’ It’s almost like doubling down on the blackmail,” Egan said. “I tell people there is no low that’s too low for [cybercriminals].”

Refusal to pay ransoms did not deter threat actors, as 2019 saw a resurgence of ransomware attacks, according to Proofpoint’s report. Last year’s State of the Phish report showed just 10% of organizations experienced a ransomware attack in 2018, compared with a whopping 65% in 2019.

“2018 was such a down year for ransomware in general, but it came storming back in 2019,” Egan said.

In addition to the survey, Proofpoint also analyzed more than 9 million suspicious emails reported by customers and an additional 50 million simulated phishing attacks sent by the vendor. Egan said the data showed phishing emails aren’t as big of a threat vector for ransomware attacks as in the past, which indicates cybercriminals are changing their strategies.

“We’re not seeing as many ransomware payloads delivered via email,” she said. “From a threat level side, infections are coming in as secondary infections. There’s a system already compromised with malware and then threat actors take advantage of first-level infiltration to then launch ransomware within the system.”

BEC on the rise

The report also found a significant rise in cybercriminals utilizing business email compromise (BEC) as a preferred attack. An alarming 86% of organizations surveyed by Proofpoint faced BEC attempts in 2019. Like ransomware payments, BEC attacks can result in millions of dollars in losses for organizations; 34% of respondents said they experienced financial losses or wire transfer fraud.

“There are many ways for attackers to benefit financially from initiating a BEC attack,” Egan said. “For example, the FBI has flagged cases of people going after W2 employee forms and using that to commit tax fraud. In many cases, BEC attacks are underreported because of the embarrassment and issue with having to admit you’ve been fooled.”

Egan said BEC attacks are typically successful because threat actors take their time and do their research, forging emails that appear innocuous to both the human eye and some email security products designed to detect such threats.

“Attacks like BEC are favorable for attackers because they don’t have malware or payload attachments. There are no dangerous links embedded in them, so it’s difficult for technical safeguards to stop and block them, particularly if you’re dealing with an account that’s been compromised,” she said. “Many of the emails are coming from a known and trusted account, or within an organization, or person-to-person from an account that’s been compromised. Attackers are switching to a more people-centric approach.”

The trend of more people-centric attacks led to 55% of organizations dealing with at least one successful phishing attack in 2019.

“Business email compromise is a longer-term kind of con,” Egan said. “Threat actors don’t launch out of the gate asking for bank routing information. They establish a relationship over time to lull someone into believing they’re a trusted email account, so the user isn’t questioning it.”

Proofpoint said security awareness training is a method that saw success in combating such threats, with 78% of organizations reporting that training resulted in measurably lower phishing susceptibility. The report emphasized the importance of understanding who is being targeted, and more importantly, the types of attacks organizations are facing and will face, to reduce social engineering threats such as BEC and spear phishing emails.


IBM expands patent troll fight with its massive IP portfolio

After claiming more than a quarter century of patent leadership, IBM has expanded its fight against patent assertion entities, also known as patent trolls, by joining the LOT Network. As a founding member of the Open Invention Network in 2005, IBM has been in the patent troll fight for nearly 15 years.

The LOT Network (short for License on Transfer) is a nonprofit community of more than 600 companies that have banded together to protect themselves against patent trolls and their lawsuits. The group says companies lose up to $80 billion per year on patent troll litigation. Patent trolls are organizations that hoard patents and bring lawsuits against companies they accuse of infringing on those patents.

IBM joins the LOT Network after its $34 billion acquisition of Red Hat, which was a founding member of the organization.

“It made sense to align IBM’s and Red Hat’s view on how to manage our patent portfolio,” said Jason McGee, vice president and CTO of IBM Cloud Platform. “We want to make sure that patents are used for their traditional purposes, and that innovation proceeds and open source developers can work without the threat of a patent litigation.”

To that end, IBM contributed more than 80,000 patents and patent applications to the LOT Network to shield those patents from patent assertion entities, or PAEs.


IBM joining the LOT Network is significant for a couple of reasons, said Charles King, principal analyst at Pund-IT in Hayward, Calif. First and foremost, with 27 years of patent leadership, IBM brings a load of patent experience and a sizable portfolio of intellectual property (IP) to the LOT Network, he said.

“IBM’s decision to join should also silence critics who decried how the company’s acquisition of Red Hat would erode and eventually end Red Hat’s long-standing leadership in open source and shared IP,” King said. “Instead, the opposite appears to have occurred, with IBM taking heed of its new business unit’s dedication to open innovation and patent stewardship.”


The LOT Network operates as a subscription service that charges members for the IP protection they provide. LOT’s subscription rates are based on company revenue. Membership is free for companies making less than $25 million annually. Companies with annual revenues between $25 million and $50 million pay $5,000 annually to LOT. Companies with revenues between $50 million and $100 million pay $10,000 annually to LOT. Companies with revenues between $100 million and $1 billion pay $15,000. And LOT caps its annual subscription rates at $20,000 for companies with revenues greater than $1 billion.
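
Restated as a quick lookup (an illustrative restatement of the tiers above, not an official LOT calculator):

```python
def lot_annual_fee(revenue_usd: float) -> int:
    """Annual LOT Network subscription fee for a given company revenue,
    per the tiers described above (illustrative restatement only)."""
    if revenue_usd < 25_000_000:
        return 0           # free below $25 million
    if revenue_usd < 50_000_000:
        return 5_000
    if revenue_usd < 100_000_000:
        return 10_000
    if revenue_usd < 1_000_000_000:
        return 15_000
    return 20_000          # capped for revenue above $1 billion

print(lot_annual_fee(750_000_000))  # -> 15000
```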

Meanwhile, the Open Invention Network (OIN) has three levels of participation: members, associate members and licensees. Participation in OIN is free, the organization said.

“One of the most powerful characteristics of the OIN community and its cross-license agreement is that the board members sign the exact same licensing agreement as the other 3,100 business participants,” said Keith Bergelt, CEO of OIN. “The cross license is royalty-free, meaning it costs nothing to join the OIN community. All an organization or business must agree to do is promise not to sue other community participants based on the Linux System Definition.”

IFI Claims Patent Services confirms that 2019 marked the 27th consecutive year in which IBM has been the leader in the patent industry, earning 9,262 U.S. patents last year. The patents reach across key technology areas such as AI, blockchain, cloud computing, quantum computing and security, McGee said.

IBM achieved more than 1,800 AI patents, including a patent for a method for teaching AI systems how to understand implications behind certain text or phrases of speech by analyzing other related content. IBM also gained patents for improving the security of blockchain networks.

In addition, IBM inventors were awarded more than 2,500 patents in cloud technology and grew the number of patents the company has in the nascent quantum computing field.

“We’re talking about new patent issues each year, not the size of our patent portfolio, because we’re focused on innovation,” McGee said. “There are lots of ways to gain and use patents; we got the most for 27 years, and I think that’s a reflection of real innovation that’s happening.”

Since 1920, IBM has received more than 140,000 U.S. patents, he noted. In 2019, more than 8,500 IBM inventors, spanning 45 different U.S. states and 54 countries, contributed to the patents awarded to IBM, McGee added.

In other patent-related news, Apple and Microsoft this week joined 35 companies who petitioned the European Union to strengthen its policy on patent trolls. The coalition of companies sent a letter to EU Commissioner for technology and industrial policy Thierry Breton seeking to make it harder for patent trolls to function in the EU.


Red Hat OpenShift Container Storage seeks to simplify Ceph

The first Red Hat OpenShift Container Storage release to use multiprotocol Ceph rather than the Gluster file system to store application data became generally available this week. The upgrade comes months after the original late-summer target date set by open source specialist Red Hat.

Red Hat — now owned by IBM — took extra time to incorporate feedback from OpenShift Container Storage (OCS) beta customers, according to Sudhir Prasad, director of product management in the company’s storage and hyper-converged business unit.

The new OCS 4.2 release includes Rook Operator-driven installation, configuration and management so developers won’t need special skills to use and manage storage services for Kubernetes-based containerized applications. They indicate the capacity they need, and OCS will provision the available storage for them, Prasad said.
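
In practice, "indicating the capacity they need" amounts to a PersistentVolumeClaim against an OCS-provided storage class. The sketch below uses the Kubernetes Python client; the namespace, claim name and storage class name (ocs-storagecluster-ceph-rbd is assumed here as the RBD class OCS creates) are illustrative assumptions, not details from the article.

```python
# Sketch: a developer asks for 50Gi of block storage from an OCS storage class.
# Namespace, claim name and storage class name are assumptions for illustration.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ocs-storagecluster-ceph-rbd",  # assumed OCS RBD class
        resources=client.V1ResourceRequirements(
            requests={"storage": "50Gi"}  # the capacity the developer indicates
        ),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="my-app", body=pvc
)
```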

Multi-cloud support

OCS 4.2 also includes multi-cloud support, through the integration of NooBaa gateway technology that Red Hat acquired in late 2018. NooBaa facilitates dynamic provisioning of object storage and gives developers consistent S3 API access regardless of the underlying infrastructure.

Prasad said applications become portable and can run anywhere, and NooBaa abstracts the storage, whether AWS S3 or any other S3-compatible cloud or on-premises object store. OCS 4.2 users can move data between cloud and on-premises systems without having to manually change configuration files, a Red Hat spokesman added.
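
Because NooBaa presents a standard S3 endpoint, application code stays the same regardless of what sits behind the gateway. A hedged boto3 sketch, with a hypothetical endpoint URL, credentials and bucket name, looks like this:

```python
# Sketch: S3-compatible access through a NooBaa gateway endpoint.
# The endpoint URL, credentials and bucket name are placeholders; the point is
# that the same call works whether the backing store is AWS S3 or on premises.

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.noobaa.example.local",  # hypothetical NooBaa route
    aws_access_key_id="NOOBAA_ACCESS_KEY",
    aws_secret_access_key="NOOBAA_SECRET_KEY",
)

s3.put_object(Bucket="app-bucket", Key="reports/daily.csv", Body=b"a,b,c\n1,2,3\n")
print(s3.get_object(Bucket="app-bucket", Key="reports/daily.csv")["Body"].read())
```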

Customers buy OCS to use with the Red Hat OpenShift Container Platform (OCP), and they can now manage and monitor the storage through the OCP console. Kubernetes-based OCP has more than 1,300 customers, and historically, about 40% to 50% attached to OpenShift Container Storage, a Red Hat spokesman said. OCS had about 400 customers in May 2019, at the time of the Red Hat Summit, according to Prasad.

One critical change for Red Hat OpenShift Container Storage customers is the switch from file-based Gluster to multiprotocol Ceph to better target data-intensive workloads such as artificial intelligence, machine learning and analytics. Prasad said Red Hat wanted to give customers a more complete platform with block, file and object storage that can scale higher than the product’s prior OpenStack S3 option. OCS 4.2 can support 5,000 persistent volumes and will support 10,000 in the upcoming 4.3 release, according to Prasad.

Migration is not simple

Although OCS 4 may offer important advantages, the migration will not be a trivial one for current customers. Red Hat provides a Cluster Application Migration tool to help them move applications and data from OCP 3/OCS 3 to OCP 4/OCS 4 at the same time. Users may need to buy new hardware, unless they can first reduce the number of nodes in their OpenShift cluster and use the nodes they free up, Prasad confirmed.

“It’s not that simple. I’ll be upfront,” Prasad said, commenting on the data migration and shift from Gluster-based OCS to Ceph-backed OCS. “You are moving from OCP 3 to OCP 4 also at the same time. It is work. There is no in-place migration.”

One reason that Red Hat put so much emphasis on usability in OCS 4.2 was to abstract away the complexity of Ceph. Prasad said Red Hat got feedback about Ceph being “kind of complicated,” so the engineering team focused on simplifying storage through the operator-driven installation, configuration and management.

“We wanted to get into that mode, just like on the cloud, where you can go and double-click on any service,” Prasad said. “That took longer than you would have expected. That was the major challenge for us.”

OpenShift Container Storage roadmap

The original OpenShift Container Storage 4.x roadmap that Red Hat laid out last May at its annual customer conference called for a beta release in June or July, OCS 4.2 general availability in August or September, and a 4.3 update in December 2019 or January 2020. Prasad said February is the new target for the OCS 4.3 release.

The OpenShift Container Platform 4.3 update became available this week, with new security capabilities such as Federal Information Processing Standard (FIPS)-compliant encryption. Red Hat eventually plans to return to its prior practice of synchronizing new OCP and OCS releases, said Irshad Raihan, the company’s director of storage product marketing.

The Red Hat OpenShift Container Storage 4.3 software will focus on giving customers greater flexibility, such as the ability to choose the type of disk they want, and additional hooks to optimize the storage. Prasad said Red Hat might need to push its previously announced bare-metal deployment support from OCS 4.3 to OCS 4.4.

OCS 4.2 supports converged-mode operation, with compute and storage running on the same node or in the same cluster. The future independent mode will let OpenShift use any storage backend that supports the Container Storage Interface. OCS software would facilitate access to the storage, whether it’s bare-metal servers, legacy systems or public cloud options.

Alternatives to Red Hat OpenShift Container Storage include software from startups Portworx, StorageOS, and MayaData, according to Henry Baltazar, storage research director at 451 Research. He said many traditional storage vendors have added container plugins to support Kubernetes. The public cloud could appeal to organizations that don’t want to buy and manage on-premises systems, Baltazar added.

Baltazar advised Red Hat customers moving from Gluster-based OCS to Ceph-based OCS to keep a backup copy of their data to restore in the event of a problem, as they would with any migration. He said any users who are moving a large data set to public cloud storage need to factor in network bandwidth and migration time and consider egress charges if they need to bring the data back from the cloud.


SAP Data Hub opens predictive possibilities at Paul Hartmann

Organizations have access to more data than they’ve ever had, and the number of data sources and volume of data just keeps growing.

But how do companies deal with all the data and can they derive real business use from it? Paul Hartmann AG, a medical supply company, is trying to answer those questions by using SAP Data Hub to integrate data from different sources and use the data to improve supply chain operations. The technology is part of the company’s push toward a data-based digital transformation, where some existing processes are digitized and new analytics-based models are being developed.

The early results have been promising, said Sinanudin Omerhodzic, Paul Hartmann’s CIO and chief data officer.

Paul Hartmann is a 200-year-old firm in Heidenheim, Germany, that supplies medical and personal hygiene products to customers such as hospitals, nursing homes, pharmacies and retail outlets. The main product groups include wound management, incontinence management and infection management.

Paul Hartmann is active in 35 countries and turns over around $2.2 billion in sales a year. Omerhodzic described the company as a pioneer in digitizing its supply chain operations, running SAP ERP systems for 40 years. However, changes in the healthcare industry have led to questions about how to use technology to address new challenges.

For example, an aging population increases demand for certain medical products and services, as people live longer and consume more products than before.

One prime area for digitization was in Paul Hartmann’s supply chain, as hospitals demand lower costs to order and receive medical products. Around 60% of Paul Hartmann’s orders are still handled by email, phone calls or fax, which means that per-order costs are high, so the company wanted to begin to automate these processes to reduce costs, Omerhodzic said.

One method was to install boxes in hospital warehouses, stocked with products and equipped with sensors, that automatically re-order products when stock reaches certain levels. This process reduced costs by not requiring any human intervention on the customer side. Paul Hartmann installed 9,000 replenishment boxes in about 100 hospitals in Spain, which proved adept at replacing stock when needed. But the company then began to consider the next step: how to predict with greater accuracy which products will be needed, when and where, to further reduce the wait time on restocking supplies.
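
A minimal sketch of the trigger logic such a sensor-driven box implies follows; the product IDs, reorder points and ordering call are hypothetical stand-ins, not details of Paul Hartmann's implementation.

```python
# Hypothetical replenishment trigger: re-order when a sensor-reported stock
# level falls to the reorder point. Product IDs, thresholds and the order call
# are placeholders, not details from the article.

REORDER_POINTS = {"wound-dressing-10x10": 20, "incontinence-pads-m": 50}
ORDER_QUANTITIES = {"wound-dressing-10x10": 100, "incontinence-pads-m": 200}

def place_order(product_id: str, quantity: int) -> None:
    # A real system would call the ERP's ordering interface here.
    print(f"Re-ordering {quantity} x {product_id}")

def on_sensor_reading(product_id: str, units_left: int) -> None:
    """Called whenever a box reports its current stock level."""
    if units_left <= REORDER_POINTS.get(product_id, 0):
        place_order(product_id, ORDER_QUANTITIES[product_id])

on_sensor_reading("wound-dressing-10x10", 18)  # triggers a re-order
```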

Getting predictive needs new data sources

This new level of supply chain predictive analytics requires accessing and analyzing vast amounts of data from a variety of new sources, Omerhodzic said. For example, weather data could show that a storm may hit a particular area, which could result in more accidents, leading hospitals to stock more bandages in preparation. Data from social media sources that refer to health events such as flu epidemics could lead to calculations on the number of people who could get sick in particular regions and the number of products needed to fight the infections.

“All those external data sources — the population data, weather data, the epidemic data — combined with our sales history data, allow us to predict and forecast for the future how many products will be required in the hospitals and for all our customers,” Omerhodzic said.

Paul Hartmann worked with SAP to implement a predictive system based on SAP Data Hub, a software service that enables organizations to orchestrate data from different sources without having to extract the data from the source. AI and machine learning are used to analyze the data, including the entire history of the company’s sales data, and after just a few months the pilot project was making better predictions than the sales staff, Omerhodzic said.

“We have 200 years selling our product, so the sales force has a huge wealth of information and experience, but the new system could predict even better than they could,” he said. “This was a huge wake up for us and we said we need to learn more about our data, we need to pull more data inside and see how that could improve or maybe create new business models. So we are now in the process of implementing that.”
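
For illustration only, the sketch below shows the general shape of that idea, joining external signals to sales history and forecasting demand with a standard regressor. It is a toy example with invented numbers and column names, not the SAP Data Hub pipeline or Paul Hartmann's actual model.

```python
# Toy example of "combine external signals with sales history and forecast
# demand". The data, feature names and model choice are invented for
# illustration; this is not the production pipeline described in the article.

import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Stand-in for joined sales history + weather + epidemic signals per region/week.
history = pd.DataFrame({
    "storm_risk":    [0.1, 0.7, 0.2, 0.9, 0.3, 0.8],
    "flu_cases":     [200, 250, 900, 950, 400, 1200],
    "bandages_sold": [510, 640, 580, 760, 530, 820],
})

model = GradientBoostingRegressor(random_state=0)
model.fit(history[["storm_risk", "flu_cases"]], history["bandages_sold"])

# Forecast for a region expecting a storm and a flu spike next week.
upcoming = pd.DataFrame([{"storm_risk": 0.85, "flu_cases": 1100}])
print(f"Forecast units to stock: {model.predict(upcoming)[0]:.0f}")
```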

Innovation on the edge less disruptive

The use of SAP Data Hub as an innovation center is one example of how SAP can foster digital transformation without directly changing core ERP systems, said Joshua Greenbaum, principal analyst at Enterprise Applications Consulting. This can result in new processes that aren’t as costly or disruptive as a major ERP upgrade.


“Eventually this touches your ERP because you’re going to be making and distributing more bandages, but you can build the innovation layer without it being directly inside the ERP system,” Greenbaum said. “When I discuss digital transformation with companies, the easy wins don’t start with the statement, ‘Let’s replace our ERP system.’ That’s the road to complexity and high costs — although, ultimately, that may have to happen.”

For most organizations, Greenbaum said, change management — not technology — is still the biggest challenge of any digital transformation effort.

Change management challenges

At Paul Hartmann, change management has been a pain point. The company is addressing the technical issues of the SAP Data Hub initiative through education and training programs that enhance IT skills, Omerhodzic said, but getting the company to work with data is another matter.

“The biggest change in our organization is to think more from the data perspective side and the projects that we have today,” he said. “To have this mindset and understanding of what can be done with the data requires a completely different approach and different skills in the business and IT. We are still in the process of learning and establishing the appropriate organization.”

Although the sales organization at Paul Hartmann may feel threatened by the predictive abilities of the new system, change is inevitable and affects the entire organization, and the change must be managed from the top, according to Omerhodzic.

“Whenever you have a change there’s always fear from all people that are affected by it,” he said. “We will still need our sales force in the future — but maybe to sell customer solutions, not the products. You have to explain it to people and you have to explain to them where their future could be.”
