
Cisco looks to close gaps in Webex Teams video conferencing

Cisco has promised to bring more advanced video conferencing features to Webex Teams eventually. But for now, users must rely on the vendor’s Webex Meetings product for full-featured video calling.

Cisco has been working for years to bring the two apps closer together. But despite relying on the same cloud infrastructure, Teams still lags behind its collaboration cousin.

Webex Teams lacks polls, in-meeting chat, screen-sharing with remote desktop control, 5×5 video displays and key host settings like the ability to automatically mute attendees upon entry.

What’s more, Webex Teams users cannot access essential video conferencing features without a license for Webex Meetings. Those capabilities include meeting recording, guest access and dial-in numbers.

Despite marketing Webex Teams as an all-in-one collaboration app, Cisco generally sells the product in a bundle with Webex Meetings.

“We are actively working to bring all the advanced video conferencing capabilities of the Webex Meetings to Webex Teams,” Cisco said in an emailed statement.

Later this month, Cisco plans to address one significant shortcoming in Webex Teams by expanding the product’s video display. The app will soon support a 3×3 video grid. But it will still show fewer video panels than Webex Meetings, which has a 5×5 array.

Webex Teams vs. Microsoft Teams

Demand for large group video meetings has soared amid the coronavirus pandemic. People want to be able to see everyone on screen at the same time. Some customers have chosen a video platform based solely on this issue. Cisco did not say when it would enable a 5×5 grid view in Webex Teams.

Another feature missing from Webex Teams is a “health checker” button, like the one in Webex Meetings for troubleshooting connectivity issues. Furthermore, the video interfaces of Webex Teams and Webex Meetings are not identical, which could confuse users who host meetings in both.

Cisco launched Webex Teams as Cisco Spark in 2015. The app initially relied on a separate cloud engine from Webex Meetings. The company later rebranded the product as part of a broader strategy to streamline its portfolio of communications apps.

Unlike competitors Microsoft and Slack, Cisco has not disclosed how many people use its team collaboration app. However, the company said 324 million people attended a Webex meeting in March.

“Obviously, it’s been a work in progress from the Webex Teams side for a couple of years now,” said Josh Warcop, senior solutions engineer at Byteworks, a Cisco reseller. “We’re probably going to see a lot more feature parity here just this year.”

On the flip side, Cisco said it was also working to bring at least two Webex Teams video features to Webex Meetings. One is the ability for anyone to start a meeting, not just the host. The other is the integration of Meetings with video mesh nodes, which let businesses keep some video traffic on premises.

Go to Original Article

AI in mining takes root in the industry

The mining industry has long used technologies such as advanced machinery, satellite imagery and hypersensitive measurement tools. However, it is just beginning to adopt AI, which has the potential to save workers time and companies money.

Geospatial analysis and data science vendor Descartes Labs has many customers in the mining sector, with a few packaged products built specifically for the industry. Based in Santa Fe, N.M., the 2014 startup spun out of Los Alamos National Laboratory, a U.S. Department of Energy weapons research center.

The mining sector is in the early stages of using AI technologies, said James Orsulak, senior director of enterprise sales at Descartes Labs. Still, he said, almost all of the company’s clients have small data science teams made up of highly skilled experts.

“We’re seeing a transition where there are more former geologists who went back to school to get a data science degree,” Orsulak said.

Astral imagery

The Descartes Labs platforms for mining companies combine data sets from NASA’s Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), an advanced imaging instrument on the Terra satellite, with AI and analytics.

Vendors like Descartes Labs sell AI in mining technology.

Descartes Labs ingested the entire dataset from ASTER, a process that took many CPU hours, Orsulak said. Using machine learning, Descartes Labs then removed all the structures, clouds and snow from the satellite images, leaving only a bare earth model.
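The masking step can be pictured as a simple per-pixel filter. The NumPy sketch below is a minimal, hypothetical illustration (not Descartes Labs' actual pipeline): given class labels from a machine learning model, it keeps only the bare-earth pixels of one spectral band.

```python
import numpy as np

# Hypothetical per-pixel labels from an ML classifier
# (0 = bare earth, 1 = structure, 2 = cloud, 3 = snow)
labels = np.array([[0, 2, 0],
                   [1, 0, 3],
                   [0, 0, 0]])

# Reflectance values for one spectral band of the same scene
band = np.array([[0.21, 0.80, 0.19],
                 [0.55, 0.22, 0.90],
                 [0.20, 0.18, 0.23]])

# Keep only bare-earth pixels; everything else becomes NaN so that
# downstream layers (mineral classification, geochemistry) ignore it
bare_earth = np.where(labels == 0, band, np.nan)
```

Applied across an entire archive like ASTER's, the same idea yields the "bare earth model" described above, with clouds, snow and structures stripped out.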


Mining clients then combine their data with the platform and layer in other types of data on the model, including mineral classification, geochemistry and geophysics data.

The platform, among other things, can be used to find new mineral deposits with machine learning, as it can use data on known deposits to find similar, previously unknown deposits.

Manually, that can take months or years, said Lori Wickert, a geologist and principal remote sensing consultant at Newmont Corporation, a gold mining company. 

“Working with the Descartes platform is providing an opportunity to shortcut that process in a major way,” Wickert said, adding that the software has saved her countless hours of manual work.

Another style

Meanwhile, Kespry, an industrial drone software and analytics vendor, also focuses on the mining sector, but with a slightly different approach.

The 2013 startup, based in Menlo Park, Calif., flies industrial drones over mining sites to capture imagery for mine planning and inventory management, said George Mathew, CEO and chairman of Kespry.

Using drone imagery collected either by its own drones or by mining industry customers, along with its data science platform, Kespry maps daily topography changes in active areas, assesses slope stability, identifies drainage patterns and more.

The company can also use the imagery and platform to automatically measure stockpile volumes of mined materials.
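The volume measurement itself reduces to straightforward grid arithmetic once a drone survey has produced an elevation model. A minimal sketch (the grid values, cell size and base elevation are invented for illustration; Kespry's actual method is not detailed here):

```python
import numpy as np

# Hypothetical drone-derived surface elevations (meters) over a
# stockpile, on a grid where each cell covers 0.5 m x 0.5 m of ground
dsm = np.array([[100.0, 101.0, 100.5],
                [101.5, 103.0, 101.0],
                [100.0, 100.5, 100.0]])
cell_area = 0.5 * 0.5                      # ground area per cell, m^2

base = 100.0                               # surveyed elevation of the empty pad
heights = np.clip(dsm - base, 0.0, None)   # material height in each cell
volume = heights.sum() * cell_area         # stockpile volume in m^3
```

Summing (surface minus base) over every cell approximates the integral of the pile's height; for this toy grid it comes to 1.875 cubic meters.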

For mining companies and other industrial businesses that aren’t yet using AI and machine learning, the time to start is now, Mathew said.  

“The companies that end up making those investments now, they end up with a head start,” he said.

Go to Original Article

Deepfakes: Security experts undecided on the threat level

Deepfake technology has advanced at a rapid pace, but the infosec community is still undecided about how much of a threat deepfakes represent.

Many are familiar with deepfakes in their video and image form, where machine learning technology generates a celebrity saying something they didn’t say or putting a different celebrity in their place. However, deepfakes can also appear in audio and even text-based forms. Several sessions at RSA Conference 2020 examined how convincing these fakes can be, as well as technical approaches to refute them. But so far, threat researchers are unsure if deepfakes have been used for cyberattacks in the wild.

In order to explore the potential risk of deepfakes, SearchSecurity asked a number of experts about the threat deepfakes pose to society. In other words, should we be worried about deepfakes?

There was a clear divide in the responses between those who see deepfakes as a real threat and those who were more lukewarm on the idea.

Concern about deepfakes

Some security experts at RSA Conference 2020 feared that deepfakes would be used as part of disinformation campaigns in U.S. elections. McAfee senior principal engineer and chief data scientist Celeste Fralick said that with the political climate being the way it is around the world, deepfakes are “absolutely something that we should be worried about.”

Fralick cited a demonstration of deepfake technology during an RSAC session presented by Sherin Mathews, senior data scientist at McAfee, and Amanda House, data scientist at McAfee.

“We have a number of examples, like Bill Hader morphing into Tom Cruise and morphing back. I never realized they looked alike, but when you see the video you can see them morph. So certainly in this political climate I think that it’s something to be worried about. Are we looking at the real thing?”

Jake Olcott, BitSight’s vice president of communications and government affairs, agreed, saying that deepfakes are “a huge threat to democracy.” He noted that the platforms that own the distribution of content, like social media sites, are doing very little to stop the spread of misinformation.

“I’m concerned that because the fakes are so good, people are either not interested in distinguishing between what’s true and what’s not, but also that the malicious actors, they recognize that there’s sort of just like a weak spot and they want to just continue to pump this stuff out.”

CrowdStrike CTO Mike Sentonas made the point that deepfakes are getting harder to spot and easier to create.

“I think it’s something we’ll more and more have to deal with as a community.”

Deepfake threats aren’t pressing

Other security experts such as Patrick Sullivan, Akamai CTO of security strategy, weren’t as concerned about the potential use of deepfakes in cyberattacks.

“I don’t know if we should be worrying. I think people should be educated. We live in a democracy, and part of that is you have to educate yourself on things that can influence you as someone who lives in a democracy,” Sullivan said. “I think people are much smarter about the ways someone may try to divide online, how bots are able to amplify a message, and I think the next thing people need to get their arms around is video, which has always been an unquestionable point of data, which you may have to be more skeptical about.”

Malwarebytes Labs director Adam Kujawa said that while he’s not so worried about the much-publicized deepfake videos, he is concerned about deepfake text and systems that automatically predict or create text based on a user’s input.

“[That] I see as being pretty dangerous, because if you utilize it with limited input derived from social media accounts, you can create a pretty convincing spear phishing email, almost on the fly.”

That said, he echoed Sullivan’s point that people are generally able to spot when something is obviously not real.

“They are getting better [however], and we need to develop technology that can identify these things you and I won’t be able to, because eventually that’s going to happen,” Kujawa said.

Greg Young, Trend Micro’s vice president of cybersecurity, went as far as to call deepfakes “not a big deal.”

However, he added, “I think where it’s going to be used is business email compromise, where you try to get a CEO or CFO to send you a Western Union payment. So if I can imitate that person’s voice, deepfake for voice alone would be very useful, because I can tell the CFO to do this thing if I’m the person pretending to be the CEO, and they’re going to do it. We don’t leave video messages today, so the video side I’m less concerned about. I think deepfakes will be used more in disinformation campaigns. We’ve already seen some of that today.”

Go to Original Article

Oracle Cloud Infrastructure updates home in on security

SAN FRANCISCO — Oracle hopes a focus on advanced security can help its market-lagging IaaS gain ground against the likes of AWS, Microsoft and Google.

A new feature called Maximum Security Zones lets customers denote enclaves within their Oracle Cloud Infrastructure (OCI) environments that have all security measures turned on by default. Resources within the zones are limited to configurations that are known to be secure. The system will also prevent alterations to configurations and provide continuous monitoring and defenses against anomalies, Oracle said on the opening day of its OpenWorld conference.
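Conceptually, a security zone acts as an allowlist of known-good configurations: any resource that violates a zone rule is rejected rather than created. A toy sketch of that idea (the rule names and resource fields are hypothetical, not Oracle's API):

```python
# Hypothetical zone policy: every rule must hold for a resource
# to be created inside the zone.
RULES = {
    "public_access": lambda value: value is False,    # no public exposure
    "encryption_enabled": lambda value: value is True,
}

def allowed_in_zone(resource: dict) -> bool:
    """Return True only if the resource satisfies every zone rule."""
    return all(check(resource.get(key)) for key, check in RULES.items())

# A publicly accessible bucket is blocked, even though it is encrypted;
# the zone enforces all rules by default rather than trusting the user.
bucket = {"public_access": True, "encryption_enabled": True}
```

This is the inversion of the usual shared-responsibility posture: instead of asking the customer to notice a misconfiguration after the fact, the zone refuses to create it in the first place.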

Through Maximum Security Zones, customers “will be better protected from the consequences of misconfigurations than they are in other cloud environments today,” Oracle said in an obvious allusion to recent data breaches, such as the Capital One-AWS hack, which have been blamed on misconfigured systems that gave intruders a way in.

“Ultimately, our goal is to deliver to you a fully autonomous cloud,” said Oracle executive chairman and CTO Larry Ellison, during a keynote. 

“If you spend the night drinking and get into your Ford F-150 and crash it, that’s not Ford’s problem,” he said. “If you get into an autonomous Tesla, it should get you home safely.”

Oracle wants to differentiate itself and OCI from AWS, which consistently promotes a shared responsibility model for security between itself and customers. “We’re trying to leapfrog that construct,” said Vinay Kumar, vice president of product management for Oracle Cloud Infrastructure.

“The cloud has always been about, you have to bring your own expertise and architecture to get this right,” said Leo Leung, senior director of products and strategy at OCI. “Think about this as a best-practice deployment automatically. … We’re going to turn all the security on and let the customer decide what is ultimately right for them.”

Security is too important to rely solely on human effort.
Holger Mueller, vice president and principal analyst, Constellation Research

Oracle’s Autonomous Database, which is expected to be a big focal point at this year’s OpenWorld, will benefit from a new service called Oracle Data Safe. This provides a set of controls for securing the database beyond built-in features such as always-on encryption and will be included as part of the cost of Oracle Database Cloud services, according to a statement.

Finally, Oracle announced Cloud Guard, which it says can spot threats and misconfigurations and “hunt down and kill” them automatically. It wasn’t immediately clear whether Cloud Guard is a homegrown Oracle product or made by a third-party vendor. Security vendor Check Point offers an IaaS security product called CloudGuard for use with OCI.

Starting in 2017, Oracle began to talk up new autonomous management and security features for its database, and the OpenWorld announcements repeat that mantra, said Holger Mueller, an analyst at Constellation Research in Cupertino, Calif. “Security is too important to rely solely on human effort,” he said.

OCI expansions target disaster recovery, compliance

Oracle also said it will broadly expand OCI’s global cloud footprint, with the launch of 20 new regions by the end of next year. The rollout will bring Oracle’s region count to 36, spread across North America, Europe, South America, the Middle East, Asia-Pacific, India and Australia.

This expansion will add multiple regions in certain geographies, allowing for localized disaster recovery scenarios as well as improved regulatory compliance around data location. Oracle plans to add multi-region support in every country where it offers OCI and claimed this approach is superior to the practice of including multiple availability zones in a single region.

Oracle’s recently announced cloud interoperability partnership with Microsoft is also getting a boost. The interconnect that ties together OCI and Azure, now available in Virginia and London, will also be offered in the Western U.S., Asia and Europe over the next nine months, according to a statement. In most cases, Oracle is leasing data center space from providers such as Equinix, according to Kumar.


SaaS vendors are another key customer target for Oracle with OCI. To that end, it announced new integrated third-party billing capabilities for the OCI software marketplace released earlier this year. Oracle also cited SaaS providers who are taking advantage of Oracle Cloud Infrastructure for their own underlying infrastructure, including McAfee and Cisco.

There’s something of value for enterprise customers in OCI attracting more independent software vendors, an area where Oracle also lags against the likes of AWS, Microsoft and Google, according to Mueller.

“In contrast to enterprises, they bring a lot of workloads, often to be transferred from on-premises or even other clouds to their preferred vendor,” he said. “For the IaaS vendor, that means a lot of scale, in a market that lives by economies of scale: More workloads means lower prices.”

Go to Original Article

Data ethics issues create minefields for analytics teams

GRANTS PASS, Ore. — AI technologies and other advanced analytics tools make it easier for data analysts to uncover potentially valuable information on customers, patients and other people. But, too often, consultant Donald Farmer said, organizations don’t ask themselves a basic ethical question before launching an analytics project: Should we?

In the age of GDPR and like-minded privacy laws, though, ignoring data ethics isn’t a good business practice for companies, Farmer warned in a roundtable discussion he led at the 2019 Pacific Northwest BI & Analytics Summit. IT and analytics teams need to be guided by a framework of ethics rules and motivated by management to put those rules into practice, he said.

Otherwise, a company runs the risk of crossing the line in mining and using personal data — and, typically, not as the result of a nefarious plan to do so, according to Farmer, principal of analytics consultancy TreeHive Strategy in Woodinville, Wash. “It’s not that most people are devious — they’re just led blindly into things,” he said, adding that analytics applications often have “unforeseen consequences.”

For example, he noted that smart TVs connected to home networks can monitor whether people watch the ads in shows they’ve recorded and then go to an advertiser’s website. But acting on that information for marketing purposes might strike some prospective customers as creepy, he said.

Shawn Rogers, senior director of analytic strategy and communications-related functions at vendor Tibco Software Inc., pointed to a trial program that retailer Nordstrom launched in 2012 to track the movements of shoppers in its stores via the Wi-Fi signals from their cell phones. Customers complained about the practice after Nordstrom disclosed what it was doing, and the company stopped the tracking in 2013.

“I think transparency, permission and context are important in this area,” Rogers said during the session on data ethics at the summit, an annual event that brings together a small group of consultants and vendor executives to discuss BI, analytics and data management trends.

AI algorithms add new ethical questions

Being transparent about the use of analytics data is further complicated now by the growing adoption of AI tools and machine learning algorithms, Farmer and other participants said. Increasingly, companies are augmenting — or replacing — human involvement in the analytics process with “algorithmic engagement,” as Farmer put it. But automated algorithms are often a black box to users.

Mike Ferguson, managing director of U.K.-based consulting firm Intelligent Business Strategies Ltd., said the legal department at a financial services company he works with killed a project aimed at automating the loan approval process because the data scientists who developed the deep learning models to do the analytics couldn’t fully explain how the models worked.

We’ve gone from a bottom-up approach of everybody grabbing data and doing something with it to more of a top-down approach.
Mike Ferguson, managing director, Intelligent Business Strategies Ltd.

And that isn’t an isolated incident in Ferguson’s experience. “There’s a loggerheads battle going on now in organizations between the legal and data science teams,” he said, adding that the specter of hefty fines for GDPR violations is spurring corporate lawyers to vet analytics applications more closely. As a result, data scientists are focusing more on explainable AI to try to justify the use of algorithms, he said.

The increased vetting is driven more by legal concerns than data ethics issues per se, Ferguson said in an interview after the session. But he thinks that the two are intertwined and that the ability of analytics teams to get unfettered access to data sets is increasingly in question for both legal and ethical reasons.

“It’s pretty clear that legal is throwing their weight around on data governance,” he said. “We’ve gone from a bottom-up approach of everybody grabbing data and doing something with it to more of a top-down approach.”

Jill Dyché, an independent consultant who’s based in Los Angeles, said she expects explainable AI to become “less of an option and more of a mandate” in organizations over the next 12 months.

Code of ethics not enough on data analytics

Staying on the right side of the data ethics line takes more than publishing a corporate code of ethics for employees to follow, Farmer said. He cited Enron’s 64-page ethics code, which didn’t stop the energy company from engaging in the infamous accounting fraud scheme that led to bankruptcy and the sale of its assets. Similarly, he sees such codes having little effect in preventing ethical missteps on analytics.

“Just having a code of ethics does absolutely nothing,” Farmer said. “It might even get in the way of good ethical practices, because people just point to it [and say], ‘We’ve got that covered.'”

Instead, he recommended that IT and analytics managers take a rules-based approach to data ethics that can be applied to all three phases of analytics projects: the upfront research process, design and development of analytics applications, and deployment and use of the applications.

Go to Original Article

For Sale – Meraki Bundle (MX64, MS120-8LP, MR33, 100 MDM licences) 3 year licence

Brand new and still boxed with unclaimed license key for 3 years.

  • 1x Meraki MX64 Advanced Security License and support 3years
  • 1x Meraki MR Enterprise License 3 years
  • 1x Meraki MS120-8LP Enterprise License and support 3 years
  • 2x Meraki AC Power Cords UK

Due to the cost and size of the item I would rather the buyer comes and collects in person. The license has not been claimed, so it can be added to an existing or new meraki registration.

I will add pictures tonight or tomorrow, but it is all still boxed.

Price and currency: 1200.00
Delivery: Goods must be exchanged in person
Payment method: Cash on collection
Location: Hampshire
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected

This message is automatically inserted in all classifieds forum threads.
By replying to this thread you agree to abide by the trading rules detailed here.
Please be advised, all buyers and sellers should satisfy themselves that the other party is genuine by providing the following via private conversation to each other after negotiations are complete and prior to dispatching goods and making payment:

  • Landline telephone number. Make a call to check out the area code and number are correct, too
  • Name and address including postcode
  • Valid e-mail address

DO NOT proceed with a deal until you are completely satisfied with all details being correct. It’s in your best interest to check out these details yourself.

2018 MIT Sloan CIO Symposium: A SearchCIO guide


Today’s enterprise can be divided into two groups: the departments that are acquiring advanced digital capabilities and those that are lagging behind. This bifurcation of digital prowess was evident at the 2018 MIT Sloan CIO Symposium, where we asked CIOs and digital experts to expound on the factors driving digitalization at enterprises and the barriers holding them back. Not surprisingly, the departments that are customer-facing, such as marketing, are leading the digital transformation charge.

While the transition to a digitalized enterprise is happening at varied speeds for most companies, the need to develop a viable digital business model is universally recognized. Indeed, this year’s event was all about taking action — it is no longer enough just to have a vision for digital transformation, and the conference underscored that: Sessions featured leading CIOs, IT practitioners, consultants and academics from across the globe dispensing hard-won advice on methods for planning and executing a future-forward digital transformation strategy.

In this SearchCIO conference guide, experience the 2018 MIT Sloan CIO Symposium by delving into our comprehensive coverage. Topics include building an intelligent enterprise, talent recruitment, the expanding CIO role and integration of emerging technologies like AI, machine learning, cloud and more.

To view our complete collection of video interviews filmed at this year’s event, please see our video guide: “MIT CIO 2018 videos: Honing a digital leadership strategy.”

1. Thriving in a digital economy

Digital transformation strategy and advice

Implementing a digital transformation strategy requires a clear set of objectives, IT-business alignment, recruitment of the right talent, self-disruption and building what experts call an “intelligent enterprise,” among other things. In this section, the pros discuss the intricacies of leading the digital transformation charge.

2. Technology transformation

Utilizing emergent tech like AI, machine learning and cloud

Every digital transformation requires a future-forward vision that takes advantage of up-and-coming tools and technologies. In this section, academics and IT executives discuss the enterprise challenges, benefits, questions and wide-ranging potential that AI, machine learning, edge computing, big data and more bring to the enterprise.

3. Evolving CIO role

The CIO’s ever-expanding role in a digital world

Digital transformation not only brings with it new technologies and processes, it also brings new dimensions and responsibilities to the CIO role. In this section, CIOs and IT executives detail the CIO’s place in an increasingly digital, threat-laden and customer-driven world and offer timely advice for staying on top of it all.


Interviews filmed on site

During the 2018 MIT Sloan CIO Symposium, SearchCIO staff had the pleasure of conducting several one-on-one video interviews with consultants and IT executives on the MIT campus in Cambridge, Mass. Below is a sampling of the videos.

A link to our full collection of videos filmed at the 2018 MIT Sloan CIO Symposium can be found at the top of this guide.

Driving digital transformation success with Agile

In this SearchCIO video, Bharti Airtel’s CIO Mehta, MIT Sloan CIO Leadership Award winner, explains why implementing Agile methodologies can help organizations scale their digital transformation projects.

For Sale – Bundle for Sale (AMD FX 8320 Cpu-Board-16gb Ram-Cooler) + Modular Psu

CPU: – AMD FX 8320 Processor – Black Edition – 8 Core

Cooler :
be quiet Dark Rock C1 Advanced Intel/AMD CPU Air Cooler

Motherboard : – Asus M5A97 Version 2

RAM: – 2 packs of Corsair 8GB (2x4GB) DDR3 1600MHz Low Profile Vengeance Memory Kit, White Colour, CL9 1.35V (so there are 4 sticks in total)

£150 delivered
PSU : –
EVGA SuperNOVA 550 G2, 80+ GOLD 550W, Fully Modular

£40 delivered

All parts are in excellent condition, out of a PC I built at the beginning of the year.

Would consider splitting if I got buyers for all parts.

Open to options.

Price and currency: £150 , £40
Delivery: Delivery cost is included within my country
Payment method: paypal gift or bt
Location: london
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference


MU-MIMO technology boosts system capacity for WLANs

It was legendary science-fiction writer Arthur C. Clarke who wrote, “Any sufficiently advanced technology is indistinguishable from magic.”

Many would put wireless LAN in the magical category. And if that’s the case, multiple input, multiple output (MIMO) and, most recently, multiuser MIMO (MU-MIMO) technology would really have to be the rabbits in the hat. Even those of us who’ve spent the majority of our careers in wireless networking are amazed at what those technologies can do.

MIMO’s been with us since 802.11n, and it unlocked the amazing performance improvements that extend into the current era. Yet, MIMO is still little understood, even among many network engineers. No surprise there, as MIMO seems to violate many of the laws that govern communications theory.

In a nutshell, MIMO technology takes two-dimensional radio waves — you can think of them as having just frequency and amplitude, which are the variables that we use to modulate a carrier signal, thus embossing the information we wish to transmit on this simple structure — and makes them three-dimensional. The third dimension is space, and that’s why MIMO is often referred to as spatial multiplexing. More dimensions result in a greater capacity to carry information. That’s why MIMO yields such amazing results, without violating any physical laws whatsoever.
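That capacity gain can be made concrete with the standard MIMO channel-capacity formula, C = log2 det(I + (SNR/Nt)·HHᴴ). The NumPy sketch below (with an invented random channel, not a measured one) compares a 4x4 MIMO link against a single-antenna link at the same signal-to-noise ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, snr = 4, 4, 10.0   # 4x4 antennas, linear SNR of 10 (= 10 dB)

# Rayleigh-fading channel: one complex gain per transmit/receive antenna pair
H = (rng.standard_normal((n_rx, n_tx))
     + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

# MIMO Shannon capacity in bits/s/Hz, power split evenly across streams
mimo = np.log2(np.linalg.det(np.eye(n_rx)
                             + (snr / n_tx) * H @ H.conj().T).real)

siso = np.log2(1 + snr)        # single-antenna baseline at the same SNR
```

Even for a random channel draw, the 4x4 link carries several times the spectral efficiency of the single-antenna link at identical power and bandwidth, which is the extra spatial dimension at work.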

Overcoming the wireless interference problem

But there’s more to the performance of wireless communications than simply cramming more bits into a given transmission cycle. Wireless has historically been a serial-access medium: Only one transmitter can be active in any given location at a particular frequency at any given moment in time. Two transmitters in close proximity attempting simultaneous transmission will likely cause mutual interference and at least some degradation to overall system capacity. While licensed radio services, like cellular, can schedule transmissions to avoid this problem, the unlicensed bands have no such controls; stations cannot coordinate with one another and must accept as reality any interference they encounter.


In response, the wireless industry developed many techniques to deal with the potential damage inherent in interference, mostly related to how information is modulated and coded before it’s sent over the air. But a large problem remains: Only one station can transmit at any moment in time. Until recently, this has meant only one receiving client station could be served — again, at any moment in time — and any others desiring communication would have to wait. The result: Overall system capacity was limited.

And that’s just the problem MU-MIMO technology solves. In yet another incarnation of magic, MU-MIMO enables an access point to transmit — simultaneously — to multiple clients in one transmit cycle, with each client receiving a unique data stream from the others also sent at the same time. Now, several stations can receive the data they need with less overall waiting. System capacity goes up. And more users go home — or, better said, stay at work — happy.
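One common way to achieve that simultaneous transmission is linear precoding at the access point. The sketch below (invented channel and idealized conditions; real MU-MIMO adds channel sounding, scheduling and noise) uses a zero-forcing precoder so three single-antenna clients each receive only their own symbol in one transmit cycle:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_antennas = 3, 4      # 4-antenna AP serving 3 clients at once

# One channel row per single-antenna client (invented Rayleigh fading)
H = (rng.standard_normal((n_users, n_antennas))
     + 1j * rng.standard_normal((n_users, n_antennas))) / np.sqrt(2)

# Zero-forcing precoder: pre-invert the channel so streams don't mix
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

symbols = np.array([1 + 0j, -1 + 0j, 1j])   # one symbol per client
received = H @ (W @ symbols)                # all sent in the same cycle

# Each client hears only its own symbol; the cross-talk is nulled
assert np.allclose(received, symbols)
```

Because H·W reduces to the identity matrix, the streams arrive pre-separated in space, which is why no client has to wait its turn.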

802.11ax standard introduces new flavor of MU-MIMO technology

Now, how this technique is actually implemented is so complex that the math involved would make great bedtime reading. Suffice it to say, MU-MIMO technology works very, very well — Farpoint Group’s own testing shows performance gains close to the theoretical maximum. Your mileage will likely vary, but the potential here is enormous. The upcoming 802.11ax standard is expected to add bi-directional MU-MIMO, meaning stations will be able to transmit simultaneously to an access point. Now, we’re talking real magic.

Farpoint Group recommends that new Wi-Fi infrastructure and client purchases specify support for MU-MIMO — yes, you’ll need new gear on both ends; field upgrades are not possible in most cases. But you’ll be glad you specified this requirement. Of course, MU-MIMO technology isn’t really magic, but it certainly looks that way.

GPU implementation is about more than deep learning

When you consider a typical GPU implementation, you probably think of some advanced AI application. But that’s not the only place businesses are putting the chips to work.

“[GPUs] are obviously applicable for Google and Facebook and companies doing AI. But for startups like ours that have to justify capital spend in today’s business value, we still want that speed,” said Kyle Hubert, CTO at Simulmedia Inc.

The New York-based advertising technology company is using GPUs to make fairly traditional processes, like data reporting and business intelligence dashboards, work faster. Using a platform from MapD, Simulmedia has built a reporting and data querying tool that lets sales staff and others in the organization visualize how certain television ads are performing and answer any client inquiries as they come in.

Using GPUs for more than deep learning


GPU technology is getting lots of attention today, primarily due to how businesses are using it. The chips power the training underlying some of the most advanced AI use cases, like image recognition, natural language translation and self-driving cars. But, of course, they were originally built to power video game graphics. Their main appeal is speedy processing power. And while that may be crucial for enabling neural networks to churn through millions of training examples, other use cases benefit from that speed as well.

Simulmedia, founded in 2008, helps clients better target advertising on television networks. Initially, the team used spreadsheets to track metrics on how clients’ advertisements performed. But the data was too large — Simulmedia uses a combination of Nielsen and Experian data sets to target ads and assess effectiveness — and the visualization options were too limited.

Reports had to be built by the operations team, and there was little capability to do ad hoc queries. The MapD tool enables sales and product management teams to view data visualization reports and to do their own queries using a graphical interface or through SQL code.
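The self-service querying described above is easy to picture with a small example. Simulmedia’s actual schema and MapD deployment are not public, so the snippet below is a hypothetical sketch: it uses SQLite purely as a stand-in to show the kind of ad hoc SQL query a salesperson might run against ad-performance data (table and column names are invented for illustration):

```python
import sqlite3

# In-memory stand-in for the real analytics database.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE ad_airings (
        client      TEXT,
        network     TEXT,
        impressions INTEGER
    )
""")
con.executemany(
    "INSERT INTO ad_airings VALUES (?, ?, ?)",
    [("acme", "ESPN", 120_000), ("acme", "ESPN", 95_000),
     ("acme", "CNN", 40_000), ("other", "CNN", 10_000)],
)

# An ad hoc question: how are a client's spots performing by network?
rows = con.execute("""
    SELECT network, SUM(impressions) AS total
    FROM ad_airings
    WHERE client = 'acme'
    GROUP BY network
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('ESPN', 215000), ('CNN', 40000)]
```

The point of running this class of query on GPUs rather than in spreadsheets is scale: the same aggregation pattern holds when the table has billions of rows of Nielsen and Experian data instead of four.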

Business focus pays off in GPU experience


Some benefits of a GPU implementation focused on a standard business process go beyond simply speeding up that process. Hubert said it also prepares the business to deploy the chips more pervasively and readies it for a more AI-driven future.

He said the process of predicting which ads will perform best during particular time slots and on certain networks is heavy on data science. Simulmedia is looking at adding deep learning to its targeting, and these models will train on GPUs. Hubert said starting with GPUs in a standard business application has helped the team build a solid foundation on which to build out more GPU capability.

“There’s a lot of implicit knowledge that’s required to get GPUs up and running,” he said.

Aside from building institutional knowledge around how GPUs work, starting by applying the chips to more traditional use cases also helps to justify the cost, which can be substantial.

“They’re costly when you say, ‘I want a bunch of GPUs, and I don’t know what kind of results I’m going to get,'” Hubert said. “That’s a lot of capital investment when you don’t know your returns. When you do a dual-track approach, you can say, ‘I can get these GPUs, set them up for business users now, and I have a concrete ability to get immediate gratification. Then, I can carve out some of that to be future-looking.'”