
AI in e-commerce helps product sales

Over the last few years, e-commerce companies have made buying and selling items online easier by using AI. EBay, one of the largest e-commerce companies in the world, uses computer vision, natural language processing, machine learning and deep learning to help users.

EBay has invested heavily in developing and deploying AI capabilities. While it doesn’t necessarily do anything unique — competitors including Wayfair and Amazon have developed similar AI in e-commerce tools — what it does appears to benefit sellers and buyers on its platform, which differs markedly from its biggest competitors in being auction-driven and oriented primarily toward sellers.

EBay provides several tools for images, including a search by image function and photo cleanup.

Image recognition

Using the mobile eBay application, buyers and sellers can take a photo of an object, which eBay then matches with similar images on its platform using computer vision and deep learning. The feature has been available since 2017 and has improved as more images have been uploaded for the machine learning algorithms to train on.

Comparable features are available on a number of other platforms, including Google and Amazon. These platforms also have object recognition, enabling users to take a photo of something and see comparable items.
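
For readers curious how this kind of visual search is typically built, the sketch below embeds photos with a pretrained convolutional network and ranks listings by cosine similarity. It is an illustration of the general technique only, not eBay's actual pipeline; the choice of PyTorch, torchvision's ResNet-50 and the file names are assumptions.

```python
# Illustrative sketch of embedding-based image search, not eBay's actual system.
# Assumes PyTorch, torchvision and Pillow are installed; file names are placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

# A pretrained ResNet-50 with its classification head removed acts as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Return an L2-normalized embedding for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = backbone(img).squeeze(0)
    return vec / vec.norm()

# Rank catalog photos by cosine similarity to the buyer's query photo.
query = embed("query_photo.jpg")
catalog = {name: embed(name) for name in ["listing_1.jpg", "listing_2.jpg", "listing_3.jpg"]}
ranked = sorted(catalog, key=lambda name: float(query @ catalog[name]), reverse=True)
print(ranked)
```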

By considering product descriptions as well, the search function improves its accuracy. Sellers can also get automatic pricing recommendations, although that wasn't always the case.

Screenshot: eBay uses AI to automatically identify images and to perform image cleanup.

“Historically we did a really bad job with [pricing models],” said Scot Hamilton, vice president of engineering.

EBay has a lot of unique inventory, Hamilton explained, making it difficult to find true peers to benchmark against for some objects.

Looking at characteristics such as the images, price range, descriptors and title of the listed object, and comparing it to similar objects, among other things, eBay attempts to automatically determine a relative price for the object.

The suggested price is generally slightly lower than the market average to keep inventory moving, Hamilton said. Casual and hobby sellers adopt the suggested price point around 80% of the time, he said.
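
A rough way to picture this kind of pricing recommendation is a nearest-neighbor model over comparable sold listings, with a small discount applied to the estimate, as Hamilton describes. The sketch below is illustrative only, not eBay's model; the scikit-learn approach, the toy features and the 5% discount are assumptions.

```python
# Illustrative nearest-neighbor price suggestion, not eBay's actual model.
# Assumes scikit-learn and NumPy; the features, data and 5% discount are made up.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

# Toy features for comparable sold listings: [condition score, title similarity, category demand].
X_sold = np.array([[0.9, 0.8, 0.7],
                   [0.6, 0.9, 0.5],
                   [0.8, 0.7, 0.9],
                   [0.5, 0.6, 0.4]])
y_price = np.array([120.0, 95.0, 140.0, 70.0])

scaler = StandardScaler().fit(X_sold)
model = KNeighborsRegressor(n_neighbors=3).fit(scaler.transform(X_sold), y_price)

# Estimate a market price for a new listing, then discount slightly to keep inventory
# moving, mirroring the behavior described in the article.
new_listing = scaler.transform(np.array([[0.85, 0.75, 0.8]]))
market_estimate = float(model.predict(new_listing)[0])
suggested = round(market_estimate * 0.95, 2)
print(f"market estimate: {market_estimate:.2f}, suggested price: {suggested:.2f}")
```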

AI in e-commerce

The platform also boasts an image cleanup capability for sellers. The feature, still in beta, takes an image and tries to automatically separate the featured object from visual clutter in the background.  

“Search engines these days require, in many cases, a white, clean background on photos,” said Harry Temkin, vice president of seller experience.

Sellers, he continued, “often take pictures in very interesting places,” like on the stairs, in a kitchen or in a garage.

The beta feature automatically crops the featured item from the photo. For now, manual input is still required in many cases, with users having to swipe around the edges of an object. However, the feature is getting sharper, Temkin said.


“It is software that is continuously learning,” he said. The more photos that go through it, the better it will work.
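
As an illustration of automatic foreground separation, the sketch below uses OpenCV's classical GrabCut algorithm seeded with a rough rectangle, standing in for the user's swipe around the object. It is not the learning-based cleanup eBay describes; the library choice and file names are assumptions.

```python
# Illustrative background removal with OpenCV's classical GrabCut algorithm,
# not the learning-based cleanup eBay describes. Assumes opencv-python and NumPy.
import cv2
import numpy as np

img = cv2.imread("listing_photo.jpg")
mask = np.zeros(img.shape[:2], np.uint8)

# A rough rectangle around the item stands in for the user swiping around its edges.
h, w = img.shape[:2]
rect = (int(0.1 * w), int(0.1 * h), int(0.8 * w), int(0.8 * h))

bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground, then paste the item onto a clean white background.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
clean = img * fg[:, :, None] + 255 * (1 - fg[:, :, None])
cv2.imwrite("listing_photo_clean.jpg", clean)
```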

Besides its image features, eBay provides home-grown automatic translation, enabling buyers and sellers in different countries to see listings in their own languages.

The translation happens behind the scenes, Hamilton said, with users not necessarily realizing it’s even happening.

According to Hamilton, eBay’s model is 5% or 6% more accurate than off-the-shelf products.

“Being a global platform … not everyone speaks English,” Temkin said. “Being able to use machine translation to convert an English listing into a German listing or a Spanish listing or a French listing is useful.”
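
To give a sense of how listing text can be machine translated, the sketch below runs an English title through an open-source English-to-German model from Hugging Face. It is a generic illustration, not eBay's home-grown system; the model choice is an assumption.

```python
# Illustrative machine translation of a listing title with an open-source model,
# not eBay's in-house system. Assumes the Hugging Face transformers package
# (with a backend such as PyTorch); the Helsinki-NLP model choice is an assumption.
from transformers import pipeline

translate_en_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

title = "Vintage mechanical wristwatch, excellent condition"
result = translate_en_de(title)
print(result[0]["translation_text"])
```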


AIOps exec bets on incident response market shakeup

AIOps and IT automation have been hot topics in IT for about three years, but the ultimate vision of hands-off incident response has yet to be realized in most IT shops, says Vijay Kurkal, who was appointed CEO of Resolve Systems on Jan. 16. Kurkal had served as chief operating officer for the company since 2018.

Kurkal’s key priority in the first quarter of 2020 is the release of Resolve Insights, a platform that folds AIOps IP from the company’s August 2019 acquisition of FixStream into its IT automation software. While enterprise IT pros have been slow to trust such systems — which rely on AI and machine learning data analytics to automate common tasks such as server restarts — they have begun to find their way into production use at mainstream companies.


Resolve faces a crowded field of competition that includes vendors with backgrounds in IT monitoring, incident response and data analytics. SearchITOperations.com had a conversation with Kurkal this week about how the company plans to hold its own in this volatile market.

Your product pitch sounds familiar to me. I’m sure I don’t have to tell you there are many vendors out there pursuing a vision of proactive IT automation assisted by AI. How will Resolve and FixStream be different?

Vijay Kurkal: There are two ecosystems we’re playing with. There are application monitoring tools like AppDynamics, Dynatrace, New Relic, etc. The users that they are going after are the application operations team. FixStream is complementary to them. But they have limited visibility into hypervisors and deep into the network infrastructure. FixStream builds out a visual topology of every single infrastructure device that a particular application is touching, and all the events are overlaid on that. It’s [built] for the IT operation teams that are supporting critical applications.

Some of the other AIOps vendors using AI technologies, they have tons of algorithms, but any algorithm is only as good as the source data. It’s garbage in, garbage out. Our starting point is always around relationship dependency mapping and getting data in context, and prioritizing what to act on. A second differentiator is that AI/ML algorithms are all based on a probabilistic model. [They] say what they believe are the potential root causes [of an issue], but they can’t say that with certainty. Where we’re taking it is, as soon as those events trigger an alert from FixStream, Resolve automates diagnostics. Typically, that requires a network engineer. We’re already trying this out with some pilot customers and by end of Q1 are going to have a product there. Most AIOps companies identify events; they don’t resolve them.


Is there a plan for IT automation beyond diagnostics?

Kurkal: The next step, and I don’t think most customers are there yet, is, ‘I’ve done this 10 times, and I feel very comfortable, just run this [process] automatically.’ You’ll have categories of events — there’ll be 30% that are not super critical. As the organization gets comfortable, these can be completely [automated]. Then there are 50% that are critical, and we can give them diagnostics, and point them in the right direction to solve them pretty quickly. Then 10% will be outliers where no automation can help, and that’s where IT ops experts will always be very, very relevant to run the business.

Another important aspect of observability is informing the development of the product at the other end of the DevOps pipeline. How does your product work within that process?

Kurkal: The people who build the applications know exactly what stresses their application is putting on various elements [of the infrastructure]. We want to equip the DevOps team with a drag-and-drop system to write automation — to tell the IT operations team, here’s the configuration of the infrastructure I’ll need, and here’s a set of diagnostic scripts, and remediation automation that’s pre-approved. And then it’ll be a closed feedback loop where the operations teams can give feedback [to the developer]. We’re not saying we’ll solve every need of the application, but we are trying to bring together these two teams to drive automation and intelligence.

There are some tools that specifically tie outages or incidents to changes in code — could Resolve make another acquisition in that space or further build out its products to address that too?

Kurkal: For us, it’s a strong possibility in late 2020 or in 2021. It might be an organic development of our products, or potentially, an inorganic acquisition around that. But we do see that’s where the market is moving, because no one wants to be reactive, and they want to have it all together.


For Sale – Antec ISK600M HTPC case and Intel bundle

I’ve had this case for a couple of years and it has served me well. Will take mATX mobos and a full ATX PSU. Haven’t had problems with dual-fan GPUs but I suspect some very long, three-fan ones may not fit. In very good condition with enough space for up to nine 2.5″ disks or up to three 3.5″ & three 2.5″.
Doesn’t come with a PSU but I have a couple of 80+ Gold rated ones (SFX and ATX) if interested.

Bundle
Asus matx skt1150
Intel 4670k
8gb ddr3 (2x4GB) HyperX
£110

Location
Bristol
Price and currency
50
Delivery cost included
Delivery is NOT included
Prefer goods collected?
I prefer the goods to be collected
Advertised elsewhere?
Not advertised elsewhere
Payment method
BT



CES 2020: PC Gaming Experiences Designed for Everyone – Xbox Wire

This year’s Consumer Electronics Show (CES) in Las Vegas kicked off 2020 with a look at what’s in store for players, with exciting innovations in PC gaming and Microsoft’s device partners announcing some of the best upcoming hardware and software in the industry.

From the thinnest and lightest gaming laptops yet, to immersive monitors giving players a deeper, more robust experience, in addition to new gaming desktops and graphics cards, there’s plenty for PC gamers to be excited for in the year ahead.

To catch you up on all the news from last week, we’ve wrapped up all the CES 2020 announcements from Acer, Asus, Dell, Lenovo, and iBuyPower below.

Acer

Acer introduced new Predator monitors offering gamers a more immersive and expansive view of their play.

  • The Predator CG552K features a huge 55-inch 4K OLED panel that’s Adaptive Sync and HDMI VRR (Variable Refresh Rate) compatible, making it ideal for hardcore PC and console gamers wanting a higher vantage point. A separate 37.5-inch monitor increases gaming immersion with a 2300R curved UWQHD+ panel and Vesa DisplayHDR 400 certification that makes colors pop.
  • The 32-inch Predator X32 gaming monitor reproduces brilliant visuals with Nvidia G-Sync Ultimate, Vesa Display HDR 1400 certification and 89.5% Rec. 2020, perfect for gamers who also create their own videos.

Asus

Asus released new Strix gaming desktops, the Zephyrus G14 laptop and TUF laptops, presenting device options for every type of gamer.

  • Asus Republic of Gamers (ROG) debuted a handful of new Strix models: Strix GA35 and GT35 gaming desktops to get players tournament-ready for competitive esports. They’re engineered to sustain smooth gameplay under serious pressure and offer the flexibility to do everything from producing top-quality streams to developing games. In addition to those new gaming devices, Asus ROG also announced new Strix GA15 and GT15 gaming desktops that focus on gaming fundamentals for competitive esports players on a budget. Lean and lightweight, these leverage powerful, latest generation processors to capably handle hardcore gaming, streaming and multitasking. These use the latest 3rd Generation AMD Ryzen CPUs and upcoming 10th Generation Intel Core processors.
  • The Zephyrus G14 brings premium innovations to a wider audience with an ultra-slim form factor at just 17.9mm thin and 1.6kg, all without compromising performance. The Zephyrus G14 gaming notebook features RTX graphics for high frame rates when playing popular games, and also lets gamers choose between high refresh or high resolution for their display; the choice of 120Hz refresh rate or WQHD resolution panels allows users to optimize for gaming or creating content. G14 has an optional AniMe Matrix display that deepens personalization, enabling users to show custom graphics, animations and other effects across more than a thousand mini LEDs embedded in the lid.
  • The 15-inch TUF Gaming A15 and F15, along with their 17-inch A17 and F17 siblings, deliver an unprecedented experience for the price. Key to the experience is potent processing power, thanks to a choice between 4th Gen AMD Ryzen Mobile CPUs and upcoming 10th Gen Intel Core processors. Nvidia Turing-based GPUs up to the GeForce RTX 2060 feed frames into fast displays that refresh at up to 144Hz and use AMD FreeSync technology to ensure smoother, tear-free gaming across a wide range of titles.

Dell

Dell announced the new Alienware gaming monitor and a redesigned Dell G5 15 SE laptop with new features and enhanced performance.

  • Built for speed with 99% sRGB color coverage, the new Alienware 25 Gaming Monitor features fast IPS technology that offers rich colors, a 240Hz refresh rate and a 1 millisecond response time, all in native FHD resolution. It also has AMD Radeon FreeSync and is G-Sync compatible.
  • The newly redesigned Dell G5 15 SE (Special Edition) is the first Dell G Series laptop to feature 3rd Gen AMD Ryzen 4000 H-Series Mobile Processors (up to 8 cores and 16 threads) paired with the latest AMD Radeon RX 5000M Series graphics. The two chips work seamlessly together using AMD SmartShift technology to optimize performance by automatically shifting power as needed between the Ryzen processor and Radeon graphics, giving gamers precisely what they want at each moment of play.

Lenovo

Lenovo released a number of new performance monitors and laptops, giving gamers a variety of devices to choose how they want to enhance their battle experience.

  • With the new premium Lenovo Q27h Monitor, users can seamlessly switch between entertainment and their latest creative project. Its 27-inch QHD (2560 x 1440) IPS panel provides high resolution and 350 nits of brightness. The four-sided near-edgeless bezel brings a noticeably wider viewing experience for the hottest gaming titles, with a super-fast 4ms response time and a smooth 75Hz refresh rate to reduce motion blur in the game.
  • The Lenovo Legion Y740S is Lenovo’s thinnest and lightest gaming laptop yet with up to eight hours of battery life. It’s got up to 10th Gen Intel Core i9 processors (coming soon) reaching more than 5 GHz and Q-Control, with which users can shift gears with a simple press of their Fn+Q keys. Jump into Performance Mode for higher frame rates, down-shift into Quiet Mode for better battery life to watch a movie or stay the course in Balance Mode for day-to-day usage. Made with long-term gaming usage in mind, enjoy the new tactile feel of the Lenovo Legion keyboards, featuring quick response time with 100% anti-ghosting, improved ergonomic key size and responsive switches designed for smoother typing and gameplay.
  • Stay focused on the game with the new Lenovo Legion Y25-25 Gaming Monitor with a 24.5-inch, Full HD IPS panel display built into the near-edgeless chassis. Crank up refresh rates all the way to 240Hz—more FPS means that more data flows between the GPU and monitor, helping to eliminate tearing in most multiplayer games. It comes with anti-glare panel and up to 400 nits of brightness and is TÜV Rheinland Eye Comfort Certified to reduce eye strain. Curved monitors make gaming more immersive and comfortable, as the curve simulates a more natural viewing experience for your eyes, neck and head—allowing the gamer to see all the action at once.
  • The new 31.5-inch Lenovo G32qc Gaming Monitor has near-edgeless bezel QHD (2560 x 1440) screen resolution for clear visuals and superior picture quality. Catch every player movement with its wide viewing angle, high-screen brightness and excellent contrast ratio.
  • Or, choose the heavy-duty yet compact 27-inch Full HD (1920 x 1080) resolution display on the Lenovo G27c Gaming Monitor — both monitors have a curvature of 1500R for complete game immersion. The latter is engineered to deliver virtually tear-free and stutter-free gameplay and is capable of an amazingly high refresh rate of up to 165Hz, helping to rid gaming distractions such as choppy images, streaks and motion blur.

iBuyPower

iBuyPower showed off an expansion of its Element Case line and next-gen Revolt Series.

  • For a different take on the traditional PC layout, the Element Dual features a binary chamber design. With the PSU mounted vertically on the bottom right side of the case and hidden behind the motherboard tray, users will be left with an open aesthetic on the left side and substantial space for maximum component compatibility. The Element CL case, used in pre-built systems, is designed with an integrated front panel distribution plate for easier bends and less complicated routing.
  • The Revolt GT3 will take on a new aesthetic compared to the asymmetrical design of its predecessors, housing small form factor systems and providing support for ITX motherboards and SFX power supplies up to 750W. Systems constructed in it will be mounted to and suspended inside an outer frame by flexible rubber supports designed to add both cushion from shock and vibration damping.

These are just some of the new products that are bringing powerful experiences to Windows 10 gamers in 2020. Check back on Xbox Wire or the Windows Experience blog to keep up with the latest PC gaming product releases and news.

Author: Microsoft News Center

NSA reports flaw in Windows cryptography core

After years of criticism from the infosec community about hoarding critical vulnerabilities, the National Security Agency may be changing course.

The highlight of Microsoft’s first Patch Tuesday of 2020 is a vulnerability in the Windows cryptography core first reported to the vendor by the NSA. The flaw in the CryptoAPI DLL (CVE-2020-0601) affects Windows 10 and Windows Server 2016 and 2019. According to Microsoft’s description, an attacker could exploit how Windows validates ECC certificates in order to launch spoofing attacks.

The NSA gave a more robust description in its advisory, noting that the Windows cryptography flaw also affects “applications that rely on Windows for trust functionality,” and specifically impacts HTTPS connections, signed files and emails and signed executable code.

“Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities,” NSA wrote in its advisory. “NSA assesses the vulnerability to be severe and that sophisticated cyber actors will understand the underlying flaw very quickly and, if exploited, would render the previously mentioned platforms as fundamentally vulnerable. The consequences of not patching the vulnerability are severe and widespread. Remote exploitation tools will likely be made quickly and widely available.”

Will Dormann, vulnerability analyst at the CERT Coordination Center, confirmed the issue also affects X.509 certificates, meaning an attacker could spoof a certificate chain to a trusted root certificate authority and potentially intercept or modify TLS-encrypted communication.

Johannes Ullrich, fellow at the SANS Internet Storm Center, said the flaw is especially noteworthy because “the affected library is a core component of the Windows operating systems. Pretty much all software doing any kind of cryptography uses it.”

“The flaw is dangerous in that it allows an attacker to impersonate trusted websites and trusted software publishers. Digital signatures are used everywhere to protect the integrity and the authenticity of software, web pages and, in some cases, email,” Ullrich told SearchSecurity. “This flaw could be used to trick a user into installing malicious software. Most endpoint protection products will inspect the digital signature of software the user installs, and consider software created by trusted organizations as harmless. Using this flaw, an attacker would be able to attach a signature claiming that the software was created by a trusted entity.”

However, Craig Young, computer security researcher for Tripwire’s vulnerability and exposure research team, said the impact of this Windows cryptography vulnerability might be more limited to enterprises and “most individuals don’t need to lose sleep over this attack just yet.”

“The primary attack vectors most people would care about are HTTPS session compromise and malware with spoofed authenticode signatures. The attack against HTTPS, however, requires that the attacker can insert themselves on the network between the client and server. This mostly limits the attack to nation-state adversaries,” Young told SearchSecurity. “The real risk is more likely to enterprises where a nation-state attacker may be motivated to carry out an attack. The worst-case scenario would be that a hostile or compromised network operator is used to replace legitimate executable content from an HTTPS session with malicious binaries having a spoofed signature.”

Beyond patching, NSA suggested network prevention and detection techniques to inspect certificates outside of Windows cryptography validation.

“Some enterprises route traffic through existing proxy devices that perform TLS inspection, but do not use Windows for certificate validation. The devices can help isolate vulnerable endpoints behind the proxies while the endpoints are being patched,” NSA wrote. “Properly configured and managed TLS inspection proxies independently validate TLS certificates from external entities and will reject invalid or untrusted certificates, protecting endpoints from certificates that attempt to exploit the vulnerabilities. Ensure that certificate validation is enabled for TLS proxies to limit exposure to this class of vulnerabilities and review logs for signs of exploitation.”
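
As a concrete illustration of validating certificates outside Windows, the short sketch below parses an X.509 certificate with Python's cryptography package and flags ECC keys that don't use a common named curve. It is a simplified example in the spirit of the NSA's advice, not a complete mitigation for CVE-2020-0601; the allow-list of curves and the file name are assumptions.

```python
# Illustrative check of an X.509 certificate's ECC key outside Windows' CryptoAPI,
# in the spirit of the NSA's advice to validate certificates independently.
# Assumes the 'cryptography' package; the allow-list of curves is an example, not policy,
# and this is not a complete mitigation for CVE-2020-0601.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec

EXPECTED_CURVES = {"secp256r1", "secp384r1", "secp521r1"}

def check_ecc_certificate(pem_bytes: bytes) -> None:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    key = cert.public_key()
    if not isinstance(key, ec.EllipticCurvePublicKey):
        print("Not an ECC certificate; no curve check performed.")
        return
    subject = cert.subject.rfc4514_string()
    if key.curve.name in EXPECTED_CURVES:
        print(f"{subject}: uses named curve {key.curve.name}")
    else:
        print(f"{subject}: WARNING, unexpected curve '{key.curve.name}'")

with open("server_cert.pem", "rb") as f:
    check_ecc_certificate(f.read())
```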

NSA takes credit

Infosec experts credited the NSA for not only reporting the Windows cryptography flaw but also providing detailed insight and advice about the threat. Chris Morales, head of security analytics at Vectra, based in San Jose, Calif., praised the NSA for recommending “leveraging network detection to identify malicious certificates.”

“I think they did a great job of being concise and clear on both the problem and recommended courses of action,” Morales told SearchSecurity. “Of course, it would be great if the NSA did more of this, but it is not their normal job and I wouldn’t expect them to be accountable for doing a vendor job. Relying on the vendor for notification of security events will always be important.”

Young also commended the NSA’s advisory for being very helpful and providing “useful insights which are not included in either the CERT/CC note or the Microsoft advisory.”

The NSA is designated as the Executive Secretariat of the government’s Vulnerabilities Equities Process (VEP), designed to organize the process of determining what vulnerabilities found by federal agencies would be kept secret and which would be disclosed. However, the NSA has consistently received criticism from experts that it keeps too many vulnerabilities secret and should disclose more in order to help protect the public. In recent years, this criticism was loudest when leaked NSA cyberweapons were used in widespread WannaCry attacks.

The NSA advisory for the Windows cryptography flaw is rare for the agency, which has become more open with warnings about potential threats but hasn’t been known to share this kind of technical analysis.

Also making this vulnerability an outlier is that the NSA was given attribution in Microsoft’s patch acknowledgements section. Anne Neuberger, deputy national manager at the NSA, said on a call with news media Tuesday that this wasn’t the first vulnerability the NSA has reported to Microsoft, but it does mark the first time the agency accepted attribution.

Infosec journalist Brian Krebs, who broke the story of the Windows cryptography patch on Monday, claimed sources told him this disclosure may mark the beginning of a new initiative at NSA to make more vulnerability research available to vendors and the public.

NSA did not respond to requests for comment at the time of this writing.


Meet the 2020 Imagine Cup Asia Regional Finalists

For 18 years, student developers have brought their unique technology solutions to life with Imagine Cup to make a difference in the world around them. Starting with just an idea, students form teams of one to three people and leverage Microsoft technology to develop purpose-driven applications from what they’re most passionate about.

This competition year is no different, and the journey to the 2020 Imagine Cup World Championship is kicking off with the selection of the Asia Regional Finalists! From hundreds of teams who submitted projects to the Asia Online Semifinals, 10 teams have been chosen to advance to the Asia Regional Final in Singapore this February. Encompassing solutions ranging from a drug scanning app that checks authenticity and allergens, to a real-time computer vision physiotherapy tool, to an immersive virtual reality experience that helps young students learn about different cultures, these student innovations are truly incredible and have the chance to create global impact.

At the Regional Final, all teams will participate in an Entrepreneur Day, receive in-person pitch training through the U.S. government’s Global Innovation through Science and Technology (GIST) initiative, and compete for prizes totaling over USD 20,000 in cash plus Azure credits. The top two teams will win spots in the 2020 Imagine Cup World Championship in Seattle, Washington, to present their projects live for the chance to take home USD 100,000 and a mentoring session with Microsoft CEO Satya Nadella.

We’re excited to introduce this year’s Asia Regional Finalist teams!

Altruistic

Indonesia

Tanah Airku: Tanah Airku is immersive learning media using books, AR, and VR to deliver a complete cultural learning experience for children from 1st to 3rd grade.

Blume-India

India

Seguro Droga: The team developed an Android application which lets patients scan a drug’s RFID card to determine authenticity using Hyperledger Fabric on Azure VM, manage their drug purchases, and set filters for allergens.

EDVR

Nepal

EDVR: EDVR is a voice-controlled immersive Virtual Reality experience for dyslexic students enrolled in STEM education. EDVR aims to solve the problem of imparting STEM education for students with learning disabilities by enabling them to visualize, comprehend, and conceptualize.

Hollo

Hong Kong

Hollo: Hollo is a Social Technology Enterprise based in Hong Kong. The team is developing a comprehensive tool for NGOs, therapists, and youth living with mental illness to advance therapy practices using technology such as Big Data and Artificial Intelligence.

Muses

China

AI Composition System: The Muse Artificial Intelligence Composer is a low-cost, AI-driven solution for creating music for commercial use, aimed at commercial projects that have lower creative requirements and need to be more cost effective.

Nutone 

Japan

NUTONE: The team’s device restores the ability to speak for patients who have lost their voice (for reasons such as laryngectomies).

TAZS

India

FaceTag: The team created a solution for bottlenecked gateways in the daily commute, specifically the entry and exit points at metro stations. FaceTag enables commuters to simply walk in, have their face scanned, and have the fare deducted automatically from their wallet.

Team Zest

Singapore

Dr. Rehab: Dr. Rehab is a mobile application for real-time physiotherapy supervision through computer vision. Users can access the rehabilitation exercises assigned to them, follow guided instructions, and receive feedback while completing their exercises.

Tulibot Team

Indonesia

Tulibot: Tulibot is an integrated assistive device to bridge the communication gap for the deaf by providing a smart glove (gesture to text) and smart glasses (speech to text).

Vibra

Singapore

Vibra-IntelliSense: Vibra-IntelliSense aims to help companies transition from traditional preventive maintenance to predictive maintenance through the use of sensors. The sensors capture machine vibrations to detect anomalies and recommend maintenance efforts.

Congratulations to our finalists! Follow their competition journey on Twitter and Instagram as they head to Singapore in February to compete in the Regional Final, co-located with Microsoft Ignite | The Tour. Students will have the opportunity to connect with the tech community and get hands-on with the latest in developer tools and cloud technologies.

Are you passionate about using tech for social good to solve some of today’s most pressing challenges? Imagine Cup Asia and EMEA submissions are now closed but Americas regional submissions are open until January 15! Register now for a chance to join students across the globe making an impact with technology.

Author: Microsoft News Center

Vendors detail cloud-based backup past, present, future

It’s safe to say cloud-based backup has gone mainstream.

In the last five years, cloud backup grew from something that organizations often greeted with skepticism to a technology that’s at least a part of many businesses’ data protection plans.

Some of that evolution is a result of users getting more comfortable with the idea of backing up data in the cloud and with the security of doing so. Some of it is a result of vendors adding functionality such as security, backup of cloud-born software as a service (SaaS) data and other enhancements. Challenges remain, though.

In part one of this feature, several experts in cloud-based backup detailed how the market has developed and what businesses can expect in the years to come. In part two, executives from backup vendors, including cloud backup pioneers, discuss their impressions of the past, present and future of the technology.

How has cloud-based backup evolved in the last five years?

Eran Farajun, executive vice president, Asigra: [Cloud-based backup has] become a lot more mainstream as a service.


Because it’s become so popular, it’s become a target. So, it’s moved from becoming a defensive mechanism to it becoming an attack vector. It’s a way that people get attacked, which has then caused even more evolution in the last, I would say two or three years, where cloud backup now has to include security and safety elements. Otherwise, you’re not going to be able to recover confidently with it.

Hal Lonas, CTO, Carbonite: We have seen a rise in the popularity of cloud, especially over the past five years as it becomes a more scalable and economical solution for businesses — particularly SMBs that are expanding rapidly. It has also been highly embraced by the service provider and solution market.

Public cloud has also come a long way, especially among highly regulated industries such as healthcare and finance. We’re seeing these organizations turn to the cloud more frequently than before, as it provides an easier and more cost-effective way to meet their recovery time objective and recovery point objective requirements.

Danny Allan, CTO, Veeam: The first perspective of customers was, ‘I’ll just take my backups and [move them] to the cloud,’ and there wasn’t really thought given to what that meant.

We’ve become a lot more efficient about the data movement, both in and out, and secondly, there are now options that didn’t exist in the past. If you need to recover data in the cloud, you can, or you can recover back on premises. And if you are recovering it back on premises, you can do that efficiently.


Oussama El-Hilali, CTO, Arcserve: [There has been] tremendous evolution both in quantity and quality of the cloud backup. We’ve seen a number of vendors emerge to provide backup to the cloud. We’ve seen the size of the backups grow. We’ve seen the number of people who are interested in going to cloud backup grow as well.

I think one of the fundamental things in data protection has been creating the distance between the primary and secondary data, in case of disaster.

Where are we in the story of cloud-based backup? Is it at the height of its popularity?

Farajun: I don’t think it’s at the height. It’s still growing fairly quickly as an overall service. So, it’s not flat; it’s still growing in double-digit figures year over year.

And I think what lends to its popularity is future evolution. It’ll get more secure. It has to be more secure.

There will be new types of workloads that get included as part of your backup service. For example, backing up machines today is fairly common. Backing up containers is not as common today, but it will be in three to five years.


I think cloud backup for SaaS applications [will grow]. A lot of cloud backup services and vendors support Office 365, Salesforce and G Suite, but as more and more end customers adopt more software as a service, the data itself also has to be protected. So, you’ll see more cloud backup functionality and capabilities protect a broader set of SaaS applications beyond just the big ones.

Lonas: The cloud market is mature and is fast becoming the infrastructure of choice for many companies, whether at the SMB or enterprise level. This can be proven with the popularity of Microsoft Azure, AWS and Google along with other cloud providers.

Right now, many still equate cloud with security and while cloud solves some problems, it is not a complete cure. Rather, we will see more cloud-oriented security solutions protecting cloud assets and their specific issues in the upcoming years.

One of the biggest pain points with cloud adoption today is migrating data to these infrastructures. The good news is that there are a number of tools available now to alleviate the traditional issues related to data loss, hours of downtime and diverted key resources.

Allan: We’re not at the height of its popularity. We’re in early stages of customers sending their data into the cloud. It’s been growing exponentially. I know cloud has been around for 10 years, but it’s only really in the last year that customers are actually sending backup data into the cloud. I would attribute that to intelligent cloud backup — using intelligence to know how to do it and how to leverage it efficiently. 

El-Hilali: It’s a good step, but we’re not at the peak, or anywhere close to the peak.

The reason is that if you look at the cloud providers, whether it’s public cloud like AWS or companies like us, the features are still evolving. And the refinement is still ongoing.

What do you expect in the cloud backup market in the next five years?

Farajun: I think there will be more consolidation. I think that more of the old-school vendors, the big broad vendors, will continue to add more cloud backup service capability as part of their offerings portfolio. They’ll either acquire companies that do it or they will stand up services that do it themselves. There will be more acquisitions by bigger MSPs that buy smaller MSPs because they deliver cloud backup services and they have the expertise.

I think you’ll see an increase of channel partners bringing [cloud-based backup] back in-house and actually being the service provider instead of just being a broker. And that will happen because it adds more value to their business.

And, unfortunately, I think you’ll see ransomware attacking more and more backup software, whether it’s delivered as a service or on premises, just because it’s so damaging.

Lonas: Looking ahead, we will see cloud backup and data protection continue to gain popularity, especially as businesses implement cyber-resiliency plans.

More organizations now trust the cloud to be available, secure and meet their business needs. We will continue to see Moore’s Law drive down network and storage costs so that businesses can continue to reduce their on-premises footprint. Some of this change is technical, and some is cultural, as most of us trust the cloud in our personal lives more than businesses do; and we expect to see this trend continue to shift for businesses in the future.

Allan: I think there’s going to be a whole emergence of machine learning-based companies that exist only in the cloud, and all they need is access to your data. In the past, what was the problem with machine learning and artificial intelligence on premises? You had to install it on premises to get access to that data or you needed to pick up petabytes of data and get it to that company. If it’s already there, you can imagine a marketplace emerging that will give you value-added services on top of this data.

El-Hilali: I think the potential for DRaaS will continue to grow and I say that because the availability of the data, the spontaneity of recovery, is becoming more of a need than a good-to-have.


For Sale – For parts or complete. Desktop CAD/Photoshop etc. i7, Nvidia quadro…

Selling my project PC. Has been used (successfully) as a CCTV server for the past 18 months – 2 years without ever being pushed. All parts were bought new but no retail packaging. Please assume no warranty. No operating system installed either. Selling as we’ve now upgraded to a dedicated Xeon server. Parts listed below.

Generic desktop tower case.
Supermicro C7H270-CG-ML motherboard.
Intel i7 7700 3.6 ghz with stock cooler.
PNY Nvidia quadro M2000 4gb.
Kingston hyperx fury DDR4 16gb RAM (2x8gb).
Seagate Skyhawk 4tb HDD (NO OS).
ACBEL 300w PSU.

Aside from the PSU this is a solid machine with decent potential. Could easily be used for gaming with one or two changes and could be used for CAD or Photoshop as is (or just change the PSU). This handled HIKVision and up to 56 cameras (we had 13 on screen at any one time, could handle more) but admittedly struggled with playback on any more than four cameras at once (all 4K cameras). The case has a dent or two in it but is entirely useable. Did intend to keep it for the Mrs for her photography but she’s bought a MacBook instead.

Cost around £2000 new. Asking £700 including postage but collection preferred (from Plymouth). Very open to offers as I’ve struggled to price this up to be honest.

Cheers, Chocky.


SAP leadership changes and S/4HANA were top 2019 SAP stories

SAP ended the decade with one of the most unsettled years in its 47-year existence.

The year began with layoffs, as the German enterprise giant restructured its workforce in March to the tune of 4,400 fewer employees. This was just the first shoe to drop in a year of organizational turmoil. In April, several prominent members of the SAP leadership ranks left for different pastures. The biggest shoe dropped in October with the abrupt departure of longtime CEO Bill McDermott. He was replaced by co-CEOs Jennifer Morgan and Christian Klein.


In 2019, the saga of S/4HANA migrations that has dogged the company for several years continued. The clock is running to get customers to move to the “next-generation ERP” system before 2025, when SAP’s support for legacy ERP systems will end. However, it appears that the migrations to S/4HANA continue to lag.

Here’s a look back at some of the most prominent issues that characterized 2019 for SAP.

SAP leadership changes

The first indication that it might be an interesting year for SAP came in March when the company announced the layoffs of 4,400 employees. The layoffs appeared to be particularly significant for SAP because they included veterans of HANA development and the ABAP programming language which has formed the foundation of HANA. This raised questions about SAP’s commitment to the future of its signature in-memory database platform. The layoffs also hit SAP leadership, as longtime executive Bjoern Goerke was cashiered from his role as CTO.


SAP leadership had a major shift in April as several executives left the company, led by cloud business head Rob Enslin, who later surfaced at Google. SAP stayed in-house by promoting veterans Jennifer Morgan, Adaire Fox-Martin and Juergen Mueller. The SAP leadership changes in April led many observers to question the future direction of the company, but the general feeling was that SAP’s deep bench would keep the company stable.

“The fact is, particularly at SAP, there’s a lot of structural organization underneath any individual leader, which keeps that organization moving forward,” analyst Joshua Greenbaum of Enterprise Applications Consulting said at the time. “Some people are saying this is going to have a huge impact across the board and have a direct impact on customers, but I just don’t see it.”


The deep bench will be put to the test. In October, CEO Bill McDermott, who had led the company for nine years, abruptly resigned. McDermott, who served as the prominent public face of the company, was replaced by Morgan and Klein, who assumed the roles of co-CEO. But the direction under the new SAP leadership remains to be seen.

“It makes sense to pass the baton at this moment, when there’s a transition in place to the cloud internally,” Greenbaum said. “So it’s a good time to freshen up the scene with the two new co-CEOs and it was a good moment for [McDermott] to step out.”

S/4HANA migration debate continues

The questions of if, when and how to migrate to SAP S/4HANA have become perennial issues in the last few years for SAP customers. SAP has been touting the transformative nature of the “next-generation” ERP since it debuted in 2015, but the number of customers actually making the move continues to lag. SAP S/4HANA promises to transform existing business processes and develop new business models by adding intelligence that enables benefits such as real-time decision making and predictive analytics.

The willingness of SAP customers to move to S/4HANA is debatable. In June, a survey from Rimini Street, a firm that provides third-party support services for SAP and Oracle ERP systems, indicated that almost 80% of SAP customers plan to stay on SAP ECC at least until the 2025 deadline. The report showed that SAP customers want to control their own fate rather than be forced into an S/4HANA migration, according to Hari Candadai, Rimini Street global vice president of product marketing.

“SAP customers are now taking control of their roadmaps and are disconnecting, or want to disconnect, from SAP’s planned 2025 end of support to their flagship ECC product line,” Candadai said at the time.

Another survey conducted in December 2018 by ASUG (Americas SAP User Group) showed that more than half of the ASUG members who responded, 56%, said they plan to move to S/4HANA but have not taken concrete steps in that process yet.

“There’s a general sense that they want to do it, they’re planning for it, they’re gathering data and information, and they’re looking for use cases that drive a business case,” Geoff Scott, ASUG CEO, said at the time about the respondents.

Many in the SAP ecosphere say the question of an S/4HANA migration has moved from “if” to “when.” Some, like Chris Carter, CEO of Approyo, a Wisconsin-based SAP partner, believe that S/4HANA migration fears are unfounded and that customers delaying the move are in danger of missing out on the advantages that they will gain from S/4HANA.

“The S/4HANA migration is going to happen and must happen,” Carter said at the time. “It’s not only because of 2025, it’s because of the innovation that SAP is putting into the products, and that partners are putting into the products and the tools.”

For many, however, the lack of a compelling business case is the largest impediment to an S/4HANA migration. In order to make the case, SAP customers need to see an S/4HANA migration as part of a larger effort to transform business processes, according to experts. This can involve developing a strategy around a company’s SAP landscape, including cloud products like Ariba, Concur and SuccessFactors, said Len Riley, commercial advisory practice leader at UpperEdge, a Boston-based IT advisory firm.

Once companies decide that they are going to move to S/4HANA, they will need strong leadership and project management skills to manage the process successfully. Vinci Energies provided a case in point, as the Paris-based energy company completed a nine-month project to deploy S/4HANA. The company has begun to realize the benefits of the advanced but simplified financial model at the S/4HANA core.

“We are currently running about 10 billion euros and more than 700 legal entities,” Dominique Tessaro, Vinci Energies’ CIO, said at the time. “All modules are running on a single SAP S/4HANA instance and client.”

The search for the intelligent enterprise

Related to SAP’s efforts to move customers to S/4HANA is the concept of the SAP intelligent enterprise. Although SAP positions S/4HANA as the digital core of the intelligent enterprise, there continues to be a lack of understanding as to what the term really means.

In SAP’s vision, the intelligent enterprise is more than just an implementation of a technology or technologies, but requires a transformative shift in the organization’s culture and processes. This means bringing together operational data, or “O” data, and experiential data, or “X” data, to enable companies to analyze data in new ways and reach decisions that may not have been possible before.

“The intelligent enterprise is about, ‘How do I run my organization in a way that is capable of responding to the outside world and leverages everything I have available?'” said Geoff Maxwell, program manager at SAP Transformation Office.

Still, defining the SAP intelligent enterprise remains elusive because the circumstances and reason for implementing it are unique to each organization, according to Paul Saunders, senior director and analyst at Gartner.

“What makes one organization an intelligent enterprise is not what makes another an intelligent enterprise,” Saunders said. “That’s where SAP can’t dictate what an intelligent enterprise is [and] needs to play a role in helping organizations become an intelligent enterprise.”

However, organizations will need to step up efforts to get to the intelligent enterprise — whatever their definition — in order to keep up with the pace of change in business and technology, said Steve Niesman, CEO of Itelligence, an SAP reseller that provides implementation and managed services for SAP systems.

When O’s and X’s meet

The SAP intelligent enterprise depends on the integration of O data with X data, which was the main reason SAP spent $8 billion on Qualtrics in 2018. How and why this integration will happen was a main concern of SAP in 2019.

At the Sapphire conference in May, former CEO McDermott and Qualtrics CEO Ryan Smith stressed the need for companies to provide an exceptional customer experience in the era of smartphones and social media.

“Today organizations are disproportionally rewarded when they deliver great experiences,” Smith said. “And are absolutely punished when they don’t.”

However, beyond a few interesting use cases, there was no sign yet that SAP customers were jumping headlong into the X and O marriage. The idea may be valid, and potentially valuable, but much like the SAP intelligent enterprise, organizations seem to be approaching Qualtrics-centered integrations cautiously.

“If SAP can change people’s mindsets, if they can show CIOs that this does more than just the back office, then [companies] can bring in their CEO and show it is an end-to-end platform and worth investing in,” Isaac Sacolick, president and CIO of consultancy StarCIO, said at the time.

After a tumultuous 2019, that’s likely top of SAP’s list for 2020.

Go to Original Article
Author:

Dawn of a Decade: The Top Ten Tech Policy Issues for the 2020s

By Brad Smith and Carol Ann Browne

For the past few years, we’ve shared predictions each December on what we believe will be the top ten technology policy issues for the year ahead. As this year draws to a close, we are looking out a bit further. This January we witness not just the start of a new year, but the dawn of a new decade. It gives us all an opportunity to reflect upon the past ten years and consider what the 2020s may bring.

As we concluded in our book, Tools and Weapons: The Promise and the Peril of the Digital Age, “Technology innovation is not going to slow down. The work to manage it needs to speed up.” Digital technology has gone longer with less regulation than virtually any major technology before it. This dynamic is no longer sustainable, and the tech sector will need to step up and exercise more responsibility while governments catch up by modernizing tech policies. In short, the 2020s will bring sweeping regulatory changes to the world of technology.

Tech is at a crossroads, and to consider why, it helps to start with the changes in technology itself. The 2010s saw four trends intersect, collectively transforming how we work, live and learn. Continuing advances in computational power made more ambitious technical scenarios possible both for devices and servers, while cloud computing made these advances more accessible to the world. Like the invention of the personal computer itself, cloud computing was as important economically as it was technically. The cloud allows organizations of any size to tap into massive computing and storage capacity on demand, paying for the computing they need without the outlay of capital expenses. 

More powerful computers and cloud economics combined to create the third trend, the explosion of digital data. We begin the 2020s with 25 times as much digital data on the planet as when the past decade began.

These three advances collectively made possible a fourth: artificial intelligence, or AI. The 2010s saw breakthroughs in data science and neural networks that put these three advances to work in more powerful AI scenarios. As a result, we enter a new decade with an increasing capability to rely on machines with computer vision, speech recognition, and language translation, all powered by algorithms that recognize patterns within vast quantities of digital data stored in the cloud.

The 2020s will likely see each of these trends continue, with new developments that will further transform the use of technology around the world. Quantum computing offers the potential for breathtaking breakthroughs in computational power, compared to classical or digital computers. While we won’t walk around with quantum computers in our pockets, they offer enormous promise for addressing societal challenges in fields from healthcare to environmental sustainability.

Access to cloud computing will also increase, with more data centers in more countries, sometimes designed for specific types of customers such as governments with sensitive data. The quantity of digital data will continue to explode, now potentially doubling every two years, a pace that is even faster than the 2010s. This will make technology advances in data storage a prerequisite for continuing tech usage, explaining the current focus on new techniques such as optical- and even DNA-based storage.

The next decade will also see continuing advances in connectivity. New 5G technology is not only 20 times faster than 4G. Its innovative approach to managing spectrum means that it can support over a thousand more devices per meter than 4G, all with great precision and little latency. It will make feasible a world of ambient computing, where the Internet of Things, or IoT devices, become part of the embedded fabric of our lives, much as electrical devices do today. And well before we reach the year 2030, we’ll be talking about 6G and making use of thousands of satellites in low earth orbit.

All of this will help usher in a new AI Era that likely will lead to even greater change in the 2020s than the digital advances we witnessed during the past decade. AI will continue to become more powerful, increasingly operating not just in narrow use cases as it does today but connecting insights between disciplines. In a world of deep subject matter domains across the natural and social sciences, this will help advance learning and open the door to new breakthroughs.

In many ways, the AI Era is creating a world full of opportunities. In each technological era, a single foundational technology paved the way for a host of inventions that followed. For example, the combustion engine reshaped the first half of the 20th century. It made it possible for people to invent not just cars but trucks, tractors, airplanes, tanks, and submarines. Virtually every aspect of civilian economies and national security issues changed as a result.

This new AI Era likely will define not just one decade but the next three. Just as the impact of the combustion engine took four decades to unfold, AI will likely continue to reshape our world in profound ways between now and the year 2050. It has already created a new era of tech intensity, in which technology is reshaping every company and organization and becoming embedded in the fabric of every aspect of society and our lives.

Change of this magnitude is never easy. It’s why we live in both an era of opportunity and an age of anxiety. The indirect impacts of technology are moving some people and communities forward while leaving others behind. The populism and nationalism of our time have their roots in the enormous global and societal changes that technology has unleashed. And the rising economic power of large companies – perhaps especially those that are both tech platforms and content aggregators – has brought renewed focus to antitrust laws.

This is the backdrop for the top ten technology issues of the 2020s. The changes will be immense. The issues will be huge. And the stakes could hardly be higher. As a result, the need for informed discussion has rarely been greater. We hope the assessments that follow help you make up your own mind about the future we need collectively to help shape.

1. Sustainability – Tech’s role in the race to address climate change

A stream of recent scientific research on climate change makes clear that the planet is facing a tipping point. These dire predictions will catapult sustainability into one of the dominant global policy issues for the next decade, including for the tech sector. We see this urgency reflected already in the rapidly evolving views of our customers and employees, as well as in many electorates around the world. In countries where governments are moving more slowly on climate issues, we’re likely to see businesses and other institutions fill the gap. And over the coming decade, governments that aren’t prioritizing sustainability will be compelled to catch up.

For the tech sector, the sustainability issue will cut both ways. First, it will increase pressure on companies to make the use of technology more sustainable. With data centers that power the cloud ranking among the world’s largest users of electricity, Microsoft and other companies will need to move even more quickly than in recent years to use more and better renewable energy, while increasing work to improve electrical efficiency.

But this is just the tip of the iceberg. Far bigger than technology’s electrical consumption is “Scope 3” emissions – the indirect emissions of carbon in a company’s value chain for everything from the manufacturing of new devices to the production of concrete to build new buildings. While this is true for every sector of the economy, it’s an area where the tech sector will likely lead in part because it can. And should. With some of the world’s biggest income statements and healthiest balance sheets, look to Microsoft and other tech companies to invest and innovate, hopefully using the spirit of competition to bring out the best in each other.

This points to the other and more positive side of the tech equation for sustainability. As the world takes more aggressive steps to address the environment, digital data and technology will prove to be among the next decade’s most valuable tools. While carbon issues currently draw the most attention, climate issues have already become multifaceted. We need urgent and concerted action to address water, waste, biodiversity, and our ecosystems. Regardless of the issue or ultimate technology, insights and innovations will be fueled by data science and artificial intelligence. When quantum computing comes online, this will become even more promising.

By the middle or end of the next decade, the sustainability issue may have another impact that we haven’t yet seen and aren’t yet considering: its effect on the world’s geopolitics. As the new decade begins, many governments are turning inward and nations are pulling apart. But sustainability is an issue that can’t be solved by any country alone. The world must unite to address environmental issues that know no boundaries. We all share a small planet, and the need to preserve humanity’s ability to live on it will force us to think and act differently across borders.

2. Defending Democracy – International threats and internal challenges

Early each New Year, we look forward to the release of the Economist Intelligence Unit’s annual Democracy Index. This past year’s report updated the data on the world’s 75 nations the Economist ranks as democracies. Collectively these countries account for almost half of the world’s population. Interestingly, they also account for 95 percent of Microsoft’s revenue. Perhaps more than any other company, Microsoft is the technology provider for the governments, businesses, and non-profits that support the world’s democracies. This gives us both an important vantage point on the state of democracy and a keen interest in democracy’s health.

Looking back at the past decade, the Economist’s data shows that the health of the world’s democracies peaked in the middle of the decade and has since declined slightly and stagnated. Technology-fueled change almost certainly has contributed in part to this trend.

As we enter the 2020s, defending democracy more than ever requires a focus on digital tech. The past decade saw nation-states weaponize code and launch cyber-attacks against the civilian infrastructure of our societies. This included the hacking of a U.S. presidential campaign in 2016, a tactic Microsoft’s Threat Intelligence Center has since seen repeated in numerous other countries. It was followed by the WannaCry and NotPetya attacks in 2017, which unleashed damage around the world in ways that were unimaginable when the decade began.

The defense of democracy now requires determined efforts to protect political campaigns and governments from the hacking and leaking of their emails. Even more important, it requires digital protection of voter rolls and elections themselves. And most broadly, it requires protection against disinformation campaigns that have exploited the basic characteristics of social media platforms.

Each of these priorities now involves new steps by tech companies, as well as new strategies for collaboration with and among governments. Microsoft is one of several industry leaders putting energy and resources into this area. Our Defending Democracy Program includes an AccountGuard program that protects candidates in 26 democratic nations, an ElectionGuard program to safeguard voting, and support for the NewsGuard initiative to address disinformation. As we look to the 2020s, we will need continued innovation to address the likely evolution of digital threats themselves.

The world will also need to keep working to solidify existing norms and add new legal rules to protect against cybersecurity threats. Recent years have seen more than 100 leading tech companies come together in a Tech Accord to advance security in new ways, while more than 75 nations and more than 1,000 multi-stakeholder signatories have now pledged their support for the Paris Call for Trust and Security in Cyberspace. The 2020s hopefully will see important advances at the United Nations, support from global groups such as the World Economic Forum, and by 2030, work on a global compact to make a Digital Geneva Convention a reality.

But the digital threats to democracy are not confined to attacks from other nations. As the new decade dawns, a new issue is emerging with potentially profound and thorny implications for the world’s democracies. Increasingly, government officials in democratic nations are asking whether the algorithms that power social media sites are undermining the political health of their citizenries.

It’s difficult to sustain a democracy if a population fragments into different “tribes” that are exposed to entirely different narratives and sources of information. While diverse opinions are older than democracy itself, one of democracy’s characteristics has traditionally involved broad exposure to a common set of facts and information. But over the past decade, behavior-based targeting and monetization on digital platforms have arguably created more information silos than democracy has experienced in the past. This creates a new question for a new decade. Namely, will tech companies and democratic governments alike need new approaches to address a new weakness for the world’s democracies?

3. Journalism – Technology needs to give the news business a boost

While we look to improve the health of the world’s democracies, we need to also monitor the well-being of another system playing a vital role in free societies across the globe: the independent press. For centuries, journalists have served as watchdogs for democracies, safeguarding political systems by monitoring and challenging public affairs and government institutions. As the Victorian-era historian Thomas Carlyle wrote, “There were Three Estates in Parliament; but, in the Reporters’ Gallery yonder, there sat a Fourth Estate more important far than they all.”

It’s clear that a healthy democracy requires healthy journalism, but newspapers are ailing – and many are on life support. The decline of quality journalism is not breaking news. It has been underway since the start of the 20th century, first with the advent of radio and later when television overtook the airwaves. By the turn of this century, the internet further eroded the news business as dotcoms like Craigslist disrupted advertising revenue, news aggregators lured away readers, and search engines and social media giants devoured both. While a number of bigger papers weathered the storm, most small local outlets were hard hit. According to data from the U.S. Bureau of Labor Statistics’ Occupational Employment Statistics, in 2018, 37,900 Americans were employed in newsrooms, down 14 percent from 2015 and down 47 percent from 2004.

The world will be hard-pressed to strengthen its democracies if we can’t rejuvenate quality journalism. In the decade ahead, the business model for journalism will need to evolve and become healthier, which hopefully will involve partnerships that create new revenue streams, including through search and online ads. And as the world experiments with business models, we can’t forget to learn from and build on the public broadcasters that have endured through the years, like the BBC in the United Kingdom and NPR in the United States.

Helping journalism recover will also include protecting journalists, as we’ve learned through Microsoft’s work with the Clooney Foundation for Justice. Around the world, violence against journalists is on the rise, especially for those reporters covering conflict, human rights abuses, and corruption. According to the Committee to Protect Journalists, 25 journalists were killed, 250 were imprisoned, and 64 went missing in 2019. In the coming decade, look for digital technology like AI to play an important role in monitoring the safety of journalists, spotting threats, and helping ensure justice in courts of law.

And lastly, it’s imperative that we use technology to protect the integrity of journalism. As the new decade begins, technologists warn that manipulated videos are becoming purveyors of disinformation. These “deepfakes” do more than deceive the public; they call all journalism into question. AI is used to create this doctored media, but it will also be used to detect deepfakes and verify trusted, quality content. Look for the tech sector to partner with the news media and academia to create new tools and advocate for regulation to combat internet fakery and build trust in the authentic, quality journalism that underpins democracies around the world.

4. Privacy in an AI Era – From the second wave to the third

In the 2010s, privacy concerns exploded around the world. The decade’s two biggest privacy controversies redefined big tech’s relationships with government. In 2013, the Snowden disclosures raised the world’s ire about the U.S. Government’s access to data about people. The tech sector, Microsoft included, responded by expanding encryption protection and pushing back on our own government, including with litigation. Five years later, in 2018, the guns turned back on the tech sector after the Cambridge Analytica data scandal engulfed Facebook and digital privacy again became a top-level political issue in much of the world.

Along the way, privacy laws continued to spread around the world. The decade saw 49 new countries adopt broad privacy laws, adding to the 86 nations that protected privacy a decade ago. While the United States is not yet on that list, 2018 saw stronger privacy protections jump from Europe across the Atlantic and move all the way to the Pacific, as California’s legislature passed a new law that paves the way for action in Washington, D.C.

But it wasn’t just the geographic spread of privacy laws that marked the decade. With policy innovation centered in Brussels, the European Union effectively embarked on a second wave of privacy protection. The first wave was characterized by laws that required that web sites give consumers “notice and consent” rights before using their data. Europe’s General Data Protection Regulation, or GDPR, represented a second wave. It gives consumers “access and control” over their data, empowering them to review their data online and edit, move, or delete it under a variety of circumstances.

Both these waves empowered consumers – but also placed a burden on them to manage their data. With the volume of data mushrooming, the 2020s likely will see a third wave of privacy protection with a different emphasis. Rather than simply empowering consumers, we’re likely to see more intensive rules that regulate how businesses can use data in the first place. These rules will reach data brokers that remain unregulated in some key markets today, bring a sharper focus on sensitive technologies like facial recognition, and add protections against the use of data to adversely impact vulnerable populations. We’re also likely to see more connections between privacy rules and laws in other fields, including competition law.

In short, fasten your seat belt. The coming decade will see more twists and turns for privacy issues.

5. Data and National Sovereignty – Economics meet geopolitics

When the combustion engine became the most important invention a century ago, the oil that fueled it became the world’s most important resource. With AI emerging as the most important technology for the next three decades, we can expect the data that fuels it to quickly become the 21st century’s most important resource. This quest to accumulate data is creating economic and geopolitical issues for the world.

As the 2020s commence, data economics are breeding a new generation of public policy issues. Part of this stems from the returns to scale that result from the use of data. While there are finite limits to the amount of gasoline that can be poured into the tank of a car, the desire for more data to develop a better AI model is infinite. AI developers know that more data will create better AI. Better AI will lead to even more usage of an AI system. And this in turn will create yet more data that will enable the system to improve yet again. There’s a risk that those with the most data, namely the first movers and the largest companies and countries, will crowd out others’ opportunity for success.

This helps explain the critical economic issues that are already emerging. And the geopolitical dynamics are no less vital.

Two of the biggest forces of the past decade – digital technology and geopolitics – pulled the world in opposite directions. Digital technology transmitted data across borders and connected people around the world. As technology brought the world together, geopolitical dynamics pulled countries apart and kindled tensions on issues from trade to immigration. This tug-of-war explains one reason a tech sector that started the decade as one of the most popular industries ended it under scrutiny and with mounting criticism.

This tension has created a new focus that is wrapped into a term that was seldom used just a few years ago – “digital sovereignty.” The current epicenter for this issue is Western Europe, especially Germany and France. With the ever-expanding ubiquity of digital technology developed outside of Europe and the potential international data flows that can result, the protection and control of national data is a new and complicated priority, with important implications for evolving concepts of national sovereignty.

The arrival of the AI Era requires that governments think anew about balancing some critical challenges. They need to continue to benefit from the world’s most advanced technologies and move a swelling amount of data across borders to support commerce in goods and services. But they want to do this in a manner that protects and respects national interests and values. From a national security perspective, this may lead to new rules that require that a nation’s public sector data stays within its borders unless the government provides explicit permission that it can move somewhere else. From an economic perspective, it may mean combining leading international technologies with incentives for local tech development and effective sovereignty protections.

All this has also created the need for open data initiatives to level the playing field. Part of this requires governments to open up public data, providing smaller players with access to larger data sets. Another involves initiatives that enable smaller companies and organizations to share – or “federate” – their data without surrendering ownership or control of the data they share. This in turn requires new licensing approaches, privacy protections, and technology platforms and tools. It also requires intellectual property policies, especially in the copyright space, that facilitate this work.

During the first two decades of this century, open source software development techniques transformed the economics of coding. During the next two decades, we’ll need open data initiatives that do the same thing for data and AI.

The past year has seen some of these concepts evolve from political theory to government proposals. This past October, the German Government proposed a project called GAIA-X to protect the country’s digital sovereignty. A month later, discussions advanced to propose a common approach that would bring together Germany and France.

It’s too early to know precisely how all these initiatives will evolve. For almost four centuries, the world has lived under a “Westphalian System” defined by territorial borders controlled by sovereign states. The technology advances of the past decade have placed new stress on this system. Every aspect of the international economy now depends on data that crosses borders unseen and at the speed of light. In an AI-driven economy and data-dependent world, the movement of data is raising increasingly important questions for sovereignty in a Westphalian world. The next decade will decide how this balance is struck.

6. Digital Safety – The need to constantly battle evolving threats

The 2010s began with optimism that new technology would advance online safety and better protect children from exploitation. It ended with a year during which terrorists and criminals used even newer technology to harm innocent children and adults in ways that seemed almost unimaginable when the decade began. While the tech sector and governments have moved to respond, the decade underscores the constant war that must be waged to advance digital safety.

Optimism marked the decade’s start in part because of PhotoDNA, developed in 2009 by Microsoft and Hany Farid, then a professor at Dartmouth College. The industry adopted it to identify and compare online photos to known illegal images of child exploitation. Working with key non-profit and law enforcement groups, the technology offered real hope for turning the tide against the horrific exploitation of children. And spurred on by the British Government and others, the tech sector took additional steps globally to address images of child pornography in search results and on other services.
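PhotoDNA’s hashing algorithm itself is proprietary, but the general pattern it popularized is simple to describe: compute a robust (perceptual) hash of an uploaded image and compare it against a curated list of hashes of previously identified illegal images supplied by trusted organizations. The minimal Python sketch below illustrates only that matching pattern, not PhotoDNA itself; it uses the open-source imagehash package as a stand-in for the proprietary hash, and the hash list, distance threshold, and file names are hypothetical placeholders.

from PIL import Image
import imagehash

# Hypothetical set of robust hashes of previously identified images,
# as would be supplied by a trusted clearinghouse (placeholder values).
KNOWN_HASHES = {imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")}
MAX_DISTANCE = 8  # Hamming-distance threshold; real deployments tune this carefully

def matches_known_image(path: str) -> bool:
    """Return True if the image's perceptual hash is close to a known hash."""
    candidate = imagehash.phash(Image.open(path))  # robust hash, not a cryptographic one
    # Subtracting two hashes yields their Hamming distance
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)

# Example use: screen an uploaded file before it is published.
# if matches_known_image("upload.jpg"): quarantine_and_report()  # hypothetical handler

Because the comparison uses a perceptual hash rather than an exact checksum, minor edits such as resizing or recompression still produce a near match, which is what makes this approach useful at platform scale.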

Yet as the New York Times reported in late 2019, criminals have subsequently used advancing video and livestreaming technologies, as well as new approaches to file-sharing and encrypted messaging, to exploit children even more horrifically. As a result, political pressure is again pushing industry to do more to catch up. It’s a powerful lesson of the need for constant vigilance.

Meanwhile, online safety threats have become more multifaceted. One of the decade’s tragic days came on March 15, 2019, in Christchurch, New Zealand. A terrorist and white supremacist used livestreaming on the internet as the stage for mass shootings at two mosques, killing 51 innocent civilians.

Led by Prime Minister Jacinda Ardern, the New Zealand Government spearheaded a global multi-stakeholder effort to create the Christchurch Call. It has brought governments and tech companies together to share information, launch a crisis incident protocol, and take other steps to reduce the possibility of others using the internet in a similar way in the future.

All of this has also led to new debate about the continued virtues of exempting social media platforms from legal liability for the content on their sites. Typified by section 230 of the United States’ Communications Decency Act, current laws shield these tech platforms from responsibilities faced by more traditional publishers. As we look to the 2020s, it seems hard to believe that this approach will survive the next decade without change.

7. Internet Inequality – A world of haves and have-nots

In 2010, fewer than a third of the world’s population had access to the internet. As this decade concludes, the number has climbed to more than half. This represents real progress. But much of the world still lacks internet access. And high-speed broadband access lags much farther behind, especially in rural areas.

In an AI Era, access to the internet and broadband have become indispensable for economic success. With public discussion increasingly focusing on economic inequality, we need to recognize that the wealth disparity in part is rooted in internet inequality.

There are many reasons to be optimistic that there will be faster progress in the decade ahead. But progress will require new approaches and not just more money.

This starts with working with better data about who currently has internet access and at what speeds. Imagine trying to restore electric power to homes after a big storm without accurate data on where the power is out. Yet that’s the fundamental reality in a country such as the United States when we discuss closing the broadband gap. The country spends billions of dollars a year without the data needed to invest it effectively. And this data gap is by no means confined to North America.

Better data can make its best contribution if it’s coupled with new and better technology. The next decade will see a world of new communications technologies, from 5G (and ultimately 6G) to thousands of low Earth orbit satellites and terrestrial technologies like TV White Spaces. All of this is good news. But it will be essential to focus on where each technology can best be used, because there is no such thing as a one-size-fits-all approach for communications technology. For example, 5G will transform the world, but its signals travel shorter distances, making it less than optimal for many scenarios in rural areas.

With better data and new technology, it’s possible to bring high speed internet to 90 percent of the global population by 2030. This may sound ambitious, but with better data and sounder investments, it’s achievable. Internet equality calls for ambition on this level.

8. A Tech Cold War – Will we see a digital iron curtain down the Pacific?

The new decade begins with a tech question that wasn’t on the world’s radar ten years ago. Are we witnessing the start of a “tech cold war” between the United States and China? While it’s too early to know for certain, it’s apparent that recent years have been moving in this direction. And the 2020s will provide a definitive answer.

The 2010s saw China impose more constraints on technology and information access to its local market. This built on the Great Chinese Firewall constructed a decade before, with more active filtering of foreign content and more constraints on local technology licenses. In 2016, the Standing Committee of the National People’s Congress adopted a broad Cyber Security Law to advance data localization and enable the government to take “all necessary” steps to protect China’s sovereignty, including through a requirement to make key network infrastructure and information systems “secure and controllable.” Combined with other measures to manage digital technology that have raised human rights concerns, these policies have effectively created a local internet and tech ecosystem that is distinct from the rest of the world.

This Chinese tech ecosystem also grew increasingly competitive in the latter half of the decade. The pace and quality of innovation have been impressive. With companies such as Huawei, Alibaba, and Tencent gaining worldwide prominence, Chinese technology is being adopted more widely around the globe even as China’s own market remains less open – and while that technology remains subject to Chinese cybersecurity policies.

As the 2010s close, the United States is responding with new efforts to contain the spread of Chinese technology. It’s not entirely different from the American efforts to contain Russian ideology and influence in the Cold War that began seven decades ago. Powered in part by American efforts to dissuade other governments from adopting 5G equipment from China, tensions heightened in 2019 when the U.S. Department of Commerce banned American tech companies from selling components to Huawei for its products.

In both Washington and Beijing, officials are entering the new decade preparing for these tensions around technology to harden. The implications are huge. Clearly, the best time to think about a Tech Cold War is before it begins. The Cold War between the United States and Soviet Union lasted more than four decades and impacted virtually every country on the planet. As we look ahead to the 2020s, the strategic questions for each country and the implications for the world are no smaller.

9. Ethics for Artificial Intelligence – Humanity needs to govern machines

For a world long accustomed to watching robots wreak havoc on the silver screen, the last few years have brought advances in artificial intelligence that still fall far short of the capabilities seen in science fiction, but are well beyond what had seemed possible when the decade began. While typically still narrow in scope, AI enters a new decade with an increasing ability to match human perception and cognition in vision, speech recognition, language translation, and machine learning based on discerning patterns in data.

In a decade that increasingly gave rise to anxiety over the impact of technology, it’s not surprising that these advances unleashed a wave of discussions focused on AI and its implications for ethics and human rights. If we’re going to empower machines to make decisions, how do we want these decisions to be made? This is a defining question not just for the decade ahead, but for all of us who are alive today. As the first generation of people to give machines the power to make decisions, we have a responsibility to get the balance right. If we fail, the generations that follow us are likely to pay a steep price.

The good news is that companies, governments, and civil society groups around the world have embraced the need to develop ethical and human rights principles for artificial intelligence. We published a set of six ethical principles at Microsoft in January 2018, and we’ve been tracking the trends. What we’re seeing is a global movement towards an increasingly common set of principles. It’s encouraging.

As we look to the 2020s, we’re likely to see at least two new trends. The first is the shift from the articulation of principles to the operationalization of ethics. In other words, it’s not sufficient to simply state what principles an organization wants to apply to its use of AI. It needs to implement this in more precise standards backed up by governance models, engineering requirements, and training, monitoring, and ultimately compliance. At Microsoft we published our first Responsible AI Standard in late 2019, spelling out many of these new pieces. No doubt we’ll improve upon it during the next few years, as we learn both from our own experience and the work of many others who are moving in a similar direction.

The second trend involves specific issues that are defining where “the rubber meets the road” for ethical and human rights concerns. The first such issue has involved facial recognition, which arguably has become a global policy issue more rapidly than any previous digital tech issue. Similar questions are being discussed about the use of AI for lethal autonomous weapons. And conversations are starting to focus on ethics and the use of algorithms more generally. This is just a beginning. By 2030, there will likely be enough issues to fill the table of contents for a lengthy book. If there’s one common theme that has emerged in the initial issues, it’s the need to bring together people from different countries, intellectual disciplines, and economic and government sectors to develop a more common vocabulary. It’s the only way people can communicate effectively with each other as we work to develop common and effective ethical practices for machines.

10. Jobs and Income Inequality in an AI Economy – How will the world manage a disruptive decade?

It’s clear that the 2020s will bring continued economic disruption as AI enables machines to replace many tasks and jobs that are currently performed by people. At the same time, AI will create new jobs, companies, and even industries that don’t exist today. As we’ve noted before, there is a lot to learn from the global economy’s transition from a horse-powered to an automobile-driven economy a century ago. Like foundational technologies before it, AI will likely create something like an economic rollercoaster, with an uneven mix of prosperity and distress in particular years and places.

This will create many big issues, and two are already apparent. The first is the need to equip people with the new skills needed to succeed in an AI Economy. During the 2010s, technology drove globalization and created more economic opportunity for people in many developing economies around the world, perhaps especially in India and China. The resulting competition for jobs led not only to political pressure to turn inward in some developed nations, but also to a recognition that economic success in the future requires more investments in education. As data published by LinkedIn showed, in a country like the United States there emerged broader interest in Europe’s approach to apprenticeships and technical skills, and in the pursuit of a range of post-secondary credentials. Given the importance of this trend, it’s not surprising that there was also broader political interest in addressing the educational costs for individuals pursuing these skills.

There’s every reason to believe that these trends will accelerate further in the decade ahead. If anything, expanding AI adoption will lead to additional economic ripple effects. We’re likely to see employers and governments alike invest in expanded learning opportunities. It has become a prerequisite for keeping pace.

In many ways, however, this marks the beginning rather than the conclusion of the economic debates that lie ahead. Four decades of technological change have already contributed to mounting income inequality. It’s a phenomenon that now impacts the politics of many communities and countries, with issues that range from affordable housing to tax rates, education and healthcare investments, and income redistribution.

All this raises some of the biggest political questions for the 2020s. It reminds us that history’s defining dates don’t always coincide with the start of a new decade. For example, one of the most important dates in American political history came on September 14, 1901. It was the day that Theodore Roosevelt succeeded to the United States Presidency. More than a century later, we can see that it represented the end of more than 30 years that combined advancing technology with regulatory restraint, which led to record levels of both prosperity and inequality. In important respects, it was the first day of the Progressive Era in the United States. Technology continued to progress, but in a new political age that included stronger business regulation, product liability laws, antitrust enforcement, public investment, and an income tax.

As we enter the 2020s, political leaders in many countries are debating whether to embark on a similar shift. No one has a crystal ball. But increasingly it seems like the next decade will usher in not only a new AI Era and AI Economy, but new approaches to politics and policy. As we’ve noted before, there’s a saying that “history doesn’t repeat itself, but it often rhymes.” From our vantage point, there seems a good chance that the next decade for technology and policy will involve some historical poetry.

Author: Microsoft News Center