Boston-based SevOne Inc. announced this week a partnership with Versa Networks to extend its software-defined WAN monitoring capabilities to Versa’s Secure Cloud IP Platform.
With SevOne’s SD-WAN monitoring update, Versa SD-WAN customers will gain real-time and historical visibility into their networks, so they can keep track of their cloud-based network services and applications, according to a SevOne statement.
SevOne’s SD-WAN monitoring service helps enterprises and managed service providers reduce or eliminate network management and monitoring risks when deploying SD-WANs in their network infrastructures. SevOne collects data from Versa SD-WAN controllers and devices, so operations and engineering teams can do the following:
understand and report on site-to-site performance;
visualize transmitted traffic’s assigned path and class and ensure proper policy configuration;
use custom dashboards and key performance indicators that align with specific infrastructure-related organizational roles;
follow end-to-end application performance during service transitions.
HPE reveals joint cloud services platform with Microsoft
Hewlett Packard Enterprise (HPE) announced this week five hybrid cloud services it developed with Microsoft to provide SMBs with essential hybrid cloud tools and simpler setup. HPE also unveiled two new servers built specifically for smaller enterprises.
HPE’s new hybrid cloud offerings are Hybrid File and Backup, Hybrid Web Hosting, Hybrid Virtualization, Hybrid Development and Test, and Hybrid Database. The services aim to improve enterprise productivity and reduce the need for in-house expertise by providing simplified deployment capabilities, the company said in a statement. Each offering works with the new HPE servers: ProLiant DL20 and ML30 Gen10.
The HPE ProLiant DL20 Gen10 server is designed for space-constrained environments. It supports a range of Intel processor choices and storage capacity options to provide better flexibility and performance, HPE said. The HPE ProLiant ML30 Gen10 Server supports core business applications and is well-suited to meet specific application or performance goals.
HPE also introduced HPE Rapid Setup Software, which includes a guided installation process to save customers time during system installation and configuration.
HPE unveiled its new offerings during the SpiceWorld 2018 conference in Austin, Texas.
Aerohive brings cloud management to A3 service
Aerohive Networks, a cloud networking provider, has updated its A3 secure access management services with cloud management.
The A3 service was originally released in May 2018 as an on-premises service, but it now integrates with a microservices-based cloud management architecture that uses containers orchestrated by Kubernetes, Aerohive said in a statement.
A3 offers cloud-based monitoring for all customer sites and allows full lifecycle management of access network infrastructure components, such as access points, routers and switches. The update also includes secure access management and network access control through a single management console. On-site enforcement nodes take on localized tasks, such as device onboarding and access control enforcement, Aerohive said.
Updated features also include an automated and GUI-based configuration process, which enables a complete A3 platform cluster setup in six clicks, the company said. A3 cloud management supports integration with any network, and it is available now at no additional charge to A3 customers.
Decided to sell the HTPC as I want to go other routes. Mint condition, with boxes and warranty.
This little PC is very powerful and can easily be used as a main PC, and it’s obviously perfect as an HTPC. All drivers/updates/BIOS are installed. In case you don’t know, the G4600 is basically 99% of an i3-7100. There are 2 x 2.5″ HDD/SSD slots and 1 x M.2 2280 slot, but the M.2 drive has to be PCIe.
Asrock Deskmini with Intel Pentium G4600 and Intel stock cooler
2 x 4 GB 2133 MHz Samsung DDR4
120 GB Kingston A400 SSD
Intel AC Wi-Fi card with antennas
Windows 10 Pro
Price and currency: 210
Delivery: Delivery cost is not included within my country
Payment method: BT/PPG/Cash on collection
Location: London
Advertised elsewhere?: Advertised elsewhere
Prefer goods collected?: I prefer the goods to be collected
Google today updated its cloud object storage with new dual-regional options and higher availability service-level agreements. Google claims the new options will further differentiate its offerings from those of market leaders Amazon Web Services and Microsoft Azure.
The dual-regional Google Cloud Storage option gives customers greater control over where they store their data, while providing geographic redundancy in the event of an outage. The previous default for a Google Cloud Storage object was a larger multi-regional bucket. Google also offers customers the choice of a single-regional bucket.
Dual-region Google Cloud Storage option locations
The dual-regional Google Cloud Storage option is currently available in beta-test mode for U.S. data centers in Iowa and South Carolina, and European Union (EU)-based data centers in the Netherlands and Finland. Dominic Preuss, a Google director of product management, said the public cloud provider typically beta tests new capabilities for about two months. He said Google plans to add more dual-region pairs based on customer demand.
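Creating a bucket in one of these pairs is a one-line operation with the gsutil CLI. A sketch follows; the location codes nam4 (Iowa + South Carolina) and eur4 (Netherlands + Finland) and the bucket name are assumptions, as the article does not name them:

```shell
# Create a dual-regional bucket in the U.S. pair (location code and bucket name are assumptions)
gsutil mb -l nam4 gs://my-example-bucket
```
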
“If you wanted to do this on another public cloud vendor, you would have to put your data into a region, set up replication, and pay network traffic to replicate that data over,” Preuss said. “You wouldn’t be able to predict what the cost was going to be unless you understood all the network connectivity.”
Preuss said Google dual-region customers pay a single price because they do not need to set up replication and pay for storage in two locations plus network traffic fees. He expects the new dual-regional Google Cloud Storage option to be especially helpful with analytics and big data workloads. Those workloads run faster when data is close to compute resources. The company said the dual-regional option should also help customers that use a content delivery network.
Preuss said Twitter is already using the new dual-regional option as it moves more than 300 PB of Hadoop data into Google Cloud Storage. Twitter wanted to ensure the data is geographically redundant from an availability standpoint and also needed to run data processing jobs in the same region where the data is stored, Preuss said. He said Google previously stored copies of the data in multiple regions without telling customers the specific regions where the data was kept.
Availability SLA raised to 99.9%
Google Cloud Storage today also added a higher availability service-level agreement (SLA) for the Nearline and Coldline cloud storage in multi-regional locations in the U.S., EU or Asia. Google is boosting the SLA from 99.0% to 99.9% for its Nearline and Coldline storage. Nearline is for customers who access data less than once a month, and Coldline is for those expecting to access data less than once a year.
“Two nines of availability is not a very high level if you’re using mission-critical data. Getting the three nines of availability is significant for customers that rely on that data to run their businesses,” Preuss said.
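A back-of-envelope calculation shows why the extra nine matters: it cuts the downtime an SLA permits by a factor of ten.

```shell
# Yearly downtime allowed by a 99.0% vs. a 99.9% availability SLA
awk 'BEGIN {
  minutes = 365 * 24 * 60                       # minutes in a year
  printf "99.0%% SLA: up to %.1f hours of downtime/year\n", (1 - 0.990) * minutes / 60
  printf "99.9%% SLA: up to %.1f hours of downtime/year\n", (1 - 0.999) * minutes / 60
}'
# 99.0% allows about 87.6 hours/year; 99.9% allows about 8.8 hours/year
```
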
He said the company is trying to simplify Google Cloud Storage options by asking customers to choose their desired level of redundancy (multi-regional, dual-regional or regional) and rate of access (Standard, Nearline or Coldline). That will enable them to better predict the cost of using cloud storage, he said.
Also today, Google launched a new C++ client library for use with its cloud storage. Preuss said Google was responding to customer demand, especially from the gaming and oil and gas industries that run large-scale jobs dependent on the C++ programming language. Google also supports Go, Java, .Net, Node.js, PHP, Python, and Ruby.
Caringo has enhanced performance and added cloud tiering upgrades to its Swarm object storage, and introduced a single-server appliance that allows organizations to start with smaller implementations.
The new Swarm Single Server option gives customers an entry point of 96 TB of raw capacity. It uses hardware from vendor 45 Drives, a division of Protocase. Like most object storage, Caringo Swarm software is designed to scale out to hundreds of petabytes across clusters of servers. The vendor has sold preconfigured M, S and E Series servers on Dell hardware since 2016, with a minimum configuration of four servers at a starting raw capacity of 288 TB.
The Caringo Swarm 10 release includes the SwarmNFS 2.1 file-to-object converter with improved streaming performance, and adds policy-based data tiering to multiple public clouds in the FileFly data migration component.
Caringo CEO Tony Barbagallo said the idea for the Swarm Single Server came from feedback from small movie studios and post-production houses that found it too complex to set up an object storage cluster. He said Caringo had to optimize Swarm software to get it to work on a single appliance.
“In any typical object storage environment, you have to make that initial investment in three or four physical servers to get started,” Barbagallo said. “We broke through that barrier. We had to make enhancements and optimizations in the base software to allow these things to happen.”
Caringo containerized its Swarm storage software operating system to run on a 15-drive-bay single box. The appliance includes 10 HDDs for storage, two SSDs for the Elasticsearch indexing database and two SSDs to boot the virtual machines, leaving one spare bay if needed. The Swarm Single Server uses a 4:2 erasure coding configuration with four HDDs for data and two for parity, and spreads the bits across the 10 storage drives. Customers have the option to change the erasure coding or replication policies.
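The 4:2 scheme determines how much of the raw capacity is usable: every four data segments carry two parity segments. A rough calculation, assuming all 96 TB of raw capacity is erasure-coded at 4:2:

```shell
# Usable capacity of the 96 TB Swarm Single Server under 4:2 erasure coding
awk 'BEGIN {
  data = 4; parity = 2; raw_tb = 96
  printf "usable: %.0f TB, raw overhead: %.2fx, survives %d simultaneous drive failures\n",
         raw_tb * data / (data + parity), (data + parity) / data, parity
}'
# usable: 64 TB, raw overhead: 1.50x, survives 2 simultaneous drive failures
```
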
Scott Sinclair, a senior analyst at Enterprise Strategy Group, said the single server can shorten both deployment times and the sales cycle.
“Object storage traditionally has had long sales cycles because it tends to involve large deployments with multiple servers in clusters,” Sinclair said. “But we’re increasingly seeing organizations that want the scale and metadata capabilities of object storage in a smaller form factor that they can deploy very quickly. Time to provisioning is becoming incredibly important. Companies can’t wait six months to stand things up anymore. Part of this mindset comes from the cloud, where they can go and provision infrastructure very quickly. And companies like Caringo that were predominantly on premises have to sit down and respond.”
Marc Staimer, president of Dragon Slayer Consulting, said most organizations don’t need the petabyte-range scale that object storage promises.
“The vast majority of people don’t need huge amounts of storage,” Staimer said. “If you look at the thousands and thousands of organizations out there, there’s a reason a lot of them are going to the cloud. It’s not because they need huge amounts of storage. It’s because they want convenience.”
Barbagallo said the Swarm Single Server might also appeal to larger production companies that store data on a project basis, or serve as a backup appliance for small enterprises outside of media and entertainment. He said the media asset manager software that many companies use now supports the Amazon S3 API for back-end storage, so users could swap out NAS boxes for Caringo object storage.
The Caringo Swarm Single Server lists at $49,995, with Swarm software components licensed separately.
FileFly 3.0 features multi-cloud support
Caringo also upgraded its FileFly migration software. FileFly 3.0 is the first version of the product that can run independently of Swarm. FileFly does the file-to-object conversion and enables users to tier data from Windows and NetApp file servers to Amazon Web Services (AWS), Microsoft Azure and Google Cloud.
One drawback to the FileFly tiering is that users need to rehydrate the data in the original Windows or NetApp storage in order to use it, according to Staimer. He likened the Caringo approach to hierarchical storage management products.
Adrian Herrera, vice president of marketing at Caringo, said FileFly’s data tiering to public cloud storage is intended for remote sites or disaster recovery. He said customers can also tier data to Swarm, where it is directly viewable and accessible.
Herrera said Caringo noticed that large organizations, especially in content-driven industries, want to bring their data back from public-cloud to on-premises deployments or build out their own storage services internally.
“There are additional costs for organizations that continue to access data, and that makes the cloud expensive,” Herrera said. “We’re also seeing a lot of security and copyright concerns. Especially in the media and entertainment space, organizations really need to keep that data in-house.”
Caringo Swarm 10 performance
Caringo cited a U.K.-based research institution’s proof of concept to explore the possibility of replacing a parallel file system. Barbagallo said the institution achieved 35 GB per second read throughput and 12.5 GB per second write throughput with the Swarm 10 object storage. He said the new SwarmNFS 2.1 software enabled file data to be ingested into Swarm object storage at a rate of 1.6 GB per second sustained, with no front-end caching.
“With all object stores, the read and write speeds are going to be dependent on the particular hardware that it’s running on,” Barbagallo said. He said the main challenge for the research institution was removing the network bottleneck through the use of a 100 Gigabit Layer 3 Ethernet network and 40 Gigabit Ethernet switches to each of the racks.
The new Caringo Swarm 10 software adds support for multiple VLANs and subnets. Swarm 10 also supports the latest Elasticsearch 5 technology, enabling analysis tools such as Grafana and Kibana to run directly from the metadata indexed into the open source search engine.
On the management front, Caringo combined the content and storage hardware administration user interfaces (UIs) and added new capabilities in Swarm 10. For instance, users can now edit object metadata from the UI in addition to making changes programmatically, as they could in the past, Barbagallo said. He said users do not need to copy an object into a new object to edit or add metadata, as they do with many other object stores.
Some organizations feel more secure keeping their servers hosted in their own data centers rather than the cloud. However, many in IT might not realize this makes their machines very attractive targets for hackers.
Malicious actors actively target servers in the data center to tap into the sheer amount of bandwidth and processing power they can use for their own nefarious deeds. In this article, we will look at how we uncovered and disabled PowerShell malware that ran its workload on a specific schedule.
System performance takes a nosedive
We knew something was wrong because the symptoms were obvious. The machine, whether it was a physical or a virtual server, would slow to a crawl. We launched Task Manager and checked the Details tab to see multiple instances of PowerShell.exe consuming as much CPU as possible.
We killed the PowerShell.exe processes, but it was a temporary fix, as the processes would respawn every two hours to restart their cryptomining operation.
To stop the PowerShell malware from running, we used this command to stop the automated process:
taskkill /IM PowerShell.exe /F
A look at Task Scheduler yielded no obvious clues. Normally, malware modifies a Windows file or creates a similar one that the antivirus does not pick up in a hidden folder. In this case, there were no hidden folders and all the system files looked normal when checking the creation date.
To dig deeper into the cause, we turned on PowerShell logging with Group Policy using the administrative templates. Within 30 minutes, we checked the logs and found the machine talking to several external IP addresses.
After identifying the external IPs, we created a new static route to send the traffic to nowhere. To do this, we launched an elevated command prompt and ran the following command, replacing X.X.X.X with the default gateway address:
route -p ADD 18.104.22.168 MASK 255.255.255.255 X.X.X.X IF 1
We ran the same command for the other external IPs found in the logs. Now, when the PowerShell malware tried to start, it would stop because it could not talk to the internet.
We removed the default gateway on servers that didn’t require access to the internet. This offered the same result as null routing the IP and killed the process immediately after it launched.
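Removing the default gateway can be done with the standard Windows route command. A sketch, to be run from an elevated prompt on servers that genuinely need no internet access:

```shell
:: Show the current default route, then delete it
route PRINT 0.0.0.0
route DELETE 0.0.0.0
```
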
Remove the infection from the WMI database
The respawning behavior pointed to WMI event subscriptions rather than scheduled tasks. We removed the malicious event filter and consumer with the following PowerShell commands:
gwmi -Namespace "root/subscription" -Class __EventFilter | Where Name -eq "SCM Event Filter" | Remove-WmiObject
gwmi -Namespace "root/subscription" -Class __EventConsumer | Where Name -eq "SCM Event Consumer" | Remove-WmiObject
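Note that WMI persistence of this kind usually also involves a __FilterToConsumerBinding object tying the filter to the consumer, which the two commands above do not remove. A sketch for cleaning up and verifying; the "SCM Event Filter" name follows the example above:

```powershell
# Remove the binding that links the malicious filter to its consumer
gwmi -Namespace "root/subscription" -Class __FilterToConsumerBinding |
    Where-Object { $_.Filter -match "SCM Event Filter" } |
    Remove-WmiObject

# Confirm nothing malicious remains in the subscription namespace
gwmi -Namespace "root/subscription" -Class __EventFilter
gwmi -Namespace "root/subscription" -Class __EventConsumer
```
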
Take extra steps to be safe
We also discovered the malware used IPv6 to tunnel traffic to IPv4. If you have a server that does not require IPv6, I suggest disabling it by updating the registry and then removing the tunnel interfaces with the following commands:
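A commonly used way to do this is the Microsoft-documented DisabledComponents registry value; a sketch follows (0xFF disables all IPv6 components, including the 6to4/Teredo/ISATAP tunnel interfaces, and requires a reboot — verify this fits your environment before applying):

```shell
:: Disable IPv6 and its tunneling interfaces via the documented registry value
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" ^
    /v DisabledComponents /t REG_DWORD /d 0xFF /f
```
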
CenturyLink has launched a self-serve, pay-as-you-go service that lets companies create a temporary virtual Ethernet connection between AWS and a colocation facility or office location.
The new CenturyLink service, called Cloud Connect Dynamic Connections, is available to businesses that use any of the 2,200 colocation data centers with CenturyLink optical fiber. The service is also accessible by companies with offices located in more than 100,000 commercial buildings with the carrier’s fiber.
Geographically, Cloud Connect Dynamic Connections is available in North America, Asia Pacific and Europe. The broad accessibility stems from CenturyLink’s $34 billion acquisition of Level 3 Communications in 2017.
Level 3’s software-defined networking technology powers Cloud Connect Dynamic Connections, which creates a private Layer 2 connection to AWS. A corporate engineer creates the link by logging into the CenturyLink portal and choosing the location the carrier should connect to the AWS data center where the company’s applications are running.
New CenturyLink service bills by the hour
CenturyLink charges by the hour for the virtual circuit, which the engineer would delete when it’s no longer needed. If the link remains after five to 10 days — CenturyLink hasn’t decided on the exact time — the carrier will switch the connection from the hourly rate to a monthly subscription.
Companies sometimes need temporary circuits to transfer data to or from applications running on AWS. Companies can choose circuit speeds ranging from 5 Mbps to 3 Gbps, depending on the amount of data headed to the cloud provider. CenturyLink plans to up the maximum speed to 30 Gbps soon, said Chris McReynolds, vice president of core network services at CenturyLink.
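Circuit speed determines how long a bulk transfer takes, which is the main consideration when sizing a temporary link. A rough calculation for a hypothetical 10 TB transfer at the low end, middle and high end of the range:

```shell
# Rough transfer time for 10 TB at several Cloud Connect circuit speeds
# (10 TB is a hypothetical workload, not a figure from the article)
awk 'BEGIN {
  bits = 10 * 8e12                       # 10 TB expressed in bits
  split("0.005 1 3", gbps, " ")          # 5 Mbps, 1 Gbps, 3 Gbps
  for (i = 1; i <= 3; i++)
    printf "%5.3f Gbps: %8.1f hours\n", gbps[i], bits / (gbps[i] * 1e9) / 3600
}'
# 3 Gbps moves 10 TB in about 7.4 hours; 5 Mbps would take about 4,444 hours
```
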
Dynamic Connections is a part of CenturyLink’s Cloud Connect service, which provides long-term links to AWS through the cloud provider’s private network connection, called Direct Connect. Companies use the CenturyLink service when corporate policy or government regulations prohibit them from using the public internet.
Last month, CenturyLink used the broader fiber optic coverage it received with Level 3 to make its software-defined WAN service available in more than 30 countries.
The acquisition resulted in CenturyLink exiting the colocation business. Less than a week after announcing the Level 3 takeover, CenturyLink sold its 57 colocation data centers to private equity consortium BC Partners for $2.15 billion in cash. The carrier used the money to help finance the Level 3 deal.
In celebration of the 48-hour Skype-a-Thon event beginning on November 13, our 22 educators and experts are going the extra (virtual) mile and preparing a new #MSFTEduChat TweetMeet on Global Collaboration. Paired with the resources made by our Skype in the Classroom team, this event will help you prepare your students to travel far and wide and learn everything in between during Skype-a-Thon.
You can join our special #MSFTEduChat TweetMeet on Tuesday, October 16, at 10:00 a.m. PDT (check your time zone here) and start absorbing ideas from our hosts – they’re well-seasoned travelers as far as Skype-a-Thon goes! (Sounds great, but what’s a TweetMeet?)
In the spirit of collaboration across continents, we have three brand-new language tracks this month: Русский (Russian), বাংলা (Bangla) and 日本語 (Japanese). We also offer Español (Spanish), Français (French), Italiano (Italian), Polski (Polish), Português (Portuguese), Svenska (Swedish), اللغة العربية (Arabic), srpski (Serbian), Tiếng Việt (Vietnamese), Deutsch (German) and Nederlands (Dutch).
For each language track, we have one or more hosts to post the translated questions and respond to educators. As always, we’re super grateful to all current and former hosts who are collaborating closely to provide this service.
The #TweetMeetXX hashtags for non-English languages are to be used together with #MSFTEduChat so that everyone can find the conversations in their own language. For example, French-speaking people use the combination #TweetMeetFR #MSFTEduChat. English-speaking educators may use #MSFTEduChat on its own.
Post-event summary: Starting this month, we will publish a new post after each #MSFTEduChat event to summarize the key learnings from the conversations during the TweetMeet. The hosts will collaborate to curate a top selection of the tweets and trends they found most significant. For even more highlights from the TweetMeet, the blog post will offer multiple Twitter Moments – curated stories and conversations from Twitter.
TweetMeet fan? Show it off on your Twitter profile: Every month more people discover the unique nature of the TweetMeets and become passionate about them. Well, you can now show your passion for the TweetMeets right from your Twitter page. The dimensions of our Twitter Header Photo are 1500×500 – the perfect size for your Twitter profile. Get this month’s image here: #MSFTEduChat Twitter Header Photo.
Why join the #MSFTEduChat TweetMeets?
TweetMeets are monthly recurring Twitter conversations about themes relevant to educators, facilitated by Microsoft Education. The purpose of these events is to help professionals in education to learn from each other and inspire their students while they are preparing for their future. The TweetMeets also nurture personal learning networks among educators from across the globe.
We’re grateful to have a support group made up exclusively of former TweetMeet hosts, who volunteer to translate communication and check the quality of our questions and promotional materials. They also help identify the best candidates for future events, provide relevant resources, promote the events among their networks, and, in general, cheer everybody on.
From our monthly surveys we know that you may be in class at event time, busy doing other things or maybe even asleep – well, no problem! All educators are most welcome to join after the event. Simply take a look at the questions below and respond to these at a day and time that suit you best. You can also schedule your tweets in advance. In that case, be sure to quote the entire question and mention the hashtag #MSFTEduChat, so that everyone knows the right question and conversation to which you are responding.
How can I best prepare?
To prepare for the #MSFTEduChat TweetMeet, have a look at the questions we crafted this time.
Later in the month, join a training webinar on Monday, October 22 with VP of Education Anthony Salcito and Skype Master Teacher Stacey Ryan to learn how your class can participate and how you can organize your Skype-a-Thon activities from start to finish. REGISTER HERE.
If this time doesn’t work for you, please register so we can send you the on-demand video.
Our TweetMeet hosts have also assembled a great Flipgrid to help – and don’t forget that you can add your own video as well.
What does it take to be a global collaborator?
Why is global collaboration important for teaching and learning?
How do you embrace cultural differences and similarities?
Which resources and tools should be in any global collaborator’s toolkit?
Where will you be taking your students during Skype-a-Thon on Nov 13-14?
All hosts have been carefully recruited from across the globe based on their expertise in and passion for engaging their students in Skype-a-Thon, Skype in the Classroom, Mystery Skype and other global collaboration projects.
Special thanks to Francisco Texeira (@fcotexeira), who helps us coordinate the TweetMeet every month.
What are #MSFTEduChat TweetMeets?
Every month Microsoft Education organizes social events on Twitter targeted at educators globally. The hashtag we use is #MSFTEduChat. A team of topic specialists and international MIE Expert teachers prepare and host these TweetMeets together. Our team of educator hosts first crafts several questions around a certain topic. Then, before the event, they share these questions on social media. Combined with a range of resources, a blog post and background information about the events, this allows all participants to prepare themselves to the full. Afterwards we make an archive available of the most notable tweets and resources shared during the event.
Please connect with TweetMeet organizer Marjolein Hoekstra @OneNoteC on Twitter if you have any questions about TweetMeets or helping out as a host.
Next month’s topics: Computer Science and Hour of Code