Community and Connection to Drive Change

Reflections on International Women’s Day and Women’s History Month

In recent weeks, I have had several individuals share with me their admiration for the amount of time I spend listening to, advocating for and simply being there for women. Of course I was humbled by what felt like a compliment, but hearing this gave me pause. Why did these individuals see my actions as deserving of admiration as opposed to a core way of how we show up for each other in the workplace, the industry and our lives in general? What path led me to this way of being, how might I expand my impact and how might I encourage others to take a more active role?

This way of being has been part of who I am for my entire working life. When I joined Microsoft full time in 1998, my first manager was a role model for me. Laurie Litwack spent time getting to know me personally, understanding my passions and hopes and the unique perspective I brought. She thoughtfully crafted my first assignment to both leverage my skills and challenge me. Laurie showed me not only what it meant to bring your authentic self to work but also how it felt to be supported. Under her leadership I grew in the technical aspects of my role, and she also nurtured my appreciation for people. Looking back, this experience was unique, especially for that era in engineering, when there were fewer women and even fewer women managers. It shaped my values as a leader and my view on how best to engage people and support their development. It showed me the importance of being present.

Early in my career, the VP of our engineering organization, Bill Veghte, brought a group of women employees together to better understand our experiences in the organization. He genuinely wanted to learn from us what the organization could be doing better to support our growth and satisfaction. At the time, the number of women in the organization was low, and this forum was the first opportunity many of us had to meet and spend time with each other. The most valuable thing we learned from the experience was the personal support and enjoyment that came from simply making time for each other. The isolation we each felt melted away when we got to spend time with others like us: creating connections, sharing experiences, learning from each other. We grew more collectively than we ever would have individually, and I personally benefited from both the friendship and wisdom of many of the women in this community: Terrell Cox, Jimin Li, Anna Hester, Farzana Rahman, Deb MacFadden, Molly Brown, Linda Apsley and Betsy Speare. This was true many years ago when this community was created and holds true today, even as it has scaled from a handful of women to thousands across our Cloud + AI Division who make up this Women’s Leadership Community (WLC), under sponsorship from leaders such as Bob Muglia, Bill Laing, Brad Anderson and currently Scott Guthrie.

As I grew in my career, the importance of intentionally building connections with other women only became more clear. In the early 2010s, as I joined the technical executive community, I looked around and felt a similar experience to my early career days. There were very few technical executives who were women, and we were spread across the organization, meaning we rarely had the opportunity to interact and in some cases had never met! It was out of a desire to bring the WLC experience to this group that our Life Without Lines Community of technical women executives across Microsoft grew, based on the founding work of Michele Freed, Lili Cheng, Roz Ho and Rebecca Norlander. This group represents cross-company leadership, and as the connections deepened, so did our impact on each other through peer mentoring, career sponsorship, and engineering and product collaboration.

Together we are more powerful than we are individually, amplifying each other’s voices.       

Although the concept of community might seem simple and obvious in the ongoing conversations about inclusion, the key in my experience is how the connections in these communities were built. This isn’t networking for the sake of networking; we come together with a focus on being generous with our time and experiences, on challenging each other and our organization to address issues in new ways, and on creating space to be authentic within our own community, without feeling we need to be a monolith in our perspectives or priorities. We advocate for one another, we leverage our networks, we create space and we amplify the voices of others. This community names the challenges these women face, the hopes they have for themselves and for future women in our industry, and what matters most to our enjoyment of our work. My job, and the job of other leaders, is to listen to these voices, leverage the insights to advocate for what is needed in the organization, and drive systemic changes that will create the best lived experience for all women at Microsoft and in the industry.

I have found that members of the community want to be heard, if you are willing to be present, willing to bring your authentic self and willing to take action on what you learn. I’m reflecting on this, in particular, as I think about International Women’s Day (IWD). From its beginnings in the early 1900s through to present day, IWD strives to recognize the need for active participation, equality and development of women and acknowledge the contribution of women globally.

This year I am reflecting on the need to ensure that our communities of women accurately represent the diverse range of perspectives and experiences of employees and customers. We must make sure that even in a community about including others, we are not unintentionally excluding groups of women who may not have the same experiences, priorities or privileges as others. It is a chance to reflect on how I can expand my impact. I challenge all of us to take this time to recognize those who are role models for us, to seek out the voices that may not be heard, and to determine what role each of us can play in achieving this goal for everyone.

Go to Original Article
Author: Microsoft News Center

How to install and test Windows Server 2019 IIS

Transcript – How to install and test Windows Server 2019 IIS

In this video, I want to show you how to install Internet Information Services, or IIS, and prepare it for use.

I’m logged into a domain-joined Windows Server 2019 machine and I’ve got the Server Manager open. To install IIS, click on Manage and choose the Add Roles and Features option. This launches the Add Roles and Features wizard. Click Next on the welcome screen and choose role-based or feature-based installation for the installation type and click Next.

Make sure that the local server is selected and click Next. I’m prompted to choose the roles that I want to deploy. There’s an option for Web Server (IIS); that’s the option I’m going to select. When I do that, I’m prompted to install some dependency features, so I’m going to click on Add Features and I’ll click Next.

I’m taken to the Features screen. All the dependency features that I need are already being installed, so I don’t need to select anything else. I’ll click Next, then Next again until I reach the Role Services screen. If you do need to install any additional role services for the IIS role, this is where you would do it. You can always enable these features later on, so I’ll go ahead and click Next.

I’m taken to the Confirmation screen and I can review my configuration selections. Everything looks good here, so I’ll click install and IIS is being installed.

Testing Windows Server 2019 IIS

The next thing that I want to do is test IIS to make sure that it’s functional. I’m going to go ahead and close this out and then go to the Local Server tab. I’m going to click on IE Enhanced Security Configuration and temporarily turn it off just so that I can test IIS. I’ll click OK and I’ll close Server Manager.

The next thing that I want to do is find this machine’s IP address, so I’m going to right-click on the Start button and go to Run and type CMD to open a command prompt window, and then from there, I’m going to type ipconfig.
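The transcript finds the address with ipconfig; if you'd rather script that step, here is a small Python sketch (not part of the original walkthrough) that asks the operating system which IPv4 address the machine would use for outbound traffic:

```python
import socket

def local_ipv4() -> str:
    """Return the IPv4 address this machine would use for outbound traffic.

    "Connecting" a UDP socket sends no packets; it only asks the OS to
    pick a source address for the destination, so this works without any
    traffic leaving the machine.
    """
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.connect(("192.0.2.1", 80))  # TEST-NET-1 address, never contacted
            return s.getsockname()[0]
    except OSError:
        pass
    try:
        # Fall back to resolving the local hostname.
        return socket.gethostbyname(socket.gethostname())
    except OSError:
        return "127.0.0.1"  # last resort: loopback

print(local_ipv4())
```

On a multi-homed server this returns only the address used for the default route, so ipconfig remains the authoritative view.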

Here I have the server’s IP address, so now I can open up an Internet Explorer window and enter this IP address and Internet Information Services should respond. I’ve entered the IP address, then I press enter and I’m taken to the Internet Information Services screen. IIS is working at this point.
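The browser check above can also be automated. Below is a minimal, hedged Python smoke test, an illustration rather than anything IIS-specific; any well-formed HTTP response, even an error status, proves the server is listening:

```python
import urllib.request
import urllib.error

def web_server_responding(url: str, timeout: float = 5.0) -> bool:
    """Smoke-test an HTTP endpoint: True if anything answers, else False."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True          # got a 2xx/3xx response
    except urllib.error.HTTPError:
        return True              # server answered with 4xx/5xx: still alive
    except (urllib.error.URLError, OSError):
        return False             # refused, timed out, unreachable, ...
```

For example, `web_server_responding("http://10.0.0.12/")` (substituting your server's actual address, which is hypothetical here) should return True once IIS is answering.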

I’ll go ahead and close this out. If this were a real-world deployment, one of the next things that you would probably want to do is begin uploading some of the content that you’re going to use on your website so that you can begin testing it on this server.

I’ll go ahead and open up File Explorer and I’ll go to This PC, the C: drive, the inetpub folder and then the wwwroot subfolder. This is where you would copy all of the files for your website. You can configure IIS to use a different folder, but this is the one used by default for IIS content. You can see the files right here that make up the page that you saw a moment ago.

How to work with the Windows Server 2019 IIS bindings

Let’s take a look at a couple of the configuration options for IIS. I’m going to go ahead and open up Server Manager and what I’m going to do now is click on Tools, and then I’m going to choose the Internet Information Services (IIS) Manager. The main thing that I wanted to show you within the IIS Manager is the bindings section. The bindings allow traffic to be directed to a specific website, so you can see that, right now, we’re looking at the start page and, right here, is a listing for my IIS server.

I’m going to go ahead and expand this out and expand the Sites container, and here you can see the Default Web Site. This is the site that I showed you just a moment ago. If we look over here on the Actions menu, you can see that we have a link for Bindings. When I open up the Bindings option, you can see that by default we’re binding all HTTP traffic to port 80 on all IP addresses for the server.

We can edit a binding by selecting it and clicking Edit. You can see that we can select a specific IP address. If the server had multiple IP addresses associated with it, we could link a different IP address to each site. We could also change the port that’s associated with a particular website. For example, if I wanted to bind this particular website to port 8080, I could do that by changing the port number. Generally, though, you want HTTP traffic to flow on port 80. The other thing that you can do here is assign a hostname to the site, for example www.contoso.com or something to that effect.
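To make the matching rules concrete, here is a simplified Python model of how a server such as IIS picks a site from its bindings by IP address, port and host header. This is only a sketch under simplifying assumptions (it ignores wildcard hostnames, SNI and other real-world details), and the site names are made up:

```python
from typing import Iterable, NamedTuple, Optional

class Binding(NamedTuple):
    site: str
    ip: str        # "*" means all IP addresses on the server
    port: int
    host: str      # "" means any host header

def resolve_site(bindings: Iterable[Binding], ip: str,
                 port: int, host: str) -> Optional[str]:
    """Pick the site for a request, preferring the most specific binding."""
    best, best_score = None, -1
    for b in bindings:
        if b.port != port:
            continue
        if b.ip not in ("*", ip):
            continue
        if b.host not in ("", host):
            continue
        # Exact IP or exact host beats a wildcard.
        score = (b.ip != "*") + (b.host != "")
        if score > best_score:
            best, best_score = b.site, score
    return best

bindings = [
    Binding("Default Web Site", "*", 80, ""),
    Binding("Contoso", "*", 80, "www.contoso.com"),
    Binding("Intranet", "*", 8080, ""),
]
```

With these made-up bindings, a request to port 80 with the host header `www.contoso.com` lands on the Contoso site, any other port-80 request falls through to the default site, and port 8080 traffic reaches the intranet site regardless of hostname.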

The other thing that I want to show you in here is how to associate HTTPS traffic with a site. Typically, you’re going to have to have a certificate to make that happen, but assuming that that’s already in place, you click on Add and then you would change the type to HTTPS and then you can choose an IP address; you can enter a hostname; and then you would select your SSL certificate for the site.

You’ll notice that the port number is set to 443, which is the default port that’s normally used for HTTPS traffic. So, that’s how you install IIS and how you configure the bindings for a website.



4 SD-WAN vendors integrate with AWS Transit Gateway

Several software-defined WAN vendors have announced integration with Amazon Web Services’ Transit Gateway. For SD-WAN users, the integrations promise simplified management of policies governing connectivity among private data centers, branch offices and AWS virtual networks.

Stitching together workloads across cloud and corporate networks is complex and challenging. AWS tackles the problem by making AWS Transit Gateway the central router of all traffic emanating from connected networks.

Cisco, Citrix Systems, Silver Peak and Aruba, a Hewlett Packard Enterprise Company, launched integrations with the gateway this week. The announcements came after AWS unveiled the AWS Transit Gateway at its re:Invent conference in Las Vegas.

SD-WAN vendors lining up quickly to support the latest AWS integration tool didn’t surprise analysts. “The ease and speed of integration with leading IaaS platforms are key competitive issues for SD-WAN for 2020,” said Lee Doyle, the principal analyst for Doyle Research.

By acting as the network hub, Transit Gateway reduces operational costs by simplifying network management, according to AWS. Before the new service, companies had to make individual connections between networks outside of AWS and those serving applications inside the cloud provider.
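The arithmetic behind that claim is simple: fully meshing n networks with individual connections takes n(n-1)/2 point-to-point links, while attaching every network to one hub takes only n. A quick illustration:

```python
def full_mesh_links(n: int) -> int:
    """Direct point-to-point connections needed to join n networks pairwise."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """Connections needed when every network attaches to one central hub."""
    return n

for n in (3, 10, 25):
    print(f"{n} networks: mesh={full_mesh_links(n)}, hub={hub_links(n)}")
```

At 25 connected networks, the mesh approach needs 300 individually managed links versus 25 hub attachments, which is why a central router simplifies management as environments grow.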

The potential benefits of Transit Gateway made connecting to it a must-have for SD-WAN suppliers. However, tech buyers should pay close attention to how each vendor configures its integration.

“SD-WAN vendors have different ways of doing things, and that leads to some solutions being better than others,” Doyle said.

What the 4 vendors are offering

Cisco said its integration would let IT teams use the company’s vManage SD-WAN controller to administer connectivity from branch offices to AWS. As a result, engineers will be able to apply network segmentation and data security policies universally through the Transit Gateway.

Aruba will let customers monitor and manage connectivity either through the Transit Gateway or Aruba Central. The latter is a cloud-based console used to control an Aruba-powered wireless LAN.

Silver Peak is providing integration between the Unity EdgeConnect SD-WAN platform and Transit Gateway. The link will make the latter the central control point for connectivity.

Finally, Citrix’s Transit Gateway integration would let its SD-WAN orchestration service connect branch offices and data centers to AWS. The connections will be particularly helpful to organizations running Citrix’s virtual desktops and associated apps on AWS.


Google cloud network tools check links, firewalls, packet loss

Google has introduced several network monitoring tools to help companies pinpoint problems that could impact applications running on the Google Cloud Platform.

Google launched this week the first four modules of an online console called the Network Intelligence Center. The components for monitoring a Google cloud network include a network topology map, connectivity tests, a performance dashboard, and firewall metrics and insights. The first two are in beta, and the rest are in alpha, which means they are still in the early stages of development.

Here’s a brief overview of each module, based on a Google blog post:

- Google is providing Google Cloud Platform (GCP) subscribers with a graphical view of their network topology. The visualization shows how traffic is flowing between private data centers, load balancers and applications running on computing environments within GCP. Companies can drill down on each element of the topology map to verify policies or identify and troubleshoot problems. They can also review changes in the network over the last six weeks.

- The testing module lets companies diagnose problems with network connections within GCP or from GCP to an IP address in a private data center or another cloud provider. Along with checking links, companies can test the impact of network configuration changes to reduce the chance of an outage.

- The performance dashboard provides a current view of packet loss and latency between applications running on virtual machines. Google said the tool would help IT teams determine quickly whether a packet problem is in the network or an app.

- The firewall metrics component offers a view of the rules that govern the security software. The module is designed to help companies optimize the use of firewalls in a Google cloud network.
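As an illustration of what a packet-loss and latency view boils down to (a toy calculation for intuition, not Google's implementation), consider summarizing one round of probes between two virtual machines:

```python
import statistics
from typing import Optional, Sequence, Tuple

def probe_summary(rtts_ms: Sequence[Optional[float]]) -> Tuple[float, Optional[float]]:
    """Summarize a round of latency probes.

    rtts_ms holds one round-trip time per probe, with None marking a
    probe that got no reply. Returns (loss_pct, median_ms); median_ms
    is None when no probe came back.
    """
    replies = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(replies)) / len(rtts_ms)
    median = statistics.median(replies) if replies else None
    return loss_pct, median
```

For example, four probes where one reply was lost yields 25% packet loss, and the median of the surviving round-trip times hints at whether the slowdown is in the network or in the application itself.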

Getting access to the performance dashboard and firewall metrics requires a GCP subscriber to sign up as an alpha customer. Google will incorporate the tools into the Network Intelligence Center once they reach the beta level.


DreamWorks animates its cloud with NetApp Data Fabric

Although it’s only been around since 2001, DreamWorks Animation has several blockbuster movies to its credit, including How to Train Your Dragon, Kung Fu Panda, Madagascar and Shrek. To get the finished product ready for the big screen, digital animators at the Hollywood, Calif., studio share huge data sets across the internal cloud, built around NetApp Data Fabric and its other storage technologies.

An average film project takes several years to complete and involves up to tens of thousands of data sets. At each stage of production, different animation teams access the content to add to or otherwise enhance the digital images, with the cloud providing the connective tissue. The “lather, rinse, repeat” process occurs up to 600 times per frame, said Skottie Miller, a technology fellow at the Los Angeles-area studio.

“We don’t make films — we create data. Technology is our paintbrush. File services and storage is our factory floor,” Miller told an audience of NetApp users recently.

‘Clouds aren’t cheap’

DreamWorks has a mature in-house cloud that has evolved over the years. In addition to NetApp file storage, the DreamWorks cloud incorporates storage kits from Hewlett Packard Enterprise (HPE). The production house runs the Qumulo Core file system on HPE Apollo servers and uses HPE Synergy composable infrastructure for burst compute, networks and storage.

Miller said DreamWorks views its internal cloud as a “lifestyle, a way to imagine infrastructure” that can adapt to rapidly changing workflows.

“Clouds aren’t magic and they’re not cheap. What they are is capable and agile,” Miller said. “One of the things we did was to start acting like a cloud by realizing what the cloud is good at: [being] API-driven and providing agile resources on a self-service basis.”


DreamWorks set up an overarching virtual studio environment that provides production storage, analytics on the storage fabric and automated processes. The studio deploys NetApp All-Flash FAS to serve hot data and NetApp FlexCache for horizontal scale-out across geographies, especially for small files.

The DreamWorks cloud relies on various components of the NetApp Data Fabric. NetApp FAS manages creative workflows. NetApp E-Series block storage is used for postproduction. NetApp HCI storage (based on SolidFire all-flash arrays) serves Kubernetes clusters and a virtual machine environment.

To retire tape backups, DreamWorks added NetApp StorageGrid as back-end object storage with NetApp FabricPool tiering for cold data. The company uses NetApp SnapMirror to get consistent point-in-time snapshots. Along with StorageGrid, Miller said DreamWorks has adopted NetApp Data Availability Services (NDAS) to manage OnTap file storage across hybrid clouds.

“NDAS has an interesting characteristic. Imagine a cloud thumb drive with a couple hundred terabytes. You can use it to guard against cyberattacks or environmental disasters, or to share data sets with somebody else,” Miller said.

The need for storage analytics

The sheer size of the DreamWorks cloud — a 20 PB environment with more than 10,000 pieces of media — underscored the necessity for deep storage analytics, Miller said.

“We rely on OnTap automation for our day-to-day provisioning and for quality of service,” he said.

Beyond the customer relationship, DreamWorks and NetApp have partnered to further the development of Data Fabric innovations.

A DreamWorks cloud controller helps inform development of NetApp Data Fabric under a co-engineering agreement. The cloud software invokes APIs in NetApp Kubernetes Service.

The vendor and customer have joined forces to build an OnTap analytics hub that streams telemetry data in real time to pinpoint anomalies and automatically open service tickets. DreamWorks relies on open source tools that it connects to OnTap using NetApp APIs.


New telephony controls coming to Microsoft Teams admin center

Microsoft will add several telephony controls to the Microsoft Teams admin center in the coming months, a significant move in the vendor’s campaign to retire Skype for Business Online by mid-2021.

Admins will be able to build, test and manage custom dial plans through the Teams portal. Additionally, organizations that use Microsoft Calling Plan will be able to create and assign phone numbers and designate emergency addresses for users.

Currently, admins can only perform those tasks in Teams through the legacy admin center for Skype for Business Online. Microsoft has been gradually moving controls to the Teams admin center, with telephony controls among the last to switch over.

Microsoft plans to begin adding the new telephony controls to the Teams admin center in November, according to the vendor’s Office 365 Roadmap webpage. The company will also introduce some advanced features it didn’t support in Skype for Business Online, a cloud-based app within Office 365.

The update will let admins configure what’s known as dynamic emergency calling. The feature — supported only in the on-premises version of Skype for Business — automatically detects a user’s location when they place a 911 call. It then transmits that information to emergency officials.

The admin center for Skype for Business Online is “fairly rudimentary,” said Tom Arbuthnot, principal solutions architect at Modality Systems, a Microsoft-focused systems integrator. The new console for Teams provides advancements like the ability to sort and filter users and phone numbers.

“All of these little features add up to making a more friendly voice platform for an administrator,” Arbuthnot said. “They are getting closer and closer to everything being administered in the Teams admin center.”

Microsoft Teams still missing advanced calling controls, features

The superior design of the admin center notwithstanding, Teams still lacks crucial tools for organizations too large to use the management console.

For those enterprises, Teams PowerShell is the go-to tool for auto-configuring settings on a large scale using code-based commands. However, PowerShell cannot do everything that the Teams admin center can do. Microsoft has also yet to release APIs that would allow a third-party consultant to help manage a Fortune 500 company’s transition to Teams calling.

“When you’re up to hundreds of thousands of seats, you don’t really want to be going to an admin center and manually administrating,” Arbuthnot said. “The PowerShell and APIs tend to lag a little bit.”
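To make the scale argument concrete, bulk administration tools typically batch changes rather than applying them one user at a time. The sketch below is purely illustrative and invokes no real Teams API; `assign_numbers` is a hypothetical stand-in for whatever PowerShell cmdlet or future API endpoint would actually perform the assignment:

```python
from typing import Callable, Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def batches(items: Iterable[T], size: int) -> Iterator[List[T]]:
    """Yield successive chunks so bulk calls stay under size or rate limits."""
    batch: List[T] = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def provision_all(users: Iterable[T],
                  assign_numbers: Callable[[List[T]], None],
                  size: int = 100) -> None:
    """Apply a bulk operation (hypothetical) to users in fixed-size chunks."""
    for chunk in batches(users, size):
        assign_numbers(chunk)
```

With hundreds of thousands of seats, chunking like this is what lets scripted provisioning retry a failed batch without redoing the whole run, which is the kind of workflow an admin center UI cannot offer.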

A lack of parity between the telephony features of Skype for Business and Teams had been one of the biggest roadblocks preventing organizations from fully transitioning from the old to the new platform.

But at this point, Teams should be suitable for everyone except those with the most complex needs, such as receptionists, Arbuthnot said.

Other features that Microsoft is planning include compliance call recording, virtual desktop infrastructure support and contact center integrations.


What are the Azure Stack HCI deployment, management options?

There are several management approaches and deployment options for organizations interested in using the Azure Stack HCI product.

Azure Stack HCI is a hyper-converged infrastructure product, similar to other offerings in which each node holds processors, memory, storage and networking components. Third-party vendors sell the nodes, which can scale should the organization need more resources. A purchase of Azure Stack HCI includes the hardware, the Windows Server 2019 operating system, management tools, and service and support from the hardware vendor. At the time of publication, Microsoft’s Azure Stack HCI catalog lists more than 150 offerings from 19 vendors.

Azure Stack HCI, not to be confused with Azure Stack, gives IT pros full administrator rights to manage the system.

Tailor the Azure Stack HCI options for different needs

The basic components of an Azure Stack HCI node might be the same, but an organization can customize them for different needs, such as better performance or lowest price. For example, a company that wants to deploy a node in a remote office/branch office might select Lenovo’s ThinkAgile MX Certified Node, or its SR650 model. The SR650 scales to two nodes that can be configured with a variety of processors offering up to 28 cores, up to 1.5 TB of memory, hard drive combinations providing up to 12 TB (or SSDs offering more than 3.8 TB), and networking with 10/25 GbE. Each node comes in a 2U physical form factor.

If the organization needs the node for more demanding workloads, one option is the Fujitsu Primeflex. Azure Stack HCI node models such as the all-SSD Fujitsu Primergy RX2540 M5 scale to 16 nodes. Each node can range from 16 to 56 processor cores, up to 3 TB of SSD storage and 25 GbE networking.

Management tools for Azure Stack HCI systems

Microsoft positions the Windows Admin Center (WAC) as the ideal GUI management tool for Azure Stack HCI, but other familiar utilities will work on the platform.

The Windows Admin Center is a relatively new browser-based tool for consolidated management for local and remote servers. The Windows Admin Center provides a wide array of management capabilities, such as managing Hyper-V VMs and virtual switches, along with failover and hyper-converged cluster management. While it is tailored for Windows Server 2019 — the server OS used for Azure Stack HCI — it fully supports Windows Server 2012/2012 R2 and Windows Server 2016, and offers some functionality for Windows Server 2008 R2.

Azure Stack HCI users can also use more established management tools such as System Center. The System Center suite components handle infrastructure provisioning, monitoring, automation, backup and IT service management. System Center Virtual Machine Manager provisions and manages the resources to create and deploy VMs, and handle private clouds. System Center Operations Manager monitors services, devices and operations throughout the infrastructure.

Other tools are also available including PowerShell, both the Windows and the PowerShell Core open source versions, as well as third-party products, such as 5nine Manager for Windows Server 2019 Hyper-V management, monitoring and capacity planning.

It’s important to check over each management tool to evaluate its compatibility with the Azure Stack HCI platform, as well as other components of the enterprise infrastructure.


Navy sails SAP ERP systems to AWS GovCloud

The U.S. Navy has moved several SAP and other ERP systems from on premises to AWS GovCloud, a public cloud service designed to meet the regulatory and compliance requirements of U.S. government agencies.

The project entailed migrating 26 ERPs across 15 landscapes, serving about 60,000 users across the globe. The Navy tapped SAP National Security Services Inc. (NS2) for the migration. NS2 was spun out of SAP specifically to sell SAP systems that adhere to the highly regulated conditions that U.S. government agencies operate under.

Approximately half of the systems that moved to AWS GovCloud were SAP ERP systems running on Oracle databases, according to Harish Luthra, president of NS2’s secure cloud business. The SAP systems were migrated to the SAP HANA database, while non-SAP systems remained on their respective databases.

Architecture simplification and reducing TCO

The Navy wanted to move the ERP systems to take advantage of the new technologies that are more suited for cloud deployments, as well as to simplify the underlying ERP architecture and to reduce the total cost of ownership (TCO), Luthra said.

The migration enabled the Navy to reduce its data footprint from 80 TB to 28 TB.


“Part of it was done through archiving, part was disk compression, so the cost of even the data itself is reducing quite a bit,” Luthra said. “On the AWS GovCloud side, we’re using one of the largest instances — 12 terabytes — and will be moving to a 24 terabyte instance working with AWS.”

The Navy also added applications to consolidate financial systems and improve data management and analytics functionality.

“We added one application called the Universe of Transactions, based on SAP Analytics that allows the Navy to provide a consolidated financial statement between Navy ERP and their other ERPs,” Luthra said. “This is all new and didn’t exist before on-premises and was only possible to add because we now have HANA, which enables a very fast processing of analytics. It’s a giant amount of transactions that we are able to crunch and produce a consolidated ledger.”


Accelerated timeline

The project was done at an accelerated pace that had to be sped up even more when the Navy altered its requirements, according to Joe Gioffre, SAP NS2 project principal consultant. The original go-live date was scheduled for May 2020, almost two years to the day after the project began. However, when the Navy tried to move a command working capital fund onto the on-premises ERP system, it discovered the system could not handle the additional data volume and workload.

This drove the HANA cloud migration go-live date to August 2019 to meet the fiscal new year start of Oct. 1, 2019, so the fund could be included.

“We went into a re-planning effort, drew up a new milestone plan, set up Navy staffing and NS2 staffing to the new plan so that we could hit all of the dates one by one and get to August 2019,” Gioffre said. “That was a colossal effort in re-planning and re-resourcing for both us and the Navy, and then tracking it to make sure we stayed on target with each date in that plan.”


Governance keeps project on track

Tight governance over the project was the key to completing it in the accelerated timeframe.

“We had a very detailed project plan with a lot of moving parts and we tracked everything in that project plan. If something started to fall behind, we identified it early and created a mitigation for it,” Gioffre explained. “If you have a plan that tracks to this level of detail and you fall behind, unless you have the right level of governance, you can’t execute mitigation quickly enough.”

The consolidation of the various ERPs onto one SAP HANA system was a main goal of the initiative, and it now sets up the Navy to take advantage of next-generation technology.

“The next step is planning a move to SAP S/4HANA and gaining process improvements as we go to that system,” he said.

Proving confidence in the public cloud

It’s not a particular revelation that public cloud hyperscalers like AWS can handle huge government workloads, but it is notable that the Department of Defense is confident in going to the cloud, according to analyst Joshua Greenbaum, principal at Enterprise Applications Consulting, a firm based in Berkeley, Calif.

“The glitches that happened with Amazon recently and [the breach of customer data from Capital One] highlight the fact that we have a long way to go across the board in perfecting the cloud model,” Greenbaum said. “But I think that SAP and its competitors have really proven that stuff does work on AWS, Azure and, to a lesser extent, Google Cloud Platform. They have really settled in as legitimate strategic platforms and are now just getting the bugs out of the system.”

Greenbaum is skeptical that the project was “easy,” but it would be quite an accomplishment if it were done relatively painlessly.

“Every time you tell me it was easy and simple and painless, I think that you’re not telling me the whole story because it’s always going to be hard,” he said. “And these are government systems, so they’re not trivial and simple stuff. But this may show us that if the will is there and the technology is there, you can do it. It’s not as hard as landing on the moon, but you’re still entering orbital space when you are going to these cloud implementations, so it’s always going to be hard.”


New Skype features boost your productivity and enrich your chat experience | Skype Blogs

We recently introduced several features that help you boost your productivity when sending messages in Skype* and enrich your overall chat experience. New features include draft messages, the ability to bookmark messages and preview media and files before sending, as well as a new approach to display multiple photos or videos. We also launched split window, so you never mix up conversations again!

Message drafts

Now you’ll never forget about messages that didn’t get sent. Any message that you typed, but didn’t send, is saved in the corresponding conversation and marked with the [draft] tag—so you can easily recognize, finish, and send it later. Messages saved as drafts are even available when you leave and come back to your Skype app.

Message bookmarks

You can now bookmark any message in Skype—whether it’s work related or family photos—and come back to it with one click or tap anytime! Just right click or long press the message and click or tap Add bookmark. The message is added to the Bookmarks screen and is saved with your other bookmarked messages.

Preview media and files before sending

You can now preview photos, videos, and files that you’ve selected to share before sending. Once you select media and files to share, they’re displayed in the message panel, so you can ensure they’re the ones you want to share with your contact. You can also remove ones added by mistake or add new ones right from the panel. In addition, should you want to write an explanation or description for what you’re sending, you can add a message that will be sent along with the files.

New approach for displaying multiple photos or videos sent at once

If you want to share a bunch of photos with your friends or family after a great vacation or a nice event, just do it and Skype will make sure they’re nicely presented in the conversation. You’ll see an album in the chat history with all the photos combined, and you can view each one by navigating and clicking between the photos or videos in the album.

Never mix up conversations in Skype again with split window

A few months back, we announced the launch of split window for Windows 10, which lets you put your contact list in one window and each conversation you open in a separate window. We’re pleased to say that this feature is now available on all versions of Windows, Mac, and Linux in the latest version of Skype.* To learn more about how to use the split window view, visit our FAQs.

Let us know what you think

At Skype we’re driven by the opportunity to connect our global community of hundreds of millions, empowering them to feel closer and achieve more together. As we pursue these goals, we’re always looking for new ways to enhance the experience and improve quality and reliability. We listen to your feedback and are wholly committed to improving the Skype experience based on what you tell us. We’re passionate about bringing you closer to the people in your life—so if we can do that better, please let us know.

*These new features are available on the latest version of Skype across all platforms, except for split window, which is currently only available on desktop.

Author: Microsoft News Center

For Sale – DDR3 Laptop and Desktop Memory, various sizes

Hi guys
I have several sets of laptop memory and one set of desktop memory for sale.
Delivery will be by Royal Mail.
Would prefer PayPal gift; bank transfer is possible as well.

Lot #1
Kingston KTH-X3CL laptop DDR3 16GB (2x8GB) 1.35V PC3-12800 (1600)
Price: £105

This was bought for a laptop upgrade that never happened. One stick is sealed and the other is open but never used.
Spec is here:
Kingston HP KTH-X3CL/8G 8GB DDR3L 1600Mhz Non ECC Memory RAM SODIMM
Delivery by RM Special Delivery Guaranteed by 1pm® (up to £500 compensation)

Lot #2
Corsair CMSO16GX3M2C1600C11 Value Select laptop DDR3 16GB (2x8GB) 1.35V PC3-12800 (1600)
Fully tested; I don’t think it was used much.
Price: £95

Spec is here:
Corsair Memory — 16GB DDR3L SODIMM Memory
Delivery by RM Special Delivery Guaranteed by 1pm® (up to £500 compensation)

Lot #3
Samsung M471B5173QH0-YK0 laptop DDR3 8GB (2x4GB) 1.35V (1.5V) PC3L-12800 (1600)
Fully tested
Price: £28

Spec is here:
M471B5173QH0-YK0 | SODIMM | Samsung Module | Semiconductor
Delivery by RM Signed For 1st Class (up to £50 compensation)

Lot #4
Samsung laptop DDR3 8GB (2x4GB) 1.35V (1.5V) PC3L-12800 (1600)
One module is M471B5173QH0-YK0 and the other is M471B5173CH0-YK0
Fully tested
Price: £28
Delivery by RM Signed For 1st Class (up to £50 compensation)

Lot #5
Desktop DDR3 16GB (4x4GB) PC3-12800U (1600)
Fully tested; these were in my desktop. I upgraded to 32GB of RAM and have no use for them any more.
Not sure about the voltage, but all 4 worked fine in an HP desktop.
Price: £55

Two Elpida (EDJ4208EFBG-GN-F) modules, apparently made by Micron
Spec is here:
EDJ4208EFBG-GN-F
and two Hynix (HMT351U6EFR8C) modules.
Spec is here:
HMT351U6EFR8C < PRODUCTS < SK Hynix
Delivery by RM Signed For 1st Class (up to £50 compensation)

Price and currency: outlined in the post
Delivery: Delivery cost is included within my country
Payment method: PayPal Gift or BT
Location: Manchester
Advertised elsewhere?: Not advertised elsewhere
Prefer goods collected?: I have no preference
