
What is a Wireless Presentation System? Part 1.

At its core, a wireless presentation system gives a computer, tablet or smartphone the ability to share the contents of its screen with another display, without a physical video connection between the two. The purpose of such a system is to let clients (users) connect their devices to the display without the restrictions and compatibility issues inherent in a cabled connection. For example, one MacBook Pro might have Mini DisplayPort as its video interface, while another employs USB-C.

The three building blocks of every screen-sharing wireless presentation system: encode, transport and decode.

To deliver video from a client device to a display wirelessly, a wireless presentation system must do three things: 1) package the video into a transportable data stream (encode); 2) deliver that stream to the receiver (transport); and 3) turn the stream back into video the display understands (decode). How these three functions are implemented determines the system architecture and, in turn, the features and benefits of the complete system.
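The three-stage pipeline above can be illustrated with a deliberately simplified sketch. Real systems use video codecs (such as H.264) and network or RF protocols; here zlib compression stands in for the encoder and an in-memory pass-through stands in for the transport link, purely to show the round trip.

```python
# Toy illustration of the encode -> transport -> decode pipeline.
# zlib is a stand-in for a real video codec; "transport" is a stand-in
# for the wireless link between transmitter and receiver.
import zlib

def encode(frame: bytes) -> bytes:
    """Package raw pixel data into a compact, transportable form."""
    return zlib.compress(frame)

def transport(payload: bytes) -> bytes:
    """Deliver the payload to the receiver (here, a trivial pass-through)."""
    return payload

def decode(payload: bytes) -> bytes:
    """Recover displayable video data from what was received."""
    return zlib.decompress(payload)

frame = b"\x00\x10\x20" * 1000           # stand-in for one raw video frame
received = decode(transport(encode(frame)))
assert received == frame                  # the display gets the original picture
```

However the three stages are distributed between dongles, apps and receivers, every architecture discussed below performs this same round trip.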

Developers and manufacturers have taken essentially three different approaches to building the current crop of wireless presentation systems. For the purposes of this discussion we can refer to them as 1) Hardware, 2) Hybrid and 3) Software, according to how the encoding and decoding are performed. Most commercially available wireless presentation systems fall into one of these three architectural categories.

Hardware-based systems employ a hardware “transmitter” for point-to-point wireless video transmission between device and display.

Systems based on Hardware architecture employ hardware encoders and dedicated hardware decoders, with proprietary transport protocols. Hybrid systems use a software encoder running on the client device and a dedicated hardware decoder at the far end, usually relying on TCP/IP for signal transport. Software-based systems use software encoding and decoding, requiring apps to run on client and server devices, while transporting signals over the LAN/WLAN.

The Hardware system is characterized by the use of an adapter (commonly known as a “dongle”) that plugs into the client device. There are currently two types of such adapters: HDMI and USB. The HDMI adapter includes encoding and transmission capabilities on-board and effectively converts the digital HDMI stream into data that is broadcast to the hardware receiver. The receiver decodes the data into video and sends it to the display. This version of a Hardware-based WPS is the most client-agnostic and effectively operates like a “virtual” HDMI cable. It does, however, require that the device have a compatible HDMI output.

The other type of Hardware system employs a USB connection to the client, which the OS sees as an external display. As such, the user of the client device can choose to mirror or extend their main display and even change resolution and other display settings. This approach is more flexible than the HDMI adapter but still depends on a physical USB port on the client device.

It should be noted that Hardware-based WPS do not use the facility’s Wi-Fi or wired infrastructure for signal transport. Instead, they use proprietary RF links between encoder and decoder, which may or may not be based on Wi-Fi. This stand-alone quality of Hardware systems can be a key selling point for applications where access to the network is not possible or desirable.

Hybrid systems use the Wi-Fi radio already inside the device to transport the signal directly to the hardware receiver.

Equally popular are Hybrid systems where the encoder runs as an app on the client device and the decoder resides in dedicated hardware attached to a display. Software encoding is accomplished using the client device’s processor as opposed to a separate piece of hardware, eliminating the need for physical ports on the client device, but also increasing processing demands. This approach allows a more cohesive workflow between devices that have physical ports (computers) and those that do not (tablets, smartphones).

The encoding on Hybrid systems is addressed in two ways – either through native streaming protocols, such as AirPlay and Miracast, or through a vendor-specific encoder. Support for native protocols broadens the user base because it doesn’t require the user to install an app. On the other hand, app-based encoding often bundles additional features, such as password-restricted access and multi-user management. Some Hybrid systems support both native and app-based encoding in the same system at the same time, further broadening compatibility, while retaining features for specific use-cases.

A common deployment of Hybrid systems is to route all signals over the WLAN so users continue to have access to network resources in addition to the presentation system.

Hybrid systems use the existing network infrastructure to transport the encoded signal from the device to the decoder. Most, but not all, Hybrid systems offer three signal transport options, from stand-alone operation to full network integration. In general, the hardware component can integrate with the LAN over an Ethernet drop or the WLAN through a client session, making the device accessible to all network users. In addition, some Hybrid systems can act as an Access Point or a virtual AP, allowing users to make a Wi-Fi connection independent of the facility’s network.
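The three transport options described above can be summarized in a short sketch. The mode names and the helper function are illustrative assumptions for this article, not any vendor’s actual API; the point is simply who can reach the receiver under each deployment.

```python
# Hypothetical model of the three transport options a Hybrid receiver
# might offer; names are illustrative, not a real product's API.
from enum import Enum

class TransportMode(Enum):
    LAN = "wired Ethernet drop into the facility network"
    WLAN_CLIENT = "receiver joins the facility Wi-Fi as a client"
    ACCESS_POINT = "receiver runs its own AP, independent of the network"

def reachable_by(mode: TransportMode) -> str:
    """Who can discover and connect to the receiver in each mode."""
    if mode in (TransportMode.LAN, TransportMode.WLAN_CLIENT):
        return "all users on the facility network"
    return "only clients joined directly to the receiver's Wi-Fi"
```

The trade-off is visible in the last branch: Access Point mode works without any network integration, but users connected to it may lose access to other network resources unless the receiver bridges traffic.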

Software-based systems require no specialized hardware, using software for the codec and the existing network for transport.

Lastly, Software-based systems do away with dedicated hardware altogether and replace it with a software decoder running natively on a PC. The client device runs a software encoder, either a vendor-specific app or a native streaming protocol, and uses the existing network to transport the signal. This type of solution is available commercially as a system or, in the case of Windows 10, is built into the OS. These systems allow for WPS deployment without hardware acquisition; they do, however, require dedicating a suitable host machine as the receiver.

Let’s review the three architectures, focusing on the benefits and challenges of each.

Hardware systems are fast and easy to deploy because they require no software installation or network integration and work with almost any supported OS version. They do, however, require that the device have a physical USB (or HDMI) port; tablets, smartphones and PCs without physical ports require a software encoding solution.

Hybrid systems are hardware-agnostic and work on most devices, sometimes without installing an app. Plus, they are generally more cost-effective since there is no adapter (dongle) to connect to the client. However, since they use an existing wired or wireless network for transport, they can be more complicated to install, discover and connect with.

Software systems are easily deployable at scale because no new hardware is introduced to the installation. As long as there’s an available host connected to the display, clients can connect using software encoding over the LAN they’re already on. Their biggest drawback is that they require the dedicated use of a host, which can be costly or impractical.

Most of the current products in the wireless presentation system segment are built around one of the three system architectures described. Which solution is right for your (or your clients’) application depends on a variety of factors, including design complexity, ease of configuration, client usability, data security, device support and cost. Thoughtful selection of the right architecture first, before brand or model, will likely lead to a more positive outcome in the deployment of a wireless presentation system.

Read the second part of this article for more “hands-on” details about deployment, configuration and troubleshooting of wireless presentation systems.



Written by Costa Lakoumentas.

About the author: Costa has been part of the audio, video and multimedia industry since 1981 in roles that include system designer, integrator, consultant, product developer and manufacturer. He has been associated with several professional brands, developed over 130 products, holds 11 patents and is the founder and CEO of KLIK Communications.


