KLIK Team

What is a Wireless Presentation System? Part 1.


At its core, a wireless presentation system gives a computer, tablet or smartphone the ability to share the contents of its screen with another display, without a physical video connection between the two. The whole point of such a system is to let clients (users) connect their devices to the display without the restrictions and compatibility issues inherent in a cabled connection. For example, one MacBook Pro might have Mini DisplayPort as its video interface, while another uses USB-C.


The three building blocks of every screen-sharing wireless presentation system: encode, transport and decode.

To deliver video from a client device to a display wirelessly, a wireless presentation system must do three things: 1) package the video into a transportable bitstream (encode); 2) deliver that bitstream to the receiver (transport); and 3) turn the bitstream back into a video signal the display understands (decode). How these three functions are implemented determines the system architecture and, in turn, the features and benefits of the complete system.
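To make that pipeline concrete, here is a minimal sketch of the three stages in Python. The Frame class and the encode, transport and decode functions are purely illustrative stand-ins of our own invention (no real codec or radio link is involved); they only show how data flows from client to display.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """One captured screen frame (raw pixels)."""
    width: int
    height: int
    pixels: bytes

def encode(frame: Frame) -> bytes:
    """Package the raw frame into a transportable bitstream."""
    # A real system would use a hardware or software video codec (e.g. H.264);
    # here a tiny header stands in for the compressed payload.
    return f"{frame.width}x{frame.height}:".encode() + frame.pixels

def transport(bitstream: bytes) -> bytes:
    """Deliver the bitstream to the receiver (proprietary RF link or TCP/IP)."""
    # Returning the bytes unchanged models a lossless delivery.
    return bitstream

def decode(bitstream: bytes) -> Frame:
    """Turn the received bitstream back into a frame the display understands."""
    header, _, pixels = bitstream.partition(b":")
    width, height = (int(v) for v in header.split(b"x"))
    return Frame(width, height, pixels)

if __name__ == "__main__":
    original = Frame(width=2, height=2, pixels=bytes(16))
    shown = decode(transport(encode(original)))
    assert shown == original  # what the client sends is what the display shows
```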

Developers and manufacturers have taken essentially three different approaches to building the current crop of wireless presentation systems. For the purposes of this discussion we can refer to them as 1) Hardware, 2) Hybrid and 3) Software, according to how the encoding and decoding are performed. Most commercially available wireless presentation systems fall into one of these three architectural categories.

Hardware-based systems employ a hardware “transmitter” for point-to-point wireless video transmission between device and display.

Systems based on Hardware architecture employ hardware encoders and dedicated hardware decoders, with proprietary transport protocols. Hybrid systems use a software encoder running on the client device and a dedicated hardware decoder at the far end, usually relying on TCP/IP for signal transport. Software-based systems use software encoding and decoding, requiring apps to run on client and server devices, while transporting signals over the LAN/WLAN.
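The comparison can be boiled down to plain data. The sketch below simply restates the three architectures in Python; the wording of each field paraphrases the descriptions above and is not tied to any particular vendor.

```python
# Summary of the three architectures; a paraphrase, not a vendor specification.
ARCHITECTURES = {
    "Hardware": {
        "encode": "dedicated transmitter dongle (HDMI or USB)",
        "transport": "proprietary point-to-point RF link",
        "decode": "dedicated hardware receiver",
    },
    "Hybrid": {
        "encode": "software app (or native protocol) on the client device",
        "transport": "TCP/IP over the LAN/WLAN",
        "decode": "dedicated hardware receiver",
    },
    "Software": {
        "encode": "software app on the client device",
        "transport": "LAN/WLAN",
        "decode": "software running on the receiving device",
    },
}

for name, stages in ARCHITECTURES.items():
    print(f"{name:8s} | " + " | ".join(f"{k}: {v}" for k, v in stages.items()))
```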


The Hardware system is characterized by the use of an adapter (commonly known as a “dongle”) that plugs into the client device. There are currently two types of such adapters – HDMI and USB. The HDMI adapter includes encoding and transmission capabilities on-board and converts the digital HDMI stream into data that is broadcast to the hardware receiver. The receiver decodes the data into video and sends it to the display. This version of a Hardware-based WPS is the most client-agnostic and effectively operates like a “virtual” HDMI cable. It does, however, require that the device have a compatible HDMI output.


The other type of Hardware system employs a USB adapter that the client’s operating system sees as an external display. As such, the user of the client device can choose to mirror or extend their main display and even change resolution and other display settings. This is more flexible than the HDMI adapter approach but still depends on a physical USB port on the client device.


It should be noted that Hardware-based WPS do not use the facility’s Wi-Fi or wired infrastructure for signal transport. Instead, they use a proprietary RF link between encoder and decoder, which may or may not be based on Wi-Fi technology. This stand-alone operation can be a key selling point for applications where access to the network is not possible or desirable.

Hybrid systems use the Wi-Fi radio already inside the device to transport the signal directly to the hardware receiver.

Equally popular are Hybrid systems where the encoder runs as an app on the client device and the decoder resides in dedicated hardware attached to a display. Software encoding is accomplished using the client device’s processor as opposed to a separate piece of hardware, eliminating the need for physical ports on the client device, but also increasing processing demands. This approach allows a more cohesive workflow between devices that have physical ports (computers) and those that do not (tablets, smartphones).


The encoding on Hybrid systems is addressed in two ways – either through native streaming protocols, such as AirPlay and Miracast, or through a vendor-specific encoder. Support for native protocols broadens the user base because it doesn’t require the user to install an app. On the other hand, app-based encoding often bundles additional features, such as password-restricted access and multi-user management. Some Hybrid systems support both native and app-based encoding in the same system at the same time, further broadening compatibility while retaining features for specific use cases.
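As an illustration of how native and app-based sessions can coexist on one receiver, here is a hypothetical sketch. The HybridReceiver class, its session kinds and the room-password check are assumptions made up for the example; they do not describe the behaviour of any specific product.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Session:
    user: str
    kind: str                      # "airplay", "miracast" or "vendor_app"
    password: Optional[str] = None

class HybridReceiver:
    """Accepts native-protocol and app-based sessions side by side."""

    def __init__(self, room_password: Optional[str] = None, max_users: int = 4):
        self.room_password = room_password
        self.max_users = max_users
        self.active: List[Session] = []

    def accept(self, session: Session) -> bool:
        """Admit a session if there is room; app sessions must also pass the password check."""
        if len(self.active) >= self.max_users:
            return False
        # Password-restricted access is modelled as an app-side feature here;
        # native AirPlay/Miracast sessions are admitted without it.
        if session.kind == "vendor_app" and self.room_password is not None:
            if session.password != self.room_password:
                return False
        self.active.append(session)
        return True

receiver = HybridReceiver(room_password="1234")
print(receiver.accept(Session("alice", "airplay")))                    # True
print(receiver.accept(Session("bob", "vendor_app", password="1234")))  # True
print(receiver.accept(Session("eve", "vendor_app", password="0000")))  # False
```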

A common deployment of Hybrid systems is to route all signals over the WLAN so users continue to have access to network resources in addition to the presentation system.

Hybrid systems use the existing network infrastructure to transport the encoded signal from the device to the decoder. Most, but not all, Hybrid systems offer three signal transport options, ranging from stand-alone operation to full network integration. In general, the hardware component can integrate with the LAN over an Ethernet drop or with the WLAN through a client session, making the device accessible to all network users. In addition, some Hybrid systems can act as an access point or a virtual AP, allowing users to make a Wi-Fi connection that is independent of the facility’s network.
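Those three deployment options can be pictured as a simple configuration choice. The sketch below models them in Python; the mode names, fields and SSIDs are assumptions for illustration only, not a real product’s settings.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class TransportMode(Enum):
    LAN_ETHERNET = "Ethernet drop into the facility LAN"
    WLAN_CLIENT = "client session on the facility WLAN"
    STANDALONE_AP = "built-in or virtual access point, no facility network"

@dataclass
class ReceiverNetworkConfig:
    mode: TransportMode
    ssid: Optional[str] = None     # relevant for WLAN_CLIENT and STANDALONE_AP
    on_facility_network: bool = True

    def describe(self) -> str:
        where = f" (SSID: {self.ssid})" if self.ssid else ""
        return f"{self.mode.name}: {self.mode.value}{where}"

deployments = [
    ReceiverNetworkConfig(TransportMode.LAN_ETHERNET),
    ReceiverNetworkConfig(TransportMode.WLAN_CLIENT, ssid="FacilityWiFi"),
    ReceiverNetworkConfig(TransportMode.STANDALONE_AP, ssid="MeetingRoom-1",
                          on_facility_network=False),
]
for cfg in deployments:
    print(cfg.describe())
```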