An interactive demo showing seamless control of multiple computers. The demo consists of one host app and one client app; on KDE, there is also a system tray widget to control it.

I'm currently working on connecting a second laptop (a Windows machine), and I have ideas for USB capture, webcam capture, and streaming to phones and smart TVs. I use this setup daily and it's quite stable.
ScreenLink turns your laptops into extra monitors for your Linux desktop — and gives you remote control of each machine from a single keyboard and mouse. No special hardware, no dongles, no proprietary software. Just your existing computers on the same network.
You sit at your Linux desktop. Your MacBook to the right becomes your second screen. Your Windows laptop to the left becomes your third. Drag windows across all three. When you need to do something on the Mac itself — install an update, check a setting — you click one button and you're controlling the Mac from your Linux keyboard. Click again and you're back to extended screen mode.
Multi-monitor setups are great until you travel, work from home with different machines, or simply don't want to buy dedicated monitors. Most people already own a laptop or two collecting dust on the desk. ScreenLink makes them useful.
Existing solutions either cost money (Duet Display, Luna Display), require specific hardware (DisplayLink adapters), only work within one OS ecosystem (Sidecar for Mac-only, Miracast for Windows-only), or are clunky research projects that never quite work. Nothing existed that was free, cross-platform, and actually let you extend a Linux desktop to both macOS and Windows machines simultaneously.
The architecture is deceptively simple — it chains together proven open-source tools rather than reinventing screen capture and streaming from scratch.
The core insight came from a 10-year-old forum post: NVIDIA GPUs expose disconnected DisplayPort outputs that can be force-enabled via nvidia-settings. By setting a ConnectedMonitor option in the Xorg config and using ModeValidation to bypass EDID checks, Linux treats a phantom DP output as a real monitor. The desktop compositor (KDE Plasma) extends the desktop to it, windows can be dragged there, and everything behaves exactly like a physical second screen.
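A minimal sketch of what that Xorg configuration looks like; the output name (`DP-0`), the `ModeValidation` tokens, and the device identifier are illustrative and depend on your GPU and which phantom output you target:

```
Section "Device"
    Identifier     "NvidiaCard"
    Driver         "nvidia"
    # Pretend a monitor is attached to the disconnected DP-0 output
    Option         "ConnectedMonitor" "DP-0"
    # Relax EDID-based checks so modes are accepted without a real display
    Option         "ModeValidation" "DP-0: NoEdidModes, AllowNonEdidModes, NoMaxPClkCheck"
EndSection
```

After a restart of the X session, `xrandr` should list the phantom output as connected with a full mode list.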
The content of this virtual display is captured by x0vncserver (from TigerVNC), which serves only the clipped region corresponding to the virtual monitor. A noVNC WebSocket proxy bridges this to the browser, and the client machine simply opens a full-screen browser tab pointing to the noVNC endpoint over HTTPS.
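As an invocation sketch (ports, paths, and the geometry below are placeholders for your layout), the capture-and-stream chain looks roughly like this, assuming the virtual monitor is 1920×1080 and sits to the right of a 1920-wide physical screen:

```
# Serve only the virtual monitor's region of the X display
x0vncserver -display :0 -rfbport 5901 \
    -Geometry 1920x1080+1920+0 \
    -PasswordFile ~/.vnc/passwd

# Bridge VNC to the browser over WSS via noVNC's websockify
websockify --web=/usr/share/novnc \
    --cert=server.pem --key=server-key.pem \
    6080 localhost:5901
```

The client machine then opens `https://<linux-host>:6080/vnc.html` in a fullscreen browser tab.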
The result: the MacBook's browser becomes a second monitor. Drag a terminal window to the right edge of your Linux screen and it appears on the Mac. Fullscreen a video there and it plays. The latency over a wired LAN is low enough for productive work — not gaming, but code editors, dashboards, documentation, and video calls work well.
When you click "Remote Desktop" in the control widget, the system flips the direction. The Mac's browser closes, and a new browser instance launches on the Linux machine's virtual display. This browser connects to the Mac's built-in VNC server (Screen Sharing) through another noVNC proxy, with credentials passed automatically.
Since this browser window lives on the virtual display — the same one the Mac was previously showing as an extended screen — the Mac now displays its own desktop through the VNC chain. The Linux keyboard and mouse drive the VNC session, giving full control of the Mac without touching the MacBook's keyboard or trackpad.
Switching back is instant: the remote browser closes, the Mac's extended-screen browser reopens, and you're back to using the Mac as a monitor.
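A hypothetical sketch of the flip on the Linux side (hostnames, ports, window title, and geometry are placeholders; noVNC's URL parameters such as `autoconnect` are used here for illustration):

```
# Launch a kiosk browser on the virtual display, pointed at the
# noVNC proxy that fronts the Mac's Screen Sharing (VNC) server
DISPLAY=:0 chromium --kiosk \
    --window-position=1920,0 \
    "https://linux-host.local:6081/vnc.html?autoconnect=true" &

# Nudge the window onto the virtual monitor and fullscreen it
wmctrl -r "noVNC" -e 0,1920,0,1920,1080
wmctrl -r "noVNC" -b add,fullscreen
```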
A Python WebSocket server acts as the control plane. A KDE Plasma widget in the system tray connects to it, showing connection status and mode toggle buttons. When the user clicks "Remote", the server:
- Signals the Mac browser to close (via WebSocket)
- Launches a browser on the Linux virtual display pointing to the Mac's VNC
- Positions and fullscreens it using `wmctrl`
Switching back reverses the process. The same WebSocket server also powers a web-based control panel as a fallback.
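To illustrate the control-plane logic (this is not the actual ScreenLink source), here is a minimal sketch of the mode toggle as a pure state machine. The command strings are hypothetical placeholders; returning them instead of executing them keeps the switching logic testable without browsers or sockets:

```python
# Hypothetical sketch of the control server's mode toggle.
# It returns the commands the server would run for each transition.

EXTEND = "extend"   # Mac browser mirrors the virtual display
REMOTE = "remote"   # Linux-side browser drives the Mac via VNC

class ModeController:
    def __init__(self):
        self.mode = EXTEND

    def toggle(self):
        """Flip between extended-screen and remote-desktop mode,
        returning the (placeholder) commands for the transition."""
        if self.mode == EXTEND:
            self.mode = REMOTE
            return [
                "ws-send mac close-browser",          # tell the Mac browser to exit
                "launch-browser --display :0",        # open VNC viewer on the virtual display
                "wmctrl -r noVNC -b add,fullscreen",  # fullscreen it there
            ]
        self.mode = EXTEND
        return [
            "pkill -f novnc-viewer",       # close the Linux-side viewer
            "ws-send mac open-browser",    # reopen the Mac's extend view
        ]
```

In the real server these commands would be dispatched over the WebSocket connection and via `subprocess`, but the state transitions are the interesting part.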
All WebSocket and HTTP connections use TLS with certificates generated by mkcert, which installs a local CA trusted by both Linux and macOS browsers. The Mac VNC connection passes credentials through noVNC's URL parameters. No certificate warnings, no manual trust steps after initial setup.
- Virtual display: NVIDIA `ConnectedMonitor` + `ModeValidation` Xorg options
- Screen capture: `x0vncserver` (TigerVNC) with `-Geometry` clipping
- Streaming: `noVNC` + `websockify` over WSS
- Client: standard web browser in kiosk/fullscreen mode
- Control server: Python + `websockets` library
- Window management: `wmctrl` for positioning browsers on virtual displays
- Desktop integration: KDE Plasma 6 widget (QML + WebSocket)
- TLS: `mkcert` for locally-trusted certificates
- Mac integration: SSH + AppleScript for browser lifecycle management
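The Mac-side browser lifecycle can be driven entirely over SSH. A hypothetical sketch (the hostname, user, browser, and URL are placeholders):

```
# Open a fullscreen view of the Linux virtual display on the Mac
ssh user@mac.local 'open -a Safari "https://linux-host.local:6080/vnc.html?autoconnect=true"'

# Close it again via AppleScript when switching to remote mode
ssh user@mac.local "osascript -e 'tell application \"Safari\" to quit'"
```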
This was the hardest part by far. Creating a virtual monitor on Linux sounds like it should be simple — it isn't.
The first attempt used modprobe dummy to load a kernel module. That module creates a dummy network interface, not a virtual video device. Hours wasted.
The second attempt used xf86-video-dummy with an Xorg config file. The config replaced the primary GPU driver instead of adding a secondary one. Black screen on login. Had to recover from a TTY.
The third attempt used xrandr --setmonitor to create a logical monitor in the extended framebuffer. The X11 root window expanded to 3840 pixels wide, but KDE refused to extend the desktop there. Windows couldn't be dragged past the physical screen edge. The area existed in the framebuffer but was dead space.
The fourth attempt used xrandr --addmode on disconnected DisplayPort outputs. Failed with BadMatch because the NVIDIA proprietary driver doesn't expose outputs through standard xrandr.
The fifth attempt used nvidia-settings --assign CurrentMetaMode to force-enable a DP output. The command was accepted but xrandr still showed the output as disconnected.
The sixth attempt — the one that finally worked — combined NVIDIA's ConnectedMonitor option in an Xorg config with ModeValidation to bypass EDID checks. After a re-login, xrandr showed DP-0 connected with a full list of modes. KDE recognized it as a real second monitor. Windows could be dragged there. It just worked.
The project started on KDE Wayland, which is the default on Arch Linux. Screen capture under Wayland is fundamentally restricted — apps can't capture other apps without portal-based permission flows. x11vnc refused to start. xcap returned empty frames. Every screen capture tool assumed X11.
The solution was pragmatic: switch to KDE on X11. For a screen extender that needs continuous, low-latency capture of specific display regions, X11 is the right choice. Wayland's security model is designed to prevent exactly what this tool needs to do.
When implementing Remote Desktop mode, the first approach was to show the Mac's screen in a noVNC window on the Mac itself. The VNC server captures the screen, which includes the VNC viewer showing the screen, which includes the VNC viewer... recursive mirrors filling the display.
The solution was to put the VNC viewer on the Linux side instead, specifically on the virtual display. The Mac browser shows whatever is on the virtual display. The virtual display shows a browser connected to the Mac's VNC server. The Mac sees its own desktop — but through the Linux virtual display, not through a local viewer. No recursion because the VNC viewer isn't on the Mac's screen.
Self-signed certificates need to be manually trusted in every browser, on every port, on every machine. During development this meant clicking through "Advanced → Proceed" dozens of times per day, with browsers randomly forgetting the exception.
mkcert solved this permanently: one command creates a local Certificate Authority, installs it in the system trust store, and generates certificates signed by that CA. Both Linux and Mac browsers trust the certs immediately. No warnings, no manual steps, no expiry surprises.
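The whole setup is two commands (hostnames and IP below are examples for your network):

```
# One-time: create a local CA and install it in the system trust store
mkcert -install

# Issue a certificate covering every name the proxies listen on
mkcert linux-host.local mac.local 192.168.1.50
# -> linux-host.local+2.pem / linux-host.local+2-key.pem,
#    which websockify consumes via --cert and --key
```

The generated CA also needs to be trusted on the Mac (run `mkcert -install` there, or import the CA root), after which both sides connect over HTTPS/WSS without warnings.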
Building ScreenLink was an exercise in discovering that the "simple" version of a problem is never the real problem.
Screen capture is solved. VNC is solved. WebSocket proxying is solved. Browser-based remote desktops are solved. But composing these solved pieces into a system that creates a virtual monitor, streams only that monitor's content, and seamlessly switches between extension and remote control modes — that required understanding X11 internals, GPU driver behavior, compositor window management, browser security models, and cross-platform SSH automation.
The final system has zero custom screen capture code, zero custom streaming protocols, and zero custom rendering. It's entirely an orchestration layer over existing tools. But knowing which tools to chain together, and how to make them cooperate — that was the actual engineering.