
Search results for 'Technology' - Page: 5
PC World - 19 Sep (PC World)
At a glance: Expert’s Rating
Pros
Attractive design with compact stand
Good range of video, USB-C, USB-A connectivity
High SDR and HDR brightness
Outstanding motion clarity at 1080p/330Hz
Cons
USB-C only supports 15 watts of power delivery
Extremely glossy display finish
Only 165Hz refresh rate at 4K
Our Verdict
The Asus ROG Strix OLED XG32UCWG provides great motion clarity with solid brightness for an OLED panel, and the price is right.
Want a monitor with great motion clarity, OLED image quality, and a contrast-rich finish that can also double as a mirror when the monitor is turned off? The Asus ROG Strix OLED XG32UCWG might be for you.
It goes all-in on gaming with a dual-mode display that can refresh at up to 330Hz and a TrueBlack Glossy finish that enhances immersion. Those benefits come with downsides but, for many, the pros and cons will net out to be positive.
Read on to learn more, then see our roundup of the best gaming monitors for comparison.
Asus ROG Strix OLED XG32UCWG specs and features
At its core, the Asus ROG Strix OLED XG32UCWG is another 32-inch 4K monitor, but there are a few interesting details on the spec sheet. It uses LG’s WOLED panel, which is a bit less common than Samsung’s QD-OLED. On top of that, it’s a dual-mode display, meaning it offers both 4K and 1080p native resolution modes. In 4K, the refresh rate goes up to 165Hz, but in 1080p it can reach 330Hz.
The monitor also has what Asus calls a TrueBlack Glossy display coat, which allegedly improves perceived contrast. More on that later in the review.
Display size: 31.5-inch 16:9 aspect ratio
Native resolution: 3840×2160 / 1920×1080
Panel type: WOLED
Refresh rate: 165Hz / 330Hz (in 1080p mode)
Adaptive sync: Yes, AMD FreeSync Premium Pro & G-Sync Compatible
HDR: Yes, HDR10, VESA DisplayHDR 400 True Black
Ports: 2x HDMI 2.1, 1x DisplayPort 1.4, 1x USB-C upstream with DisplayPort Alternate Mode and 15 watts of power delivery, 1x 3.5mm headphone jack, 1x USB-B 3.2 Gen 1 upstream, 3x USB-A 3.2 Gen 1 downstream
Audio: None
Additional features: Proximity sensor, dual-mode display
Price: $999 MSRP / $899 initial retail
The monitor also provides a fair bit of connectivity, including USB-C with DisplayPort and three downstream USB-A ports. That means it works well as a USB hub. There’s also a proximity sensor—a new feature starting to appear in some OLED monitors—meant to reduce image retention by automatically turning off the display when you move away.
Asus ROG Strix OLED XG32UCWG design
The design of the Asus ROG Strix OLED XG32UCWG is quite reserved from the front, with slim black bezels on all sides. The only notable distinction is the glowing red ROG logo on the bottom bezel, which also houses the proximity sensor.
Flip it around and the monitor looks a bit more distinctive, with a large RGB-lit Asus ROG logo and the visually interesting two-tone black look common to many ROG monitors. It’s clearly a gaming monitor, but it leans toward the more subtle end of typical gaming monitor design.
Looks aside, the design is practical. The monitor ships with an ergonomic stand that has an extremely small base. Asus highlights this as a feature, and for good reason, as the small base makes it easier to position the monitor on your desk and minimize its footprint.
The stand supports tilt, swivel, and height adjustment, though its range is a bit limited in some areas. For example, it adjusts only 80mm for height, while some competitors offer 110mm or, in the best case, 130mm. Still, 80mm is fine for most setups. The stand doesn’t support pivoting into portrait orientation and instead can pivot just a few degrees for minor adjustments, though that’s not too unusual for a 32-inch OLED monitor.
Of course, the monitor also provides a 100x100mm VESA mount, so you can attach it to third-party monitor arms or stands to increase its range of adjustment.
Like most OLED monitors, the XG32UCWG uses an external power brick, so you’ll need to place that under your desk. It’s a small brick as these things go, though, and rated at 240 watts.
Asus ROG Strix OLED XG32UCWG connectivity
Past Asus ROG monitors haven’t always stood out for connectivity, but the ROG Strix OLED XG32UCWG offers a good range of options. The video inputs include two HDMI 2.1 ports, one DisplayPort 1.4, and a USB-C port with DisplayPort Alternate Mode, for a total of four inputs. That’s a bit more than the typical three video inputs.
The USB-C port isn’t a complete win, as it only provides power delivery up to 15 watts, which won’t be enough to handle a connected laptop (unless it’s a MacBook Air, maybe, if you’re not running at full load). However, the USB-C port does provide upstream access to three downstream USB-A ports, which is useful. Those USB-A ports can also be accessed through an upstream USB-B connection if you’re using a desktop, in which case you likely won’t be using USB-C.
A few competitors provide better overall connectivity, such as the HP Omen Transcend 32. On the other hand, some rivals like Alienware have recently offered fewer ports, and the Asus is less expensive than the HP. The XG32UCWG sits in a sensible middle ground for people who want decent connectivity without paying too much for it.
Asus ROG Strix OLED XG32UCWG menus, features, and audio
The Asus ROG Strix OLED XG32UCWG’s menu system is controlled by a joystick hidden behind the ROG logo on the bottom bezel. The menu is easy to navigate thanks to clearly labeled options and decently sized text. Alternatively, you can use Asus’ DisplayWidget Center to control monitor settings directly in Windows or macOS. It’s a great option for making quick adjustments and mostly makes the joystick unnecessary—unless you simply prefer to use it.
Asus also provides a few interesting features that might sway some shoppers. The monitor offers significant aspect ratio controls, letting the 32-inch panel behave like a 24.5-inch or 27-inch display. Most people will stick with the default settings—a 32-inch display area, 4K resolution, and 165Hz refresh rate—but you could also run it as a 24.5-inch, 330Hz display for certain esports titles. There’s an OLED anti-flicker mode that can reduce flickering, which OLEDs sometimes exhibit, especially when displaying certain grayscale tones.
Of course, you also get the usual gaming extras like FPS counters and crosshairs. Asus has added some AI branding here, calling it an AI assistant, which means certain features are dynamic. For example, Dynamic Shadow Boost can automatically brighten dark areas of a scene to make enemies easier to spot, without affecting brighter areas. Personally, I rarely use these features, but I can see how they might be helpful if you often rely on crosshairs or shadow boosting for a competitive edge.
Asus is also all-in on screen protection features to prevent OLED burn-in. The monitor has a proximity sensor to automatically turn off the screen when you move away from the monitor, then turn it back on when you return. There’s also a wide range of features that automatically detect scenarios that might cause burn-in, like a bright logo on a dark image (or vice versa), and attempt to compensate. I can’t comment on how effective these features will be long-term, since I only had the monitor for a couple of weeks, but I expect the proximity sensor, at the least, will be helpful.
On top of all this, the monitor provides a good range of image quality adjustment. It includes gamma and color temperature modes that target precise values, not vague presets, plus color calibration. Though not sold as a monitor for creative professionals, it could work in a creative capacity for many people.
Speakers, on the other hand, are absent. That’s a tad disappointing, but it’s common among OLED gaming monitors, as monitor makers typically assume gamers will want to use a headset.
Asus ROG Strix OLED XG32UCWG SDR image quality
The Asus ROG Strix OLED XG32UCWG uses an LG WOLED panel, in contrast to the more common Samsung QD-OLED. WOLED panels tend to have slightly inferior color performance compared with QD-OLED, and the XG32UCWG is no exception. However, its overall SDR performance is extremely good, and WOLED’s color gamut coverage is getting closer to QD-OLED’s.
First up is brightness. The Asus ROG Strix OLED XG32UCWG does well here with a maximum sustained SDR brightness of 286 nits. That’s the second-best result from an OLED, behind only the much more expensive Asus ProArt PA32UCDM.
You may need that brightness, however, due to the TrueBlack Glossy panel finish. This finish is meant to enhance perceived contrast, but it’s also extremely reflective. Indeed, it’s virtually a mirror, as highly distinct full-color reflections are easy to make out even in moderately lit rooms. Because of that, I can only recommend the XG32UCWG for a room with very good light control.
The TrueBlack Glossy finish enhances perceived contrast but not necessarily measured contrast. That’s because all OLED panels hit an effectively infinite contrast ratio anyway, since they can achieve a perfect black level of zero nits.
So, what does better perceived contrast mean in practice? It means that dark areas of the image have an incredibly inky, deep look. This is not because the pixels themselves are dimmer but, rather, because of how light scatters across the display.
I happened to review the Samsung Smart Monitor M9, an OLED panel with a matte finish, just before the XG32UCWG. The difference is stark. The XG32UCWG looks dramatically more contrast-rich and vivid, particularly when viewing high-contrast content. A dark alley in Cyberpunk 2077 is a good example.
However, as just mentioned, the XG32UCWG is highly reflective. The Smart Monitor M9 is not. Personally, I would rather have the Smart Monitor M9’s matte coat than the XG32UCWG’s glossy coat. This, however, is a matter of personal preference. Glossy OLED fans will love the XG32UCWG.
The XG32UCWG’s color gamut results are interesting. It covered 100 percent of sRGB, 96 percent of DCI-P3, and 88 percent of AdobeRGB.
As the graph shows, this puts the XG32UCWG slightly behind the curve for an OLED monitor—as all of the recent 32-inch displays PCWorld has tested were QD-OLED panels. On the other hand, this color gamut is objectively solid, beating most monitors that lack quantum dots.
I think the XG32UCWG’s color gamut is more than adequate for most situations, but if you really want the best color gamut possible, QD-OLED still has the edge.
It’s a similar story in color accuracy. The measured average color error of 0.97 is technically toward the bottom of this group of OLED monitors. However, a color error this low is excellent by any standard, and certainly more than good enough for gaming. It’s also worth mentioning, again, that the XG32UCWG has an unusually wide range of image quality adjustments for a gaming monitor, which means you can do more to calibrate and tune it to your needs than with some competitors.
The monitor’s color temperature and gamma performance were average. I measured a default color temperature of 6600K which, though slightly above the target of 6500K, isn’t going to be noticeable in most cases. The gamma curve was a bit high too, at 2.3 when set to 2.2 (other gamma presets were also high). That means the image looks a bit darker than it should. I find this slightly noticeable compared to a spot-on IPS-LCD display, but most OLED monitors have the same quirk.
Sharpness is a perk, of course, as the monitor’s maximum resolution of 3840×2160 works out to about 140 pixels per inch across the 31.5-inch display. That is identical to other OLED monitors of this size, so there’s no major advantage here. The high resolution, along with improvements to OLED panel technology, largely banishes the sharpness issues of earlier panels. It looks tack-sharp, though no more so than the competition.
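That pixels-per-inch figure is simple to verify from the resolution and screen size. A quick sketch of the standard calculation in Python:

```python
import math

# Pixel density: diagonal resolution in pixels divided by diagonal size in inches.
width_px, height_px = 3840, 2160
diagonal_inches = 31.5

diagonal_px = math.hypot(width_px, height_px)   # ~4406 pixels
ppi = diagonal_px / diagonal_inches

print(f"{ppi:.0f} PPI")   # prints "140 PPI", matching the figure above
```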
The Asus ROG Strix OLED XG32UCWG’s overall SDR image quality is solid, though not exceptional for an OLED monitor. It scores better than most in brightness, though also gives up some ground in color gamut. Contrast is exceptional and perceived contrast is enhanced by the highly glossy display coat, though at the cost of annoying reflections in even moderately lit rooms. Gamers who can get over the highly glossy finish, or prefer it, will find the monitor’s SDR image quality is top-notch.
Asus ROG Strix OLED XG32UCWG HDR image quality
Asus backs up the ROG Strix OLED XG32UCWG’s healthy SDR performance with HDR performance that, while not class-leading, is certainly strong and among the better reasons to buy the monitor.
As the graph shows, the XG32UCWG delivered great HDR brightness across the board. Its HDR brightness maximum was about 807 nits in a 3 percent window, meaning 3 percent of the screen was displaying a bright white HDR image. That’s a solid result.
Subjectively, HDR content looked excellent. The monitor has the brightness, contrast, and color performance required to deliver superb results. Highlights, like explosions in games, were remarkably bright and vivid.
Asus also benefits from providing a good range of HDR adjustments. You can adjust the brightness or turn on the dynamic brightness mode to boost maximum brightness (this mode was used for testing). While these will technically reduce the accuracy of the image, I find they’re almost essential for PC monitors. Most HDR content is mastered on the assumption it will be viewed on a large display with the viewer many feet away, which is not the typical use case for a monitor.
Asus ROG Strix OLED XG32UCWG motion performance
The Asus ROG Strix OLED XG32UCWG is a dual-mode monitor. That means it’s designed to display 4K resolution at up to 165Hz, or 1080p at up to 330Hz. Whether that matters will depend on your priorities.
Personally, I would not play at 1080p to enjoy 330Hz, even though I can notice the improved motion clarity at 330Hz. The reduced sharpness of 1080p on a 32-inch display is just too much. However, highly competitive gamers will likely appreciate the added smoothness and motion clarity of the 1080p/330Hz mode.
The real key is the versatility this can provide. Traditionally, competitive gamers had to opt for lower resolutions to gain high refresh rates. That’s fine in Counter-Strike 2 but less so if a competitive gamer wants to boot up Cyberpunk 2077 in their down-time. The XG32UCWG offers the best of both worlds.
It’s not without sacrifice, however. At this price, you could buy a 4K/240Hz monitor instead. So, you must decide: 4K at 240Hz all the time, or the option to flip between 4K/165Hz and 1080p/330Hz? I would always opt for the first option, but I can see why some would prefer the latter.
Whichever you’d prefer, the XG32UCWG’s motion clarity is excellent. OLED monitors have low pixel response times, which reduces blur and makes the most of their high refresh rates. The XG32UCWG also provides official support for AMD FreeSync Premium Pro and Nvidia G-Sync for smooth frame pacing alongside AMD and Nvidia video cards.
The XG32UCWG also supports Extreme Low Motion Blur (ELMB). This is Asus’ name for a black frame insertion feature, the OLED equivalent of backlight strobing, which inserts black frames between standard frames. Due to quirks of human persistence of vision, this has the effect of reducing perceived motion blur. ELMB reduces brightness, is only available in certain image modes (and at refresh rates up to 165Hz), and can cause a “double image” effect. But, on the plus side, it noticeably increases motion clarity. The XG32UCWG also mitigates some of ELMB’s downsides. It has a bright panel for an OLED, so the reduced brightness with ELMB on is less of a concern, and its double image effect is less apparent than in some other strobing schemes I’ve witnessed.
Overall, the XG32UCWG represents the leading edge of motion clarity and responsiveness in a 32-inch gaming display. The 1080p/330Hz mode is extremely crisp, and the 4K/165Hz isn’t bad, either. I think that, for many, the buying decision will come down to motion clarity. If 1080p/330Hz and ELMB sound rad, the XG32UCWG is a solid choice. If not, a 4K/240Hz QD-OLED is probably the way to go.
Should you buy the Asus ROG Strix OLED XG32UCWG?
The Asus ROG Strix OLED XG32UCWG is a strong contender in the highly competitive battle between 32-inch 4K OLED monitors. Its perks include solid connectivity, a contrast-rich panel, good SDR and HDR performance, and support for dual-mode functionality at 4K/165Hz or 1080p/330Hz.
On the downside, the panel’s extremely glossy surface will prove divisive, its color performance doesn’t quite match QD-OLED, and the monitor is priced to compete with monitors that can provide 4K/240Hz. That last point stings most, in my opinion. If it were my money, I’d opt for the MSI MPG 32URXW. Or, at least, I would if it were in stock at MSRP (it currently isn’t).
Speaking of MSRP, it’s worth mentioning that the XG32UCWG is not too expensive. It carries an MSRP of $999, but Asus says it will be $899 for an “initial period” at launch. That’s very competitive, and the monitor is worth a spot on any 32-inch 4K OLED short list for as long as it stays at or near that price.
The Asus ROG Strix OLED XG32UCWG will appeal most to hardcore gamers who really care about motion clarity, as they’ll see the benefit of the 1080p/330Hz mode. At the same time, competitive gamers can still choose 4K resolution when playing more graphically demanding and immersive titles.
RadioNZ - 19 Sep (RadioNZ)
A group of South Taranaki rangatahi are reconnecting with their culture in a first-of-its-kind collaboration with the Western Institute of Technology.
PC World - 19 Sep (PC World)
If you’re wondering what effect Intel’s blockbuster deal with Nvidia will have on its existing product roadmaps, Intel has one message for you: it won’t.
“We’re not discussing specific roadmaps at this time, but the collaboration is complementary to Intel’s roadmap and Intel will continue to have GPU product offerings,” an Intel spokesman told my colleague, Brad Chacos, earlier today. I heard similar messaging from other Intel representatives.
Nvidia’s $5 billion investment in Intel, as well as Nvidia’s plans to supply RTX graphics chiplets to Intel for use in Intel’s CPUs, have two major potential effects: first, it could rewrite Intel’s mobile roadmap for laptop chips, because of the additional capabilities provided by those RTX chiplets. Second, the move threatens Intel’s ongoing development of its Arc graphics cores, including standalone discrete GPUs as well as integrated chips.
We’re still not convinced that Arc’s future will be left unscathed, in part because Intel’s claim that it will “continue” to have GPU product offerings sounds a bit wishy-washy. But Intel sounds much more definitive on the former point, in that the mobile roadmap that you’re familiar with will remain in place.
So far, Intel’s public roadmap calls for Intel’s “Panther Lake” processor to debut this fall, probably shipping in early 2026. Intel’s been talking about that chip for months and months, and there’s no reason to believe those plans will change. Intel has also publicly disclosed Nova Lake, the next-next-generation mobile processor for laptops, which is also due in late 2026 and will probably enter laptops in early 2027. According to a leaked roadmap from a Spanish PC maker, Wildcat Lake might be a 2026 part, too.
What we’ve been told, however, implies that any work that comes out of the Nvidia-Intel partnership will be additive. Essentially, new products will be added to the roadmap: premium products attached to markets like consumer, gaming, creator, and business.
To me, that sounds like Intel could be adding a premium version or option to its established lines. Remember, we don’t know what Panther Lake or Nova Lake will be designed as. We do know, however, that Meteor Lake, the first-generation Core Ultra chip, was designed with a specific GPU tile. One might imagine that Intel could ship a processor with a GPU tile that could either be Intel’s own Arc chip, or a replacement architected by Nvidia. Whether that would be possible or not is unknown — that’s just speculation.
Though Intel and Nvidia have been working on this partnership for about a year, according to Nvidia chief executive Jensen Huang, we also don’t expect to get that much information about future products anytime soon. Though, with a rabid technology press corps eager to follow up on the question financial reporters didn’t ask during the Nvidia-Intel press conference, who knows what will emerge?
PC World - 19 Sep (PC World)
Google is tying Gemini and Chrome closer together, allowing Gemini broader access to your Chrome tabs while quietly turning the address bar into an entry point for its AI Mode. Eventually, it’ll add agentic browsing to Chrome as well.
The latter point is likely what Google wants to signal to the broader market, since keeping pace with (or surpassing) other browser makers deploying agentic AI is seen as a leadership move. But agentic browsing will only debut in the coming months, while Gemini’s tighter integration with Chrome begins arriving today.
A day ago, you could type “best laptops” in Microsoft Edge on your PC and receive a summary of Copilot’s findings above a list of links. In Chrome, provided that you didn’t have AI Mode enabled, Google would return just that list of search results. Later this month, Google is making the “omnibox” a repository for AI Mode: what was once the “address bar” over time became a search box, and now it’s being transformed into something more.
Essentially, Google seems a step away from merging Gemini and Search within Chrome, as it said in May–it just hasn’t quite gotten there yet. Then, users were able to try out Gemini in Chrome with a paid subscription, allowing them to see the Gemini “sparkle” hovering above the content on the page. Today, Google has removed that paid limitation.
Now that Gemini is being added to Chrome, users will be able to “ask” queries about the page via a sidebar. If you “search” via the omnibox, you may receive an “AI Mode” result. AI Mode in the omnibox will roll out later this month. Users can toggle between a traditional search and an AI Mode request by clicking the small AI Mode “chip”.
It’s still unclear why Google has both an “omnibox” for web pages and search queries, plus a separate “box” dedicated to Gemini queries.
Gemini in Chrome, however, can now draw on more of your own information for context: not just what’s on the page, but also your other open Chrome tabs, your browser history, and even Google apps like Gmail and Calendar.
Querying your browser history sounds a bit like Microsoft Recall, but without the fanfare or controversy. Here, Google suggests that you use a prompt like “what was the website that I saw the walnut desk on last week?” or “what was that blog I read on back to school shopping?”
Here, Gemini within Chrome is not only summarizing the YouTube video but also creating an event. (Image: Google)
If you have a question about a page you’re viewing, Google will supply some suggestions.
Google’s road to agentic browsing
Like Microsoft in Edge and Opera with its own version, Google has also demonstrated agentic browsing. However, it looks like every other agentic browser demo I’ve seen: give it a task and off go the AI agents to complete it. In this case, Google already offered a sneak peek of the agentic technology in May as “Project Mariner.”
When the process (in this case, a shopping task) completes, you’re given a chance to look it over and then make a final decision to pay, or not.
The idea is that agentic AI could be used to plan trips, handle shopping, or even combine the two. In this case, however, Google’s agentic AI will be limited to English-language web pages, according to company executives. The feature will roll out “in the coming months,” Google said.
AI-based security, too
Google’s AI is being applied to personal web security. It’s not just detecting scams that might trick you into downloading software, it’s also blocking sites that push fake contests or sweepstakes, and even cutting down on low-quality sites that request unusual permissions, like camera access.
Google also said that it will use AI to detect compromised passwords that were leaked in a data breach. Today, it simply alerts you and points you to the site to change them. On certain sites, Google says it can now reset and securely store the updated password for you.
There are some smart uses of AI here, but it also feels like Google is slowly easing us into a future where AI answers our queries instead of showing a list of links. What will that make Chrome, which began life as a web browser and is now evolving into a showcase for Google’s AI? The jury’s still out on that one — well, not legally.
PC World - 19 Sep (PC World)
At a glance: Expert’s Rating
Pros
$300 MSRP seems reasonable
Thunderbolt 5
Three-display capability, or two displays plus an SSD
Thunderbolt Share is included
Stable
Cons
You’ll probably need to buy display adapter cables
No active cooling, but it didn’t seem to need it
Our Verdict
Plugable’s 11-in-1 Thunderbolt 5 (TBT-UDT3) docking station is a solid all-around TB5 dock with a great mix of features and ports. Pair it with a TB5 SSD and you’ve got impressive storage performance.
Best Prices Today: Plugable 11-in-1 Thunderbolt 5 Docking Station
$299.95 at Plugable
Plugable’s 11-in-1 Thunderbolt 5 (TBT-UDT3) docking station is a moderately priced Thunderbolt 5 dock that can future-proof your PC for years to come. While it might not offer the dedicated display ports of older docks, its integrated Thunderbolt Share delivers file sharing and a KVM-like experience, for free.
What Plugable doesn’t offer is integrated storage or active cooling, saving your wallet some additional cash. Just keep in mind that you may have to make up for that by buying some additional display cables.
If you’re interested in future-proofing your PC, the combination of the high-speed display options Thunderbolt 5 offers, plus Thunderbolt Share, and the additional performance a high-speed external TB5 SSD offers makes this dock really intriguing.
Plugable TBT-UDT3: Design and build
Plugable calls this dock the Plugable Thunderbolt 5 Dock with 3x Thunderbolt 5 ports, 140W Laptop Charging, or TBT-UDT3. It’s a rather compact Thunderbolt 5 docking station, measuring 6.9 x 1.6 x 3.1 inches. I’ve always been somewhat partial to docks which utilize vertical space, such as the HP Thunderbolt G4 Dock, simply because my desk doesn’t have that much space on it. Plugable’s dock can fit into a vertical stand included in the package. This maximizes your available desk space even more.
The TBT-UDT3 is made of aluminum and ABS plastic, and the two materials weave their way in and out of the chassis. You’ll find metal coating the top and bottom (if mounted flat, not vertically), which feels necessary: the dock ran fairly warm inside my air-conditioned office while driving two 4K displays connected via the dock’s included Thunderbolt 5 cable. (That cable measures 39 inches, or 1 meter, long.) Some TB5 docks include active cooling with an external fan; the TBT-UDT3 does not. That’s possibly a corner Plugable cut, but it doesn’t seem to have affected its stability at all.
The rear of Plugable’s 11-in-1 Thunderbolt 5 (TBT-UDT3) docking station, with two downstream Thunderbolt 5 ports, an upstream TB5 port to your computer, plus 2.5Gbps Ethernet, USB-A, and locking slots. (Mark Hachman / Foundry)
I usually refer to docking stations without dedicated display ports as hubs, not docks. In this case, Plugable’s TBT-UDT3 includes two Thunderbolt 5 ports on the rear of the dock, and one in front. All three can be used for display connections.
Make sure you choose the proper cable for the job. With a Thunderbolt 4 dock, you can use a USB-C to HDMI adapter that supports 4K60 output, and that works fine with this Thunderbolt 5 dock, too. But a TB5 dock like this one can output 4K at 144Hz per port, and for that you’ll need a slightly more expensive cable (about $25 apiece).
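If you’re wondering why the refresh rate dictates the cable, a back-of-the-envelope bandwidth estimate makes the point. This sketch ignores blanking intervals and Display Stream Compression, so treat the figures as rough:

```python
def raw_video_gbps(width, height, refresh_hz, bits_per_pixel=24):
    """Approximate uncompressed video data rate (ignores blanking and DSC)."""
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(f"4K60:  {raw_video_gbps(3840, 2160, 60):.1f} Gbps")   # ~11.9 Gbps
print(f"4K144: {raw_video_gbps(3840, 2160, 144):.1f} Gbps")  # ~28.7 Gbps
```

The 4K60 signal fits within what inexpensive adapters handle, while 4K144 pushes past the roughly 25.9Gbps of usable HBR3 bandwidth DisplayPort 1.4 offers unless compression is used, hence the pricier cable.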
Plugable’s dock puts its power button on the front, lit by a bright white LED that, accidentally or not, leaks into the nearby ports, giving them a faint glow and making connections easier to insert in a dim room. The front of the dock also includes both microSD and SD card slots at 312MB/s UHS-II speeds, a 10Gbps USB-A port, a Thunderbolt 5 port, plus a 3.5mm headphone jack. On the rear are two more Thunderbolt 5 ports, both a 5Gbps and a 10Gbps USB-A port, the upstream TB5 connection to the PC, a 2.5Gbps Ethernet jack, and two lock ports.
The front of Plugable’s 11-in-1 Thunderbolt 5 (TBT-UDT3) docking station is simpler, with just an external TB5 port, plus SD/microSD and a headphone jack. (Mark Hachman / Foundry)
Plugable’s dock doesn’t really offer charging capabilities — one rear USB-A port supplies 7.5W — but you can certainly plug a phone into an unused USB-C/Thunderbolt 5 port, which is rated for 15W of power for a phone or another external device. It actually provided 13.9W under load. That used to be enough to fast-charge an older Samsung Galaxy smartphone, but it can’t really keep up with the high-speed charging used by recent iPhones or Android phones, which fast-charge at 45W or higher.
Keep in mind that the full charging capabilities of Thunderbolt 5 go up to 240W; this dock taps out at 140W. On the other hand, Thunderbolt 5 is (for now) confined to gaming laptops, and those laptops can pull 400W or more under gaming load. Put another way, even 240W isn’t going to cut it for gaming right now. That power supply might be suitable for tomorrow’s content-creation/light-gaming notebook, but not now.
Plugable’s 11-in-1 Thunderbolt 5 (TBT-UDT3) docking station, deep into testing. Your desk will hopefully be more organized than this is. Note how the vertical orientation saves space, however. (Mark Hachman / Foundry)
Plugable’s dock also includes an unexpected bonus: Thunderbolt Share, a technology that came and went without a lot of fanfare in the mobile community. Using Thunderbolt Share, two PCs can share files over a Thunderbolt connection, or two PCs can share a single screen. However, Thunderbolt Share requires that both PCs download the Thunderbolt Share app. One of the devices must also have a Thunderbolt Share license — or, in this case, the dock. Only then is Thunderbolt Share allowed to work. (You can see our video demonstration of Thunderbolt Share here.)
Plugable TBT-UDT3: Performance
For these tests, I used Razer’s Blade 18, which includes a Thunderbolt 5 port as well as a separate Thunderbolt 4 connection. On that laptop, Plugable’s dock seemed almost perfectly stable. It connected to a pair of 4K/160Hz displays at the dock’s rated speed of 144Hz, my test bed’s default configuration.
It connected perfectly to a third 4K/160Hz display at 144Hz, too, as it should. In this scenario, however, my test laptop’s own display wouldn’t light up until I rebooted. After I did so, all three external displays lit up at 144Hz, plus the laptop’s display. The laptop/dock combination couldn’t handle streaming 4K60 video to all displays, but static web pages loaded with no issue.
I also connected the dock to my daily laptop, which has a Thunderbolt 4 port, and had no issues using it with the same 4K displays at 60Hz.
When I accidentally powered off the dock when trying to insert it into its vertical stand, there was a bit of “panic,” where the displays cycled through and flipped on and off for a few seconds. That was user error, however, and the dock and the connected displays worked quite well thereafter. While the dock had some issues bringing up the displays when connected to an older TB4 laptop that was resuming from sleep, that problem did not manifest on the Blade 18 and its TB5 port.
I’ve been testing Thunderbolt docks for several years using a standardized methodology. Thunderbolt 5, however, requires an update to my test procedures.
I stream 4K video at 60Hz across two displays, then three — Plugable’s dock handled it like a champ without dropping more than a handful of frames. Streaming data from an attached SSD, though, is a bit more challenging with a higher-bandwidth Thunderbolt 5 dock. OWC kindly provided us with an Envoy Ultra SSD, rated at Thunderbolt 5 speeds. That puts more pressure on the dock itself to keep up.
To date, there just haven’t been that many Thunderbolt 5 docks available. Most of my reviews cover Thunderbolt 3 and 4, so this dock, along with the Sonnet Echo 13 Thunderbolt 5 Dock, represent a small cadre of the fastest docks available. As it happens, the performance of the two is roughly comparable.
I ran PCMark’s storage test against the Envoy Ultra, both directly connected to the laptop and connected to a Thunderbolt 5 port on the dock. Directly connected, the Envoy Ultra returned 469MB/s of bandwidth and an overall score of 3,202. Connected to the dock’s TB5 port, it dropped to 437MB/s and a score of 2,920. That’s a 7 percent drop, and basically identical to the 436MB/s that the Sonnet Echo 13 yielded when the Envoy Ultra was connected to its Thunderbolt 5 port.
While streaming the two 4K videos over the integrated Ethernet port, SSD performance dropped further, to 402.77MB/s, since the Ethernet traffic consumed some of the dock’s bandwidth.
Only my folder copy test, which measures how long it takes to copy a bundle of files from an SSD through the dock to the desktop, showed any real difference from the Sonnet: 13.9 seconds for the Sonnet, and 16.9 seconds for the Plugable dock, or 14.2 seconds vs 18.96 seconds while streaming.
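For the record, those percentages fall straight out of the raw numbers. A quick check in Python:

```python
def pct_drop(baseline, measured):
    """Percentage slowdown relative to a baseline result."""
    return (baseline - measured) / baseline * 100

print(f"{pct_drop(469, 437):.1f}%")      # ~6.8%: the "7 percent drop" through the dock
print(f"{pct_drop(437, 402.77):.1f}%")   # ~7.8% more while streaming over Ethernet
```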
I tested Thunderbolt Share, which crashed the first time I tried it. (I didn’t notice that OneDrive was syncing in the background.) Using one laptop to control another worked fine. I was able to transfer my folder of files in about 54 seconds, slightly faster than my Thunderbolt 4 dock tests. That’s reasonable, given that both a Thunderbolt 4 laptop and a Thunderbolt 5 laptop were connected to the TBT-UDT3.
It might be worth noting that the rival Sonnet Echo 13 includes a 2TB integrated SSD and costs $439 at press time; Plugable’s dock does not include an SSD, and OWC’s Envoy Ultra (2TB) costs about $300 on its own at press time. On the other hand, the internal bandwidth of the Sonnet’s internal SSD was 279.84MB/s, substantially less than the Plugable + OWC SSD combination.
The bandwidth of Sonnet’s internal SSD is similar to what you might expect of external gaming SSDs. On the other hand, the read and write speeds of the OWC Envoy Ultra plus the Thunderbolt 5 connection push upwards into the speeds of a good internal PCIe4 SSD, and that’s worth something, too.
The one thing I didn’t test is how well this dock accommodates an external GPU. That’s a capability that’s built (again) into Thunderbolt 5, but I don’t think an eGPU makes a compelling case yet when TB5 ports are only found in gaming PCs already equipped with discrete GPUs.
Plugable TBT-UDT3: Conclusion
Would I recommend this dock? Yes. Generally I hope for premium Thunderbolt docks to be in the $250 range or a little lower, and $299 seems pretty reasonable for a premium dock — though you may have to add display adapter cables to that price. A two-year warranty is included. Though this dock does offer access to three displays, you might find that connecting two displays plus a high-speed SSD works best for you. Interestingly, none of the Thunderbolt 5 docks I’ve seen add dedicated display ports, as their TB4 and TB3 predecessors did.
Don’t forget about Thunderbolt Share, either. It’s not a technology you might use often; after all, you can always connect a hard drive to the dock, copy a file to the drive, replace the laptop with another, and download the file. Still, it’s an interesting twist that most docks don’t offer.
If you’re interested in future-proofing your PC, though, the combination of the high-speed display options Thunderbolt 5 offers, plus Thunderbolt Share, and the additional performance a high-speed external TB5 SSD offers makes this dock really intriguing. I really liked the flexibility the SSD inside the Sonnet Echo 13 offered, but Plugable offers an alternative with a different but very viable perspective.
PC World - 19 Sep (PC World)
Adjustable actuation is what lets you choose a custom “trigger” point for each key, and it’s the buzziest feature in gaming keyboards. Logitech is hopping on the train now, even if it had to do a lot of jogging to catch up. The newest version of the G515 keyboard, now with magnetic switches, is christened the Rapid TKL—and it’s shipping now for $170. Ouch.
The G515 Rapid TKL looks a whole lot like the existing G515 designs, with a distinctive low-profile tenkeyless layout that’s just 22mm tall. Each of those “analog” switches can be adjusted in 0.1mm increments across its travel (2.5mm total), and the keyboard includes “rapid trigger” capabilities, a feature that many competitive gamers want. Two different functions can also be bound to each key at different actuation points.
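If “rapid trigger” is new to you: instead of releasing at a fixed reset point, the key releases (and re-arms) based on its direction of travel. Here’s a toy model in Python; the threshold values are illustrative, not Logitech’s firmware logic:

```python
class RapidTriggerKey:
    """Toy model of a magnetic switch with rapid trigger: the key fires once
    travel passes the user-set actuation depth, then releases as soon as it
    moves back up by a small reset distance, rather than at a fixed point."""

    def __init__(self, actuation_mm=1.0, reset_mm=0.1):
        self.actuation_mm = actuation_mm   # user-tunable in 0.1mm steps
        self.reset_mm = reset_mm
        self.pressed = False
        self.deepest_mm = 0.0              # deepest travel seen while pressed

    def update(self, depth_mm):
        if not self.pressed:
            if depth_mm >= self.actuation_mm:
                self.pressed = True
                self.deepest_mm = depth_mm
        else:
            self.deepest_mm = max(self.deepest_mm, depth_mm)
            if depth_mm <= self.deepest_mm - self.reset_mm:
                self.pressed = False       # released on upward motion alone
        return self.pressed

key = RapidTriggerKey()
for depth in (0.5, 1.2, 2.0, 1.8, 1.2):   # simulated sensor readings in mm
    print(f"{depth:.1f}mm -> {'down' if key.update(depth) else 'up'}")
```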
Other features include premium PBT keycaps (“Nice,” says my inner keyboard nut), a steel top plate, and a complete lack of wireless. That might feel like a step backwards for a top-of-the-line design, but adjustable actuation switches can drink down a lot of battery power. Even so, it’s an omission that hurts on a keyboard this expensive.
Logitech also announced new gaming mice. The Pro X2 Superstrike features a “haptic inductive trigger system,” which Logitech says is a combination of adjustable actuation in the mouse switch and rapid trigger capabilities. That’s a lot of buzzwords, but the technology underneath uses copper coils to generate an electromagnetic field with a 0.6mm switch that can be user-adjusted at 10 different points, including rapid trigger reset.
What does that mean for PC gamers? In addition to adjustable actuation points on a mouse—which seems like something that’s almost impossible to feel to me, but I haven’t gotten this thing in hand—Logitech says that the mouse can reduce click latency by “9 to 30ms.”
Other highlights for the wireless, shooter-style mouse include Logitech’s 44,000 DPI Hero 2 sensor, 90 hours of battery life, and 8,000Hz polling. The mouse is 60 grams, impressively light for all that tech. Be prepared to pay for it. The Pro X2 Superstrike mouse will cost a hefty $180 when it launches in the first quarter of 2026.
PC World - 19 Sep (PC World)
Mark Zuckerberg recently unveiled the Meta Ray-Ban Display and Neural Band, which are smart glasses with a color high-resolution display and wristband controls. It sounds like science fiction, but it’s more of an inconspicuous integration of tech into everyday life.
Ray-Ban Display and Neural Band
The glasses look like classic Ray-Bans, but they have a discreet display on the side of the right lens. It supposedly doesn’t get in the way when looking through the glasses and only appears when needed to display messages, photos, translations, or AI responses.
The smart display’s resolution is 600×600 pixels with a 20-degree field of view and 42 pixels per degree—sharp enough for everyday use by everyday people. The display’s brightness adjusts from 30 to 5,000 nits and the refresh rate goes up to 90Hz, according to Meta.
A new feature here is control via the supplied Meta Neural Band. This wrist-worn device recognizes muscle movements on the wrist and converts them into commands—a swipe of the thumb is enough.
Based on four years of research with 200,000 participants, the Neural Band should make the glasses intuitive to control. The glasses themselves have a battery life of up to 6 hours, or up to 30 hours with the charging case. The band’s Vectran material is as strong as steel but flexible, and Meta also claims an IPX7 water-resistance rating.
Other features include WhatsApp and Messenger integration, video calls, navigation, live subtitles, and music control. The 12MP camera films in 1440p at 30 FPS, while the internal 32GB storage can store up to 500 photos or 100 30-second videos.
Two open-ear speakers and five microphones ensure good audio and recordings. The AI glasses are compatible with iOS 15.2 and Android 10 or later, and corrective lenses from -4 to +4 diopters are also possible. Meta offers the Ray-Ban Display in black and sand colors and with transition lenses, in two sizes: Standard and Large (144mm to 150mm width).
Pricing and availability
The price of the Meta Ray-Ban Display at launch will be $799 including the Neural Band, and the AI glasses will initially only be available to buy in the US starting September 30th, 2025. Europe is to follow in early 2026.
Meta sees these smart glasses as an intermediate step between camera glasses and holographic AR models—technology that you wear inconspicuously in day-to-day life instead of constantly staring at your smartphone. It remains to be seen whether Meta’s AI glasses will catch on, but neuro-control is an exciting development.
PC World - 19 Sep (PC World)
Surprise! We woke up this morning to a blockbuster mashup between Intel and Nvidia. Team Green invested a cool $5 billion into Intel, and in exchange, the two companies will be co-creating consumer and data center x86 processors interwoven with Nvidia’s RTX graphics. Human sacrifice, dogs and cats living together… MASS HYSTERIA!
It’s simultaneously a shocking shakeup of the PC chip triumvirate (AMD must be fuming), a much-needed lifeline for struggling Intel, and a recipe for a potentially exciting future – the world’s foremost graphics pioneer joining forces with the company formerly known as Chipzilla. Imagine the possibilities!
But I also have to ask myself at the same time: What does this mean for the future of Arc, Intel’s own in-house graphics project?
Intel Arc’s short history shows promise…
Arc is still in its infancy. Intel famously canceled its early “Larrabee” graphics project in the 2000s, a decision that became a liability after the rise of Bitcoin and AI demonstrated the powerful potential of GPUs. Intel realized it had missed the boat and rushed – slowly, at times – to orchestrate both the Arc brand and the Xe graphics architecture girding it.
The first Arc graphics cards didn’t launch until October 2022, delivering great value for the price despite an onslaught of annoying bugs. Intel diligently fixed those bugs over time, and by the time the second-gen Arc B580 launched in late 2024, we called it “the first worthy budget GPU of the decade.” And Arc’s underlying Xe graphics architecture now powers the integrated graphics in Intel’s CPUs too, bringing a notable spike in laptop gaming performance.
…but potentially shaky foundations
Software bugs aren’t the only problem that rears its head when you’re trying to break into a field where Nvidia and AMD have a decades-long lead. Intel’s GPU hardware prowess isn’t up to par with its rivals yet either; this shows in the size of Arc’s discrete GPU dies. Bigger dies are much more expensive to make. The $250 Arc B580’s die is a relatively massive 292 mm². By contrast, Nvidia’s RTX 4060 was around 150 mm², while the RTX 5060 is around 181 mm².
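To see why bigger dies cost more, consider a naive dies-per-wafer estimate. This sketch ignores edge loss, scribe lines, and defect yield (which punishes large dies even harder), so the numbers are purely illustrative:

```python
import math

WAFER_DIAMETER_MM = 300   # standard 300mm wafer

def dies_per_wafer(die_area_mm2):
    """Crude upper bound: usable wafer area divided by die area."""
    wafer_area_mm2 = math.pi * (WAFER_DIAMETER_MM / 2) ** 2   # ~70,686 mm^2
    return int(wafer_area_mm2 // die_area_mm2)

for name, area in (("Arc B580", 292), ("RTX 5060", 181), ("RTX 4060", 150)):
    print(f"{name}: ~{dies_per_wafer(area)} dies per wafer")
```

Spreading roughly the same wafer cost across far fewer chips is a tough position for a $250 card.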
That matters. Intel’s Tom Peterson (a frequent guest on our Full Nerd podcast!) admitted last year that the Arc B580 is a “loss leader” – a product that costs the manufacturer money to sell, in the hopes of attracting customers. Intel figured it was worth eating that loss to build for a stronger GPU future.
Intel’s partnership with Nvidia suddenly throws that into question, even though the company says no major changes are currently planned. “We’re not discussing specific roadmaps at this time, but the collaboration is complementary to Intel’s roadmap and Intel will continue to have GPU product offerings,” an Intel spokesperson told me.
Intel needs strong GPUs to battle Nvidia in the data center, because AI is where the real money is. The consumer Arc cards are stepping stones to that goal. Now Nvidia is investing $5 billion into Intel – roughly a 5 percent stake, if the recent government investment is any indicator — to integrate RTX graphics into at least some Intel consumer CPUs, and to create data center solutions that interweave Intel’s x86 chips and Nvidia’s class-leading graphics.
If a major investor is bringing GPU technology to Intel’s chips, spanning from consumer to enterprise, and gifting Intel a lifeline in the data center where AMD has been eating Intel’s lunch, is Arc worth investing in separately anymore?
Intel Arc’s certain yet uncertain future
From a strategic standpoint, there’s certainly a case to keep Arc around. What if the Nvidia relationship suddenly goes sour despite the big money? The company is known to be a ruthless partner. Keeping Arc and Xe in motion protects against a potential future where Nvidia pulls the rug out from underneath Intel, especially since Xe (and seemingly this Nvidia partnership) touches everything from laptops to data center GPUs. Continued investment into internal GPUs makes so much sense for Intel’s future.
But I’m not sure that’s what’s going to happen. Bright futures have a way of bumping into ugly realities.
Part of the reason this Intel-Nvidia mashup even happened is because Intel lost its manufacturing lead and has been hemorrhaging cash (and CEOs) ever since. Nvidia’s deal follows in the footsteps of the U.S. government taking a 10 percent stake in the company to help it stay afloat, and Intel selling off subsidiaries like RealSense cameras and Altera’s FPGA chips.
Intel is scrambling to stay relevant, and Nvidia’s partnership is a major shot in the arm – not least by likely infusing Intel’s beleaguered 14A process, the current crown jewel of Intel’s foundry arm, with work from Nvidia and other companies inspired by Nvidia’s faith.
Either way, don’t expect major announcements from Intel (who I’ve asked for comment) any time in the near future.
“I don’t expect these platforms for 2-3 years,” Patrick Moorhead, an analyst who founded Moor Insights & Strategy and formerly served as an AMD executive, told me via direct message. “Both companies said there are no roadmap changes… on either side.
“Now… what will the demand for Arc be once these are in market by customers remains to be seen.”
Xe and Arc have driven much-needed competition in the entry-level graphics card market this turbulent decade. I hope they manage to stick around. If not, Nvidia’s $5 billion investment could not only get the company a foothold in the x86 markets, but also drive a competitor out of the market. If that happens, that sky-high price tag will wind up looking like a downright bargain in the rearview mirror.
In the meantime, the Intel Arc B580 remains the best budget GPU of the decade.
PC World - 19 Sep (PC World)
Nvidia is on top of the world right now, riding waves of investment in “AI” and becoming one of the most powerful and most profitable companies on the planet. Intel? Not so much. The company has been struggling in sales and performance for more than a year.
So color me shocked when Nvidia and Intel announced a joint venture this morning that could be huge for both.
Nvidia is investing $5 billion into Intel stock—a comparatively small slice for both companies—and approximately half of what the United States government invested in it after President Trump and other politicians demanded action on Intel’s CEO. But the bigger news coming from the press release is that Nvidia and Intel will partner on new chips for both data centers and consumers. The so-called “x86 RTX” chips will integrate Nvidia-designed graphics and AI chiplets into Intel CPUs.
“This historic collaboration tightly couples Nvidia’s AI and accelerated computing stack with Intel’s CPUs and the vast x86 ecosystem—a fusion of two world-class platforms,” said Nvidia CEO Jensen Huang.
Nvidia has been unassailable on the discrete graphics card front for years, now commanding over 90 percent of sales for desktop add-in boards in an effective monopoly, and dominating both sales and discussion for anything related to graphics, gaming, and “AI” industrial processing. Its longtime rival AMD has struggled to hold onto what market share it had, slipping even as it reportedly sells every GPU it can make. Things are better for AMD on the CPU side, where it’s gaining ground against Intel on the back of strong sales for laptop and desktop chips, especially its well-regarded X3D gaming series.
Intel has been trying to enter the discrete graphics market for the last three years. But despite impressive gains in performance right out of the gate, the Arc series of desktop graphics cards has made barely a blip, falling to nearly zero percent market share. Exactly what will happen to Arc if Intel starts to co-brand Nvidia integrated graphics isn’t quite known.
Nvidia’s meteoric rise to the top of both the chip market and the technology world has recently hit a highly visible snag, as reports indicate that the Chinese government is blocking purchases of its chips. The incredibly lucrative market was already showing some speed bumps as embargoes limit the performance of exported chips and China invests in domestic production to leverage its incredible energy and industrial infrastructure to compete with rivals like the United States and Taiwan.
Intel and Nvidia will hold a joint press conference this afternoon, broadcast live on the web.
PC World - 18 Sep (PC World)
“Click To Do” is Microsoft’s latest AI feature and selling point for Copilot+ PCs. Now that Windows Recall has taken a backseat after so much privacy criticism, Microsoft is turning the page—and this time it’s all about a special shortcut for accessing contextual AI actions.
To use Click To Do, you just hold down the Windows key on your keyboard and click once with the left mouse button. You can also press Windows key + Q if you’d rather use a keyboard shortcut. (If nothing happens, that means you aren’t on a Copilot+ PC.)
When Click To Do is activated, an outline will appear around your screen and Windows will highlight all text and images on your screen, making them selectable and allowing you to perform actions with them. Not only that, but Microsoft keeps adding more actions to this menu!
Click To Do started with Windows Recall
Funnily enough, Click To Do began its life as a feature built into Windows Recall. It let you take actions on text that appeared in the snapshots that Recall automatically took of your screen. However, after Microsoft pivoted from Recall, Click To Do became its own standalone feature.
As far as privacy goes, Click To Do feels like the anti-Recall: it doesn’t do anything in the background, and you must choose to use it.
When you activate Click To Do, it takes a screenshot and lets you interact with it. When you select actions like “Summarize Text,” all of the processing happens right on your PC, using its neural processing unit.
But while most actions happen right on your PC, there are some actions—like “Search the web,” “Visual search with Bing,” and “Ask Copilot”—that will send data to Microsoft’s servers for processing. Fortunately, nothing leaves your PC unless you intentionally use such actions.
Click To Do lets you feed text to AI models
Click To Do uses optical character recognition (OCR) technology to scan your current screen and make text selectable. Basically, it’s taking a screenshot and letting you interact with elements within it.
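The screenshot-plus-OCR approach is easy to approximate with off-the-shelf tools. Here’s a rough sketch in Python using Pillow and Tesseract; this isn’t Microsoft’s on-device pipeline, just an illustration of the same concept:

```python
from PIL import ImageGrab   # pip install pillow
import pytesseract          # pip install pytesseract (requires the Tesseract binary)

# Grab the current screen, as Click To Do does when you activate it.
screenshot = ImageGrab.grab()

# OCR the image; image_to_data returns each recognized word with a bounding box.
data = pytesseract.image_to_data(screenshot, output_type=pytesseract.Output.DICT)

# The bounding boxes are what make on-screen text "selectable":
# an interface can hit-test clicks against these rectangles.
for word, x, y, w, h in zip(data["text"], data["left"], data["top"],
                            data["width"], data["height"]):
    if word.strip():
        print(f"{word!r} at ({x}, {y}), size {w}x{h}px")
```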
For example, if you select an email address, the Click To Do menu presents “Send email” to compose an email. If you select a website URL, you can choose “Open website” to launch it in your default web browser. (Thankfully, this doesn’t always use Microsoft Edge!)
If you select more than 10 words of text, things get more interesting. You’ll get a variety of actions powered by the Phi Silica language model running on your Copilot+ PC’s NPU, such as summarizing text, creating a bulleted list, or rewriting the text in different tones.
This is one of Microsoft’s first attempts at integrating NPU-powered text actions on a Copilot+ PC. Because it’s based on a screenshot, you can only send so much text to the language model at once.
That should improve the experience, because let’s be honest: the local language models that run on your Windows PC are nowhere near as powerful as cloud-based large language models like ChatGPT. (If you’re using a chatbot app in your browser, you’ll wonder why you’re bothering with a worse app that runs entirely on your PC. That’s a core problem with so many Copilot+ PC local AI features.)
There’s also an “Ask Copilot” action that will send your selected text to Microsoft’s Copilot AI chatbot, and a “Draft with Copilot in Word” action to start working on a Word document with Microsoft’s AI.
You can send information to a chatbot in the cloud here, too—but only if you’re using Microsoft’s Copilot (for home users) or Microsoft 365 Copilot (for businesses). Yeah, I know, it can be confusing.
Click To Do exposes AI image tools, too
Click To Do isn’t just for doing things with text—it also takes actions with images. This is where Microsoft’s other goals for Click To Do start to become clear. When you click on an image, you can select actions like “Blur background with Photos,” “Erase objects with Photos,” or “Remove background with Paint.”
These are all AI-powered image actions that would normally be scattered throughout Windows across different applications, but Microsoft is exposing them here through the Click To Do interface. And there’s also an “Ask Copilot” action here, so you can send an image to Microsoft’s Copilot AI and start a conversation about it there.
There’s still no shortcut for the impressive Super Resolution feature, though. That would be really useful!
You can disable Click To Do if you want
If you don’t want Click To Do for whatever reason—and let’s be honest, that “hold the Windows key and click” shortcut could easily get in the way when playing some PC games—you can turn it off.
To do so, head to Settings > Privacy & security > Click to Do. Flip the switch here to turn it off. (While Click To Do is activated by default, it’s only available on Copilot+ PCs, so you won’t see it on your average Windows 11 PC.)
What’s coming in the future?
Microsoft has been spending a lot of time adding feature after feature to Click To Do. For example, Click To Do will soon have an integrated Copilot prompt box where you can select text/images, type a prompt, and then click “Ask Copilot” to send the selected content to Copilot along with the prompt you typed.
You will also be able to select a mix of text and images, describe an image with an on-device large language model, and send text to the Microsoft Reading Coach app. Click To Do will also soon be able to detect tables on your screen so you can send them right to Microsoft Excel.
This mix of features shows what Microsoft wants Click To Do to become: the “one click to access AI anywhere in Windows” action.
Microsoft hasn’t added file integration to Click To Do yet, but lots of actions for right-clicking files and sending them to Copilot and other AI tools are popping up in File Explorer’s context menu.
As for me? I’d rather copy and paste
After using so many half-baked Copilot+ PC features, it’s nice to see something with long-term potential. However, even though I use a Surface Laptop when I’m away from my powerful desktop PC, I must confess: I never actually use Click To Do, just like I never use Recall.
Click To Do is an interesting idea, but I still find myself copy-and-pasting text and images instead. I can copy-paste text and images into any app, with or without AI tools. And Windows 11’s built-in Snipping Tool is already great for extracting text from screenshots and capturing images for me to send to other apps or whatever else.
Maybe this will change in the future. Maybe not. But one thing’s for sure: Microsoft’s move away from Recall towards Click To Do is a smart one, and if it ends up paying off, it will pay off big.
Further reading: Should you buy a Copilot+ PC? What to know