
Search results for 'Environment' - Page: 1
PC World - 25 minutes ago: Microsoft has not yet officially announced Windows 12, but leaks, internal project references, and statements from hardware partners are increasingly pointing to the imminent release of a new generation of Windows that goes far beyond a classic feature update for Windows 11.
Expected release window and upgrade cycle
A scenario circulating within the industry involves early leaks and references, followed by possible Insider previews, an official presentation, and a broad release in the course of 2026.
This timeframe coincides with the end of Windows 10’s consumer Extended Security Updates (ESU) period in October 2026, one year after regular support ended. A new Windows would fall exactly into this “forced” upgrade cycle and address both private users and businesses.
Windows 11 will continue to be supported and receive updates in parallel. A switch to Windows 12 would likely take place gradually.
Hudson Valley Next and CorePC
The code name “Hudson Valley Next” is appearing internally and is considered the basis for Windows 12. At its core is a modular CorePC architecture. System components can be more strongly isolated from each other, updates are more granular, and editions can be scaled more specifically for different device categories, from tablets to high-performance PCs.
This structure allows for lighter variants for devices with lower performance, while at the same time providing more stable core areas and more flexible integration of cloud services. Hybrid models combining local and cloud-based processing form the technical basis for AI workloads.
AI as the foundation of the operating system
Windows 12 will not treat AI as an add-on feature, but will anchor it as a fundamental part of the system. Copilot is evolving from an optional assistant to a central control instance. OS-wide integration will replace selective AI functions.
Context-dependent task recommendations, real-time summaries, automatic content generation, intelligent document categorization, and semantic search are expected.
You create a content description while the system recognizes relevant files — regardless of the exact file name. Settings automatically adapt to usage patterns, and automation takes effect system-wide.
NPU requirement: a minimum of 40 TOPS
Several leaks point to a clear hardware requirement. Full functionality is said to require a dedicated NPU with at least 40 TOPS of computing power. Microsoft is thus explicitly positioning Windows 12 as an operating system for AI PCs and Copilot devices.
Intel and AMD are presenting processors with integrated AI acceleration. OEMs are labeling new systems as “Windows 12 Ready.” Devices without an NPU may not receive certain AI features or may be excluded from the full upgrade. This strategy supports the expectation of a new PC renewal cycle.
Radically redesigned
Visual leaks show a floating taskbar with rounded corners that visually detaches from the bottom of the screen. Transparent glass elements characterize the appearance. System indicators and the clock move to the upper-right corner. Centered at the top is a prominent search bar with direct Copilot integration.
This layout shifts the focus of interaction to search and AI. Window management, snap layouts, virtual desktops, and widgets respond more flexibly. The user interface adapts to hybrid usage scenarios and supports both desktop and touch operation equally.
Efficiency, performance, and memory management
Windows 12 is expected to offer improvements in power management and memory handling. The base system will be more focused on modern mobile processors, and AI-powered performance profiles could dynamically adjust resources. The goal is to use hardware more efficiently while offering expanded functionality.
Security and zero-trust integration
Deeper system isolation, modernized authentication procedures, and greater integration of cloud-based protection mechanisms are expected. Zero-trust concepts from the corporate environment will be incorporated more strongly into the platform. At the same time, there will be a focus on local AI processing to take data protection requirements into account.
Gaming, DirectStorage, and AI optimization
Windows is set to remain the central gaming platform. Windows 12 is expected to feature further DirectStorage optimizations, lower latencies in cloud gaming, and closer Xbox integration. AI-supported performance analysis could automatically adjust graphics options and evaluate gameplay. This reduces the amount of manual configuration required on your part.
Possible subscription strategy and Windows 365
Code fragments contain references to a “subscription status.” The discussion is not about a pure subscription operating system, but rather a premium version in the Windows 365 environment for consumers. This could provide additional cloud computing power and exclusive AI features for a monthly fee.
The classic Home edition of Windows is likely to remain a one-time license. Advanced AI services would be added as an option. The integration of cost-intensive cloud AI is seen as a possible driver for new revenue models.
Market strategy and PC supercycle
The combination of the end of support for Windows 10, NPU requirements, and the AI PC offensive is creating considerable market pressure. Manufacturers are already positioning new devices with a view to the next generation of Windows. Observers are talking about a possible PC supercycle triggered by AI hardware and new system requirements. At the same time, the question arises as to whether functional hardware without AI acceleration can continue to be used.
Unresolved issues such as price
It remains unclear whether Microsoft will actually use the name Windows 12 or choose an alternative designation. Exact system requirements, upgrade entitlements, and pricing models have not been confirmed — will only Windows 11 users get the new version for free, or will Windows 10 users also be able to upgrade for free? Or will Windows 12 be available to everyone for a fee? The long-term Windows-as-a-Service strategy in the consumer segment also remains unclear.
The only thing that is certain is that Microsoft has not yet officially announced Windows 12. All of the innovations mentioned are based on leaks, code references, and strategic trends relating to AI, modular architecture, cloud integration, and new hardware classes.
Related content
Every Microsoft Windows operating system, ranked
Microsoft just forked Windows
Strip out Windows 11’s bloatware, ads, and other grossness—for free
Newslink ©2026 to PC World
PC World - 2 hours ago: AI tools like Sora from OpenAI or Veo promise cinematic-quality videos at the touch of a button. That said, the results can sometimes look artificial or distorted. This usually isn’t a limitation of the model itself; it’s about how the model is used. In this guide, we’ll share five proven techniques to dramatically improve the quality of your AI-generated videos.
1. Describe the subject as specifically as possible
AI video models will usually fill in the gaps themselves, but that’s exactly the problem. That’s why you need to be crystal clear in your description. If you’re not specific, you’ll end up with incorrect backgrounds, distorted objects, or unwanted details. Instead of a general description like “Create a 10-second clip of a cat playing,” you should be more detailed about the following:
Appearance of the subject
Environment and lighting
Action and mood
Sticking with the cat example, you could write:
“A small, short-haired brown domestic cat with white paws plays with a stuffed animal in the shape of a squirrel. The scene takes place in a bright living room of a detached house, with warm daylight coming in through a window on the left. The floor is made of light wood, and a sofa can be seen blurred in the background. The cat nudges the toy with its paw, jumps back briefly, and then watches it curiously. The mood is calm, playful, and natural, the camera remains at the cat’s eye level and does not move.”
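The three-block structure above (subject, environment and lighting, action and mood) can be sketched as a small prompt-assembly helper. This is purely illustrative: `build_prompt` and the way the pieces are joined are hypothetical conveniences, not part of any particular tool’s API.

```python
# Sketch: assemble a detailed video prompt from the three categories above.
# The helper and its inputs are hypothetical, for illustration only.

def build_prompt(subject: str, environment: str, action_mood: str) -> str:
    """Join the three descriptive blocks into one detailed prompt,
    ensuring each block ends with a period."""
    parts = [subject.strip(), environment.strip(), action_mood.strip()]
    return " ".join(p if p.endswith(".") else p + "." for p in parts)

prompt = build_prompt(
    subject="A small, short-haired brown domestic cat with white paws "
            "plays with a stuffed animal shaped like a squirrel",
    environment="The scene takes place in a bright living room, with warm "
                "daylight from a window on the left and a light wood floor",
    action_mood="The cat nudges the toy with its paw and watches it curiously; "
                "the mood is calm and the camera stays static at eye level",
)
print(prompt)
```

Keeping the three blocks separate makes it easy to vary one aspect (say, the lighting) while holding the rest of the description constant.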
2. Use multiple runs
AI videos are not deterministic. This means that even with identical prompts, the results usually differ significantly. A failed video doesn’t automatically mean that the prompt was bad.
Experienced users deliberately create multiple versions of the same clip. Even small variations in movement, perspective, or timing can make the difference between unusable and surprisingly good.
The rule of thumb is simple: if five to ten runs don’t produce a convincing result, the problem doesn’t lie with the tool; it lies with the prompt.
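The multiple-runs workflow can be sketched in a few lines. `generate_video` below is a placeholder for whatever generation API you actually use; only the loop-and-collect pattern is the point.

```python
# Sketch: generate several variants of the same prompt and keep them all for review.
# generate_video() is a stand-in for a real video-generation API call.
import random

def generate_video(prompt: str, seed: int) -> str:
    """Placeholder: a real call would submit the prompt and return a file or URL."""
    return f"clip_seed{seed}.mp4"

def generate_variants(prompt: str, runs: int = 5) -> list[str]:
    """AI video output is non-deterministic, so request several runs
    and pick the best take afterwards."""
    return [generate_video(prompt, seed=random.randrange(10**6)) for _ in range(runs)]

clips = generate_variants("A cat plays with a squirrel-shaped toy.", runs=5)
print(clips)  # review all five takes and keep the best one
```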
3. Keep scenes deliberately short and focused
Most AI video generators are designed to produce short, self-contained sequences lasting only a few seconds. If several actions, locations, or perspective changes are combined within a single clip, the likelihood of errors increases significantly: characters suddenly change their appearance, objects disappear, and movements often appear unnatural or jerky.
Prompts that describe a complete sequence are particularly problematic. Here’s an example:
“A person leaves their flat in the morning, walks through a busy street, enters a café, orders a coffee, sits down by the window, and looks out thoughtfully.”
Many AI models are still very unreliable when it comes to depicting such dramatic arcs. In the generated video below, numerous errors and inconsistencies appear right from the start, as the sequences appear out of order:
Sora/PC-Welt
A better description would be:
“A person is sitting in a small café at a window seat. Warm light falls in from the right. The person is drinking coffee and looking calmly out the window. The camera is static, slightly to the side at face level. The mood is calm and thoughtful.”
The video generated from this prompt is not perfect, but it’s better:
Sora/PC-Welt
4. Avoid text in the video
Text remains one of the biggest weaknesses of current AI video generators. While many models already achieve high visual quality in images and movements, they quickly reach their technical limits when it comes to displaying text: letters change their shape, words remain incomplete or appear as strings of characters that are difficult to decipher.
The main problems are longer texts, changing lettering, or content such as book pages, road signs, or packaging labels. The more text the AI has to display, the higher the probability of errors.
If text in the video is unavoidable, you should consciously reduce it and only use simple words or very short phrases.
5. Limit the number of objects in the image
AI video models struggle to display multiple people or objects at the same time. As the number of visible elements increases, the likelihood of errors rises significantly: faces change, bodies briefly merge, or objects appear unexpectedly and disappear.
Videos look much more stable when the action is separated in time or space. Instead of showing several people at once, focus on them one after the other. For example, the camera can pan from one person to the next, or clearly position a main character in the foreground while others remain outside the frame.
An example:
“Two people sit opposite each other, talking and gesturing, while other people walk by in the background.”
This prompt is more likely to result in distorted faces or unstable interactions. Here’s a much better example:
“One person is sitting at a table and talking. The camera initially shows only this person. Then the camera slowly pans to the second person sitting opposite. At no point are both people completely in focus at the same time.”
GeekZone - 1 Mar: Samsung audio innovation delivers original sound as intended and intelligent adaptive controls for every environment.
PC World - 28 Feb: Perplexity just launched Perplexity Computer, another agentic AI tool, except this one acts as a kind of digital coworker. It can perform multi-step tasks on your behalf by employing several subordinate AI agents that work together to plan and deliver finished results.
For example, Perplexity Computer can create dashboards, apps, presentations, and other projects by dividing the work between different sub-agents. The tool employs several different AI models simultaneously, including Claude Opus for reasoning, Gemini for research, and other AI models for images, video, and faster subtasks.
Unlike OpenClaw, though, Perplexity Computer runs entirely in the cloud in a controlled environment, which reduces the risk of the AI affecting your local PC and files. The disadvantage is that it’s more limited compared to agentic AI tools that run directly on local hardware.
Perplexity Computer is currently only available on the Perplexity Max plan, which costs $200/month.
PC World - 28 Feb
At a glance
Expert’s Rating
Pros
Easy to assemble and move
Includes TizenOS with remote control
Good contrast ratio
Less expensive than alternatives
Cons
Short power cord, no built-in battery
Modest color gamut
Lackluster HDR and motion clarity
Our Verdict
The Samsung Movingstyle M7 is a mobile display with a smart TV operating system. It’s not perfect, but it delivers on its core features and undercuts the competition on price.
Price When Reviewed: $699.99
Best Prices Today: Samsung Movingstyle M7 from $699.99 (price comparison from over 24,000 stores worldwide)
Most computer monitors are meant to be used at a desk, but the Samsung Movingstyle M7 is a different breed. It ships with a heavy, wheeled base and pole stand that makes it possible to use nearly anywhere in your home, at least so long as a power outlet is nearby. The monitor also has Samsung’s smart TV operating system and a long list of standard features including Wi-Fi, a remote control, and built-in audio.
Read on to learn more, then see our roundup of the best monitors for comparison.
Samsung Movingstyle M7 (M70F) specs and features
Technically, the Samsung Movingstyle M7 isn’t really a monitor. It’s a bundle that includes the Samsung Smart Monitor M7 and the Movingstyle base. However, it’s not possible to buy the Movingstyle base alone.
Display size: 32-inch 16:9 aspect ratio
Native resolution: 3840×2160
Panel type: VA-LCD
Refresh rate: 60Hz
Adaptive sync: None
HDR: HDR10 compatible
Ports: 2x HDMI 2.0, 1x USB-C with DisplayPort and 65 watts of Power Delivery, 3x USB-A 2.0
Audio: 10-watt speaker system
Extra features: Remote control, TizenOS, wheeled stand, Wi-Fi 5, Bluetooth 5.2
Price: $699.99 MSRP
Samsung asks $699.99 for the Movingstyle M7, and it’s currently sold at that price online. That might seem expensive, as the Smart Monitor M7 that is bundled with the Movingstyle M7 retails for less than $250. However, the Movingstyle M7 is actually less expensive than competitive displays like the LG Smart Monitor Swing.
Samsung Movingstyle M7 unboxing and assembly
PCWorld monitor reviews don’t normally include a section dedicated to unboxing and assembly. With most monitors, the process is straightforward enough to skip. The Samsung Movingstyle M7, with its large floor stand, is a bit different.
The monitor arrived in a single outer box containing two inner boxes: one with the Samsung M7 monitor, which can be purchased separately, and one with the Movingstyle stand. Both were tightly packed and the stand’s base weighs nearly 40 pounds, so unpacking takes some effort. I managed to unpack it solo, but it would be best to have someone help.
Once everything is out of the box, assembly is straightforward, though it does require tools (which are included). First, the power cord is cabled through the tall pole stand, then the pole attaches to the base with screws and the monitor mount clamps onto the neck. The mount’s vertical position offers a good degree of adjustment.
Matthew Smith / Foundry
Finally, the monitor attaches to the mount with a VESA bracket, which screws to the back of the monitor. The bracket slides onto the mount and a final screw secures them tightly together.
Setup definitely takes some doing. It took me about half an hour from start to finish. However, aside from the tightly packed box, I don’t have any complaints. The assembly instructions were useful and the various pieces screwed or clamped together without issue.
Samsung Movingstyle M7 design
Once assembled, the Samsung Movingstyle M7 has a clean, if obviously unusual, aesthetic. It is basically a 32-inch monitor mounted to a pole, so it is not exactly subtle, though the white colorway and curved design help it blend into a typical home environment.
Matthew Smith / Foundry
Samsung pitches this monitor squarely at home users, though it would also work well in a conference room. It is envisioned as a mobile display that can move between a kitchen, a home office, or a guest room as needed. The wheeled stand provides that mobility, allowing the monitor to be positioned where it is needed and rolled aside when it is not. The wheels are small and the clearance on the base is slim, however, so the stand will only roll on flat surfaces like hardwood or tile.
One of my biggest concerns before I assembled the Movingstyle M7 was its stability. Fortunately, the stand holds up well in normal use. The base weighs almost 40 pounds, which is roughly four times the weight of the monitor itself, so an accidental bump or jostle is not going to send it to the floor. A determined shove can still tip it, though, so I would be cautious about using this monitor in a home with young children or a rambunctious dog.
You will also want to think about the power cord. The Movingstyle M7 doesn’t include a battery and so requires a connection to a power outlet. Samsung’s marketing materials show the monitor with a lengthy white cord, but my review unit shipped with a black cord roughly 10 feet long.
Though the wheels on the base make the monitor mobile, you won’t always need to move it for use, as the mount also adjusts for swivel, tilt, and height—though the height adjustment requires unclamping the mount from the pole, which is a bit finicky. The mount also supports 90 degrees of rotation into portrait orientation for those who want to use it that way.

Curious readers might wonder if the Movingstyle stand can be used with other monitors. This is physically possible, as it uses a standard 100x100mm VESA mount, but the monitor’s documentation warns against it. I suspect that’s because the weight of the attached monitor has an impact on stability and Samsung doesn’t want to be liable for a too-heavy monitor tipping over. In any case, Samsung doesn’t sell the stand alone. It’s a complete package.
Samsung Movingstyle M7 connectivity
Connectivity is not the Samsung Movingstyle M7’s most important feature, and it shows. The monitor offers two HDMI 2.0 video inputs and a USB-C port that supports DisplayPort alternate mode. The absence of a standard DisplayPort input may frustrate users who want to connect a desktop PC, though that is probably a less common use case for a monitor like this.
The USB-C port also functions as an upstream data connection, linking to three downstream USB-A 2.0 ports.
The monitor also has Wi-Fi 5 and Bluetooth 5.2. The Wi-Fi connection allows the monitor to stream content directly from the internet without a connected PC, and the Bluetooth connection supports peripherals including game controllers. The monitor also supports AirPlay for wireless video from Apple devices.
Samsung Movingstyle M7 menus and features
The Samsung Movingstyle M7 is a full-fledged smart monitor running Samsung’s Tizen OS, the same operating system used by Samsung’s smart televisions. The monitor also ships with a wireless remote control. For all practical purposes, this makes it a 32-inch smart TV. It can stream content from all the major streaming apps, access Samsung’s own services, and run cloud gaming platforms without any external device connected.
Matthew Smith / Foundry
These features are always at least somewhat useful, but they are particularly useful here. The Movingstyle M7 can function as a fully independent display that needs nothing more than a power outlet and a Wi-Fi connection. I expect a lot of owners will purchase this monitor with no intention of ever connecting an external video source.
The Tizen experience is serviceable. As with other Samsung smart monitors I have reviewed, the operating system can feel sluggish when opening settings menus and navigating between options. The interface leans heavily on icons paired with labels that are sometimes truncated and lose meaning out of context.
Tizen is of course optimized for a television experience, so your opinion of it will depend on how much you intend to use the Movingstyle M7 as a TV versus a monitor. It’s frustrating if you only want to use the display as a monitor, as everything from setup (which requires Wi-Fi) to changing brightness takes longer than it should.
The included remote is essential. However, there’s a multidirectional joystick and a few buttons tucked around the rear center of the display that can serve as a backup if the remote goes missing or you need to make a quick adjustment. In practice, though, you will want the remote in hand for nearly everything.
Matthew Smith / Foundry
Samsung Movingstyle M7 audio
The Movingstyle M7 includes a 10-watt speaker system that delivers serviceable audio quality. Maximum volume is okay for a home office or guest bedroom, but it falls short in larger spaces like a living room or kitchen, especially if you are actually cooking and competing with background noise. Audio quality is clear but flat, so more dynamic content like music and movies sounds hollow.
This is normally the part of a monitor review where I recommend external speakers and mention that most monitors don’t have great speakers, if they have any at all. The catch here is that using external speakers with the Movingstyle M7 is more complicated than usual. Any speakers you pair with this display should ideally move with it, but the stand doesn’t have a mount for them, so you’ll need to give your audio setup some thought.
Samsung Movingstyle M7 SDR image quality
Samsung’s Movingstyle M7 in fact pairs the stand with the Samsung Smart Monitor M7, which can be purchased on its own (the stand, however, is only available as part of the Movingstyle M7). The Smart Monitor M7 is an affordable monitor with an MSRP of $400, which is often slashed to $250 or less. So, how does its image quality stack up?
Matthew Smith / Foundry
I measured a maximum SDR brightness of 329 nits which, as the graph shows, is a fine but middle-of-the-road value.
This level of brightness is much more than what’s required in most rooms. However, the Movingstyle M7’s mobility means it’s more likely to be used in a living room or kitchen with a lot of ambient light and no way to reduce it. In those situations, the monitor’s SDR brightness can prove merely adequate.
Matthew Smith / Foundry
Contrast is a win for the Movingstyle M7, as the Samsung Smart Monitor M7 has a Vertical Alignment (VA)-LCD panel. This type of panel can deliver lower levels of brightness in dark scenes, which improves overall contrast and provides a more immersive image.
Of course, the Movingstyle M7 won’t match an OLED display, which will look far more alluring and pack more detail into dark scenes. Still, the Movingstyle M7 performs well enough to provide enjoyable contrast in a wide range of movies and games.
Matthew Smith / Foundry
Unfortunately, the Movingstyle M7’s color gamut is a weakness. I measured a gamut that spanned only 97 percent of sRGB and 78 percent of DCI-P3. As the graph shows, this is a fairly narrow color gamut for a modern display and it’s where the Smart Monitor M7’s low price is most apparent.
The narrow color gamut is obvious in real-world use. Content looks unsaturated and lacks the impact it would have on a display with a wider color gamut. It’s passable, but it’s not going to impress viewers who are even moderately critical about image quality.
Matthew Smith / Foundry
Color accuracy is better, though the story has nuance. My testing found a very low color error across most colors, but a high color error (delta E of 6.3) in cyan. Subjectively, I thought the monitor lacked the ability to show much nuance in the blue-cyan range, causing colors in this range to seem particularly muted and unremarkable.

The Movingstyle M7’s results were once again solid in gamma and color temperature. I measured a gamma curve of 2.2, which is what I expect to see at default settings. I also measured a color temperature of 6700K, which is only a tad off the target of 6500K. That means the image looks a bit cooler than what’s ideal but is generally well-balanced.

Sharpness is good, too, as the monitor delivers 3840×2160 resolution. If anything, the monitor tends to look a bit sharper than most 4K monitors in normal use. That’s because I typically viewed the monitor from further away than a desktop monitor. I was often at least 4 feet away from the Movingstyle M7 when viewing it. At that distance, a 32-inch display with 4K resolution looks remarkably crisp.
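The viewing-distance point can be made concrete with a quick pixels-per-degree (PPD) estimate. The `ppd` helper below is a rough geometric sketch, not a measurement from the review; it simply shows why the same 4K panel looks much denser from 4 feet than from a typical desk distance.

```python
# Rough pixels-per-degree (PPD) estimate for a 32-inch 16:9 4K panel.
# Purely illustrative geometry, not a review measurement.
import math

def ppd(diagonal_in: float, horiz_px: int, distance_in: float,
        aspect=(16, 9)) -> float:
    """Approximate pixels per degree of visual angle at the screen center."""
    w, h = aspect
    width_in = diagonal_in * w / math.hypot(w, h)   # physical panel width
    px_per_inch = horiz_px / width_in
    # Visual angle covered by one inch of screen at this viewing distance.
    deg_per_inch = math.degrees(2 * math.atan(0.5 / distance_in))
    return px_per_inch / deg_per_inch

for d in (24, 48):  # typical desk distance vs. roughly 4 feet
    print(f"{d} in: about {ppd(32.0, 3840, d):.0f} PPD")
```

Doubling the viewing distance roughly doubles the apparent pixel density, which is why the Movingstyle M7 looks so crisp from across a room.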
The Movingstyle M7’s overall SDR image quality is not remarkable but holds up well enough. It provides solid contrast and a well-balanced image with generally good color accuracy. However, the monitor’s color gamut and brightness could be better. The vast majority of monitors in the same price range will have better image quality but, of course, they also won’t have a mobile stand.
Samsung Movingstyle M7 HDR image quality
The Samsung Movingstyle M7 technically supports HDR, as it can accept an HDR10 signal. However, aside from the mention of HDR10 support in the monitor specifications, Samsung doesn’t mention HDR.
That’s for good reason. While an HDR10 signal can be viewed, it doesn’t look great due to the monitor’s limited brightness and color gamut. I wouldn’t say that HDR content looks better than SDR at all—just a bit different in terms of overall color presentation.

In short, the Movingstyle M7 isn’t a good choice if HDR is at the top of your list of priorities.
Samsung Movingstyle M7 motion performance
I can say the same for the Movingstyle M7’s motion performance. The display has a maximum refresh rate of 60Hz and doesn’t offer adaptive sync, so motion fluidity isn’t great in modern PC or console games. I also noticed a lot of motion blur, which reduced motion clarity. To be clear, the motion performance looked fine for movies and shows. But if you’re interested in attaching a PlayStation, Xbox, or gaming PC, you’ll likely be disappointed.
Is the Samsung Movingstyle M7 worth it?
The Samsung Movingstyle M7 is a niche monitor that does what it was designed to do. It’s easy to assemble, moves across flat surfaces without trouble, and has a stand that makes the monitor usable almost anywhere you have open floor space.
At a glance the $700 MSRP might seem steep, but it’s not bad for this type of display. LG’s StanbyME and Smart Monitor Swing are both currently priced around $800 at retail. You might save money if you go the DIY route and choose a stand and monitor independently, but the end result is unlikely to look as attractive (and in some cases will be downright ugly).
If you want a large mobile display that you can position nearly anywhere floor space is available, the Samsung Movingstyle M7 is a sensible choice.
PC World - 27 Feb: I’m a big fan of OLED monitors. (I recently upgraded to a 4K OLED for my desktop PC!) But I can’t help but feel I’m gaming on a stopgap.
OLED is incredible for screens, helping to boost contrast and color vibrancy. But just as plasma TVs were eventually trumped by LCDs, I get the sense that Mini-LED is fast catching up to OLED. In 2026, Mini-LED displays aren’t just brighter, but also often cheaper, with better longevity, no risk of burn-in, and improved color accuracy and vibrancy.
In short, OLED still has its strengths, but there’s less reason to pay for it now that its successor is almost ready for the prime time. It’s time to look beyond OLED to Mini-LED. Here’s why.
Mini-LED wins on brightness
I won’t deny that OLED monitors have gotten much brighter recently, and there are even some models that offer 1,500+ nits on smaller sections of the screen. But those tend to be WOLED monitors, which lose out on the color accuracy of QD-OLED.
By contrast, the latest Mini-LED monitors offer 2,000+ nits brightness with ease, often over larger sections of the screen.
For games, videos, or anything else that you really want to pop off the screen, there’s no beating the sheer vibrancy of Mini-LED. That’s doubly true if you’re looking at it in a brighter room. Sure, some love to game in the dark for the boosted contrast and color, but if you’re like me and mostly use your monitor during the day with the curtains open—or even with harsh overhead lights—then you’ll appreciate Mini-LED.
LG
In a brightly lit environment, Mini-LED’s extra brightness makes a far bigger difference than OLED’s contrast and colors. It makes it easier to see details, reduces problems with glare, and gives you a more consistent picture quality regardless of the time of day.
And then there’s HDR gaming on PC, which is far from smooth sailing even at the best of times. You’ll get a lot of benefits from Mini-LED’s higher brightness there, too. OLED might look better on high-contrast scenes with bright highlights, but the latest Mini-LED monitors have more dimming zones that make blooming far less of an issue.
OLED contrast isn’t so standout anymore
Yes, OLED still technically offers the best contrast, but it’s not as far ahead of the competition as it once was.
The best OLED monitors today are split between WOLED and QD-OLED displays. While the former enjoy the near-infinite contrast that OLED is so well-known for—inky blacks for days—QD-OLED panels are increasingly popular for better bright color support while sacrificing only a little bit of that contrast for the trouble.
TCL
On the other hand, Mini-LED has gone from strength to strength, and the latest models with 1,000+ local dimming zones don’t suffer from blooming as much as they used to. Yes, OLED contrast will always beat Mini-LED because it has that much better control over the individual pixels, but Mini-LED is getting very close.
And in the future, Micro-LED will eliminate the gap entirely.
Response time is overrated
Look, I’m not trying to annoy anyone, but I know this point is probably going to upset the semi-pro gamers out there.
If you’re scrambling for every possible competitive advantage in your favorite esports games, you should absolutely consider OLED. But I’m pretty sure 99% of you reading this don’t care that much about besting other hyper-competitive gamers. For those of us who just play games to relax or hang out with friends, Mini-LED is plenty fast enough.
Matthew Smith / Foundry
Sure, a 0.03ms response time is going to feel faster than 1ms… by a bit. And yes, the motion clarity of OLED is going to be better because the pixels themselves can change faster…
But if you have motion blur enabled, you probably aren’t going to notice it. And if you aren’t playing ultra-fast games and haven’t already maxed out your frame rate and reduced your latency everywhere else, is response time really going to help you win your games?
Eh, I don’t think so. There’s an advantage there, but I don’t think it’s worth fretting over—not until you’ve min-maxed everything else first.
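Some quick arithmetic puts those numbers in perspective. The 0.03ms and 1ms figures below are the commonly quoted spec-sheet values for OLED and Mini-LED respectively, used purely for illustration:

```python
# Back-of-the-envelope: how big is a panel's quoted response time relative
# to the time a frame is on screen at all?

def frame_time_ms(refresh_hz: float) -> float:
    """How long each frame stays on screen, in milliseconds."""
    return 1000.0 / refresh_hz

for hz in (60, 144, 240):
    frame = frame_time_ms(hz)
    for panel, resp_ms in (("OLED", 0.03), ("Mini-LED", 1.0)):
        share = 100 * resp_ms / frame
        print(f"{hz:>3} Hz: {panel:<8} {resp_ms} ms response = "
              f"{share:4.1f}% of a {frame:.2f} ms frame")
```

Even at 240 Hz, a 1ms transition occupies about a quarter of each 4.17ms frame; at 60 or 144 Hz the share is smaller still, which squares with the point that response time only matters once everything else is maxed out.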
Text reads better on Mini-LED
Outside of subtitles, I’m willing to bet you’re not doing a lot of text reading on your TV. But if you’re using your monitor for anything besides gaming (and even then), I bet you are reading a lot of text! And if you’re reading a lot of text, then text clarity matters.
Mini-LED has clearer text than OLED, period. And if your display has a resolution lower than 4K, the difference is going to be a lot more noticeable (due to the subpixel makeup of OLED and Mini-LED).
It may not be a big enough deal to warrant buying one type of monitor over another on its own, but it’s a supplementary point worth considering, especially if you expect to do a lot of reading on your monitor. In that case, Mini-LED will give you an easier time.
Burn-in isn’t a concern with Mini-LED
Burn-in has been the big boogeyman of OLED monitors and TVs for a long time. Fortunately, the situation is far better today with better pixel refresh and pixel shift technologies to mitigate issues with short-term image retention and long-term brightness control.
But burn-in is still inevitable on OLED. It’s the nature of the technology, and it’s particularly bad on monitors due to always-on user interface elements like taskbars, game HUDs, chat app overlays, browser window outlines, etc. It’s not as bad on TVs where full-screen images are always moving and varied, but certain elements can burn in (like news tickers, channel logos, paused movies, etc.).
All of that is going to contribute to OLED burn-in over years of use. Meanwhile, it’s not something Mini-LED owners have to worry about. If you expect to run your system with lots of static images and on-screen HUDs and menus, Mini-LED will definitely last longer.
Mini-LED is king for most users
I am a firm buyer of the OLED hype. It really is gorgeous, and I’m in the market for high-contrast images and fast response times. I want the nuanced HDR of OLED and I don’t mind tweaking the way I play games to help delay the inevitable onset of burn-in.
For most people, though, that’s the kind of hassle that just isn’t worth having. Most users want a bright and punchy image that looks great whether the room is dark or bright, with the lights on or the curtains open. Most users aren’t going to notice the limited blooming on modern displays with over 1,000 local dimming zones, and most users won’t notice a 1ms response time, especially when they probably don’t even turn off motion blur in games anyway.
Ultimately, Mini-LED is the future. RGB Mini-LED will rival OLED’s best color saturation and brightest pops of color, and Micro-LED will one day replace OLED entirely—it’s self-emissive like OLED but with greater brightness and reduced burn-in risk.
Mini-LED is the best choice now and its even-smaller iterations will only make this ever more true in the years to come.
Further reading: The best monitors worth buying right now

Newslink ©2026 to PC World
PC World - 26 Feb (PC World)

AI doesn’t always give accurate answers, much less specific ones. Meanwhile, security software sometimes gets outright ignored. You wouldn’t think combining the two would make for a solid match, but Malwarebytes is proving me wrong.
Recently, the venerable security software maker launched a ChatGPT integration, one that allows the chatbot’s users to get direct feedback on potential security threats. No more asking a generic AI for help identifying scams or suspicious files: when active, this integration leverages Malwarebytes’ actual threat engine.
So when you ask ChatGPT about messages and links using language like “Malwarebytes, is this a scam?”, the query will access Malwarebytes’ security database for screening before returning a detailed assessment. If fed the information, ChatGPT will also relay warnings about suspicious domains and phone numbers that signal a possible phishing attack. And file uploads can be passed to Malwarebytes for evaluation, too.
But while this ChatGPT tie-in is new, Malwarebytes’ scam-detection tools actually are not.
The public already could access a slightly better version of these scanning capabilities. Malwarebytes’ Scam Guard tool, which is available for Windows, Mac, iOS, and Android users, performs the same tasks as above—but with the added benefit of being able to directly screen text messages on iOS devices for scams and phishing. (A web interface version is coming later this year, too.) And Malwarebytes’ free browser extension, Browser Guard, helps protect PCs from threats like phishing sites, infostealers, and trackers.
In fact, a Malwarebytes representative says that in Scam Guard, users benefit from “special scam-specific guidance,” and that the company will be able to include deeper integrations within its own software and tools. Translation: You won’t need to work as hard to actively spot and avoid online threats. Your security software is working toward handling more of the heavy lifting.
That is a trend I’ve seen across all security software when I’ve talked to different vendors. Personally, I think that’s the better approach—one that eases the burden on computational resources, the environment, and your time.
So, why the integration with ChatGPT? When asked, the same spokesperson told me, “We are looking at this as a way to help tackle scams by supporting where they are.” Given how many people have turned to chatbots to help with everyday tasks, that’s a fair point.
How to turn on Malwarebytes within ChatGPT
This integration works by activating a connection between Malwarebytes and ChatGPT. To get started, you’ll need to:
Log into ChatGPT.
Choose Apps.
Search for and select Malwarebytes, then press Connect.
After the integration is active, you can start asking Malwarebytes for help screening messages or other possibly suspicious items.
PC World - 25 Feb (PC World)

You may know the story by now: A Meta exec asked the viral OpenClaw AI tool to triage her inbox and suggest messages to delete, then watched in horror as the agent went rogue and nuked more than 200 emails, her frantic “STOP OPENCLAW” prompt lost amid the bot’s massive undertaking.
The twist? The exec was Meta’s lead AI safety officer, Summer Yue.
Yes, Yue unwittingly made herself a guinea pig for OpenClaw and its runaway automations–and indeed, pretty much anyone using OpenClaw right now is a guinea pig.
But Yue’s email apocalypse also highlighted a way we can prevent similar agentic AI horror stories, and it’s a method that most coders–and even plenty of vibers–are already familiar with.
It goes by different names; I’ve heard it called “agent git flow” and “agentic feature branching,” for example. But mostly, it’s about applying the methodology of “git”–the command-line utility that’s essential for tracking changes in code–to AI agents.
The best part of this solution? It lets us have our cake (the cake being the ultra-cool things AI agents can do) and eat it, too.
Chicken, fish, and OpenClaws
First, a thought experiment. Pretend you’re at a restaurant, and there are two items on the menu: chicken or fish. The chicken sure sounds good, but the fish–salmon! Tough choice.
Imagine, instead of risking a costly mistake by choosing the chicken over the fish (what if the chicken is spoiled!), you could create a “branch” of your immediate future–a temporary copy of your timeline that lets you test a choice before permanently making it.
So, you go ahead and create (or “check out”) a new branch of your “main” lifeline–we’ll call it the “chicken branch”–and you then order and taste the chicken. Eww! It’s gross.
No problem; we discard the chicken branch, go back to the “main” branch, and check out a new, second branch–the “fish” branch. Now we taste the salmon–delicious! We like this fish branch, so now we merge it with our “main” life branch, and commence with a meal that’s guaranteed to be yummy.
In the code-tracking world of git, we call this functionality (which I’ve described only crudely) feature branching, and it’s an ingenious, battle-tested way to test big changes and new features in our code before committing them to our main project.
A feature branch in git is really just a copy of the “main” branch. We check it out like a book from the library, make all the changes we want, test it, find bugs, make more changes, and so on. All the while, the “main” branch of our project is safe and untouched.
Only after we’ve subjected our feature branch to a battery of tests–some automated, some performed by the human user–and determined that it’s in tip-top shape do we even think of merging our “feature” branch with the main branch. And if we don’t like how the feature branch is going, we can discard it–no harm, no foul.
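To make that loop concrete, here is a minimal sketch of the check-out, test, then discard-or-merge cycle, scripted against a throwaway repo from Python. The branch names, file contents, and demo identity are illustrative, and `git init -b` requires git 2.28 or newer:

```python
import subprocess
import tempfile
from pathlib import Path

def git(*args, cwd):
    # Thin wrapper: run a git command in the repo, fail loudly on error.
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

repo = Path(tempfile.mkdtemp())
git("init", "-b", "main", cwd=repo)
git("config", "user.email", "demo@example.com", cwd=repo)
git("config", "user.name", "Demo", cwd=repo)
git("commit", "--allow-empty", "-m", "initial", cwd=repo)

# Check out the "chicken" branch and try it -- main stays untouched.
git("checkout", "-b", "chicken", cwd=repo)
(repo / "dish.txt").write_text("spoiled")
git("add", "dish.txt", cwd=repo)
git("commit", "-m", "order the chicken", cwd=repo)

# Eww. Go back to main and discard the branch -- no harm, no foul.
git("checkout", "main", cwd=repo)
git("branch", "-D", "chicken", cwd=repo)

# Try the "fish" branch instead, like it, and merge it into main.
git("checkout", "-b", "fish", cwd=repo)
(repo / "dish.txt").write_text("delicious")
git("add", "dish.txt", cwd=repo)
git("commit", "-m", "order the fish", cwd=repo)
git("checkout", "main", cwd=repo)
git("merge", "fish", cwd=repo)

print((repo / "dish.txt").read_text())  # the merged choice survives on main
```

Until that final `merge`, nothing the chicken or fish branches did ever touched main, which is the whole trick.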
My point? This code-branching methodology can work with AI agents, too. (And no, I’m not the first person to consider this idea.)
How this could have gone better
Let’s go back to Summer Yue and try our “branching” scenario on for size. This time, Yue sits down with OpenClaw and prompts it with, “Go through my inbox and suggest deletions.” (Her other prompt in the real-world story–”wait for approval”–was likely dropped from OpenClaw’s context window due to the sheer number of email messages it was wading through.)
Now, instead of OpenClaw diving into the live inbox, it creates a branch–call it the “triage” branch–that allows it to simulate the results of sifting, organizing, and culling her inbox, all in a sandboxed environment and all without touching her actual email messages.
OpenClaw does its thing, maybe gets carried away, and starts deleting messages willy-nilly. If that happened, Yue could simply look at the triage branch, decide she’s not happy with the results, and then either discard the branch or keep working with it, testing different iterations of the OpenClaw prompt or adding markdown-formatted “scaffolding” documents that govern OpenClaw’s actions from the word go. In the meantime, her real inbox is safe and sound.
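That triage-branch pattern is easy to sketch in code. Everything below is hypothetical: `agent_triage` is a stand-in heuristic rather than a real agent, and a real inbox obviously isn’t a list of dicts. But it shows the shape of the idea: the agent only ever touches a copy, and a human approves the plan before anything reaches the live data.

```python
from copy import deepcopy

def agent_triage(inbox):
    # Stand-in for the agent: propose deleting anything that smells like spam.
    return [msg["id"] for msg in inbox if "unsubscribe" in msg["body"].lower()]

def apply_plan(inbox, approved_ids):
    # The only code allowed to modify real data -- and it needs an approved plan.
    return [msg for msg in inbox if msg["id"] not in approved_ids]

live_inbox = [
    {"id": 1, "body": "Quarterly report attached"},
    {"id": 2, "body": "Last chance! Click unsubscribe to stop these offers"},
    {"id": 3, "body": "Lunch tomorrow?"},
]

triage_branch = deepcopy(live_inbox)    # "check out" a branch of the inbox
proposed = agent_triage(triage_branch)  # the agent runs only on the copy

# Human-in-the-loop review: accept, trim, or throw the whole plan away.
approved = proposed                     # (here, the human agrees with all of it)

live_inbox = apply_plan(live_inbox, approved)  # "merge" only what was approved
print([msg["id"] for msg in live_inbox])       # → [1, 3]
```

If the human discards the plan instead, `live_inbox` is never touched, which is exactly the property Yue’s real inbox didn’t have.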
Now, will such “feature branching” work for every AI agent scenario? Probably not. It’s easy to put branched computer code into a sandbox and safety test any number of actions and outcomes. But just as you can’t actually sandbox the chicken-versus-fish choice, there are plenty of real-world agentic AI actions and roles (like, say, HR-focused AI agents) that can’t easily be simulated.
That said, more–and potentially scarier–versions of Summer Yue’s terrible, horrible, no good, very bad email day will happen again if we don’t give this “agentic feature branching” idea a fair shake.
RadioNZ - 25 Feb (RadioNZ)

The latest shark attack-related fatality in New Caledonia has reignited debate about whether populations of the large fish should be culled.

Newslink ©2026 to RadioNZ
PC World - 24 Feb (PC World)

If your work chats with ChatGPT, Claude, or Gemini are more annoying than they are helpful, there’s usually a simple reason: the AI doesn’t know you yet.
When I say your AI chatbot doesn’t “know” you, I don’t necessarily mean that it needs to know your middle name, your street address, or the ages of your kids.
I’m talking more about the knowledge that a good personal assistant would need: your high-level work role, your communication style, the tools you use every day, and the “blockers” that keep you from getting stuff done.
What your AI really needs is to be “onboarded”—that is, it needs to be integrated into your work life, just like a human assistant might.
Now, helping your AI to get to know you is easier said than done. Where do you start? What does it need to know? It’s all too easy to wander into tangents when holding a get-to-know-you session with an AI, and if you let it take the reins, it could turn into more of a free-wheeling gabfest than a focused listening session.
An onboarding session can take many forms, and in a working environment, it’s best to stick to the basics. What do you do? What’s your role at work? What are your top priorities? What’s your work style? How do you handle pressure? And, most important of all, what obstacles are you facing?
Alright, but what’s the best way to onboard your AI? Should you just start free-associating with it about your work life? Yeah, no.
Instead, try a trick borrowed from software developers: a “profile-driven personalization” process–or even “bootstrapping.” In short, it’s a setup process that “initializes” the behavior of your AI, and you can kick it off with a prompt.
The prompt is at the bottom of this story. Just a heads-up: it’s big. Drop it into a fresh chat and you’ll trigger a question-and-answer session, not unlike what a software engineer might go through when scaffolding a new software project.
The Q&A is designed to be relatively quick and painless. Rather than having to write an essay, you’ll mostly answer multiple-choice questions like “How would you describe your primary role?”, “How would you like me to communicate with you?”, and “What’s your biggest time/energy drain?”
Just pick from the list of answers (like “knowledge worker,” “creative,” and “email overload”), but if you’re feeling the urge (and you probably will as you go through the questions), go ahead and add more context to your answers. You don’t have to write long, flowing sentences; a few stray thoughts or even single words will do.
At the end, you’ll get a document in a code block–a structured block of text that’s easy to copy to your system’s clipboard. I recommend copying it to a notes app and saving it as a plain text file.
OK, so you’ve got this “lifespec” document, now what?
The next step is to feed it to your AI. For this, I recommend setting up a custom GPT. Here’s how to do it:
ChatGPT: From the ChatGPT app, click Explore GPTs in the left-hand column, click the Create button, then copy and paste your document into the Instructions field. Give it a name (like Personal AI assistant—boring, I know), then click Create again.
Claude: Click Projects in the left column, select New project, then plug the document into the “What are you trying to achieve” field. Give the project a name, then click Create project.
Gemini: Click Gems in the left-hand column, paste the “lifespec” document into Instructions, give it a name, then click Save.
Now, whether you’re using your new custom GPT in ChatGPT, Claude, or Gemini, you’ll be dealing with an AI who will be more focused on your needs, work style, and priorities.
One thing to keep in mind is that this “lifespec” file is a living document, so don’t be afraid to tweak it if it’s still not working for you—or even to go through the onboarding process again.
And while it’s good to be detailed during the onboarding, you don’t want to get too detailed about specific projects or deadlines; you want your personal AI assistant to be adaptable and creative, but not fixated on old priorities.
Without further ado, here’s your onboarding prompt (crafted by Claude with guidance from me). Good luck and happy onboarding!
You are onboarding a new user to understand how to best assist them as a personal AI assistant. Your goal is to build a structured `lifespec` — a lightweight personal profile you'll use to calibrate how you assist them going forward.
## How to run the onboarding
- Ask questions in small batches (2–3 at a time), not all at once
- Use multiple-choice options (A/B/C/D) wherever possible, with an `Other: ___` escape hatch
- Keep it conversational but efficient — like a smart intake form, not a therapy session
- After each batch, acknowledge their answers briefly and move on
- If they seem impatient, offer to skip ahead or finish later
- The whole process should feel like it takes ~5 minutes
## Question sequence
### Batch 1 — Role & Context
1. How would you describe your primary role?
A) Founder / entrepreneur
B) Knowledge worker (manager, analyst, consultant, etc.)
C) Creative (writer, designer, developer, etc.)
D) Other: ___
2. What's your biggest time/energy drain right now?
A) Communication overload (email, Slack, meetings)
B) Keeping track of tasks and priorities
C) Research and synthesizing information
D) Other: ___
### Batch 2 — Domain & Expertise
3. What domain or industry do you primarily work in?
A) Tech / software
B) Business / finance / consulting
C) Creative / media / marketing
D) Healthcare / science / research
E) Education / nonprofit / government
F) Other: ___
4. How would you describe your depth of expertise in that domain?
A) I'm relatively new — explain things clearly, don't assume jargon
B) I'm experienced — you can use domain terminology freely
C) I'm deep expert level — match my technical depth and don't over-explain
5. Are there adjacent domains I should also know you work across? (open-ended — e.g. "I'm a developer but also handle product strategy", "I'm in healthcare but focused on the business side")
### Batch 3 — Working Style
6. When you ask for help with a task, what do you usually want?
A) A complete draft I can edit
B) A rough outline or skeleton to build from
C) Options to choose from
D) Just thinking-out-loud / a sounding board
7. How much context do you typically want in a response?
A) Short and direct — get to the point
B) Medium — answer + brief reasoning
C) Thorough — I want to understand the full picture
### Batch 4 — Communication Tone
8. How should I generally communicate with you?
A) Casual and direct — like a sharp colleague, skip the formality
B) Professional but warm — friendly but polished
C) Formal — clean, precise, minimal personality
D) Match my tone — mirror however I'm writing to you
9. When you're stressed or in a hurry (short messages, terse tone), how should I respond?
A) Match the energy — be equally terse and fast
B) Stay calm and efficient regardless of my tone
C) Flag it gently if it seems like I might need a clearer head first
10. How do you feel about pushback or devil`s advocate responses?
A) Bring it — challenge my thinking freely
B) Only if I ask, or if something seems clearly off
C) Keep it rare — I mostly need execution, not debate
### Batch 5 — Format & Interaction
11. Preferred output format for most tasks?
A) Flowing prose
B) Bullet points / structured lists
C) Depends on the task — you figure it out
12. How do you feel about follow-up questions?
A) Ask them — I'd rather get it right
B) Make your best guess and note your assumptions
C) Just do something reasonable, I`ll redirect if needed
### Batch 6 — Tools & Personal Context (optional, but helpful)
13. Which tools are central to your workflow? (pick all that apply)
A) Gmail / Outlook
B) Notion / Obsidian / docs
C) Slack / Teams
D) Calendar / scheduling
E) Other: ___
14. Any standing priorities or constraints I should always keep in mind?
(open-ended — e.g. "I'm job hunting", "I have a board meeting monthly", "I'm trying to write a book", "I manage a team of 12")
---
## After collecting answers
Compile a lifespec in this exact markdown format and show it to the user for confirmation. Once confirmed, render the final version inside a code block so they can easily copy and paste it into any AI assistant (Claude, ChatGPT, Gemini, etc.).
---
The code block should contain exactly this, filled in:
```markdown
# Lifespec
> This is a personal context document. Use it to calibrate how you assist me.
> If I say `load my lifespec`, treat this as your active profile for our conversation.
## Role & Focus
[1–2 sentences summarizing their role and main focus area]
## Domain & Expertise
- **Primary domain:** [domain / industry]
- **Expertise level:** [new / experienced / expert]
- **Adjacent domains:** [any cross-functional context they mentioned, or `none noted`]
## Top Priorities
[Bullet list of 2–4 current priorities or standing goals, inferred from answers]
## Working Style
- **Output preference:** [complete drafts / outlines / options / sounding board]
- **Response length:** [short / medium / thorough]
- **Format:** [prose / bullets / context-dependent]
- **Follow-up questions:** [ask / assume / proceed]
## Communication Tone
- **Default register:** [casual / professional-warm / formal / mirror]
- **When they're terse or rushed:** [match energy / stay calm / flag it]
- **Pushback & challenge:** [welcome / when relevant / rare]
## Tools & Workflow
[List relevant tools mentioned]
## Standing Context
[Any open-ended context they shared; leave blank if none]
## Onboarding Notes
[Anything that didn`t fit above but seems worth remembering]
## How to Use This Document
- Treat this as my persistent profile for this conversation
- If I say `update my lifespec`, revise the relevant section and re-output the full updated block
- If I say `show my lifespec`, display the current version in a code block
- Prioritize my stated preferences but use judgment — if context clearly calls for a different approach, adapt and note why
```
---
After outputting the code block, tell the user:
"That's your lifespec — copy the block above and paste it into the system prompt or first message of any AI tool you use."