News from All Over the Web
Search results for 'Education' - Page: 2
RadioNZ - 18 Nov (RadioNZ) Vocational Education Minister Penny Simmonds says she is seeking extra funding so new industry bodies can investigate the low completion rates.
RadioNZ - 18 Nov (RadioNZ) The World Indigenous Peoples' Conference on Education has returned to Aotearoa for the first time in 20 years.
PC World - 18 Nov (PC World) If you've ever asked ChatGPT a question only to receive an answer that reads well but is completely wrong, then you've witnessed a hallucination. Some hallucinations can be downright funny (e.g. the Wright brothers invented the atomic bomb), while others can be a bit disturbing (for example, when medical information is messed up).
What makes it a hallucination is that the AI doesn't know it is making anything up; it's confident in its answer and just carries on as normal.
Unlike with human hallucinations, it's not always easy to tell when an AI is hallucinating. There are some fundamental things you need to know about AI hallucinations if you're going to spot them.
What is an AI hallucination: The definition
An AI hallucination is when an AI model produces outputs that are factually incorrect, logically inconsistent or completely made up. These hallucinations are mostly found in generative AI models, specifically Large Language Models (LLMs) like ChatGPT.
Unlike programming bugs in software, AI hallucinations are not the result of a mistake by a programmer but rather come from a model’s learned probabilities. Here’s how to spot the different kinds of hallucinations.
You see facts that are incorrect
Factual hallucinations occur when an AI model produces information that is incorrect or unsubstantiated. An example would be "The Eiffel Tower in Paris was built in 1999." In reality it was built between 1887 and 1889. These hallucinations come about due to limitations in the model's training data or in its ability to check facts.
These hallucinations can be particularly dangerous in the fields of law, education, and healthcare, where factual information is imperative.
You get an answer not related to a question
If an answer deviates too much from a question or breaks the logical flow of a conversation, then the AI is having a contextual hallucination. An example would be a question “How do I make stew?” followed by the answer: “Stew is tasty, and there are nine planets in the solar system.” This produces an output that is linguistically correct, but irrelevant to the topic.
This type of hallucination occurs when the model fails to preserve previous context.
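As a rough illustration of why that matters: chat models only "see" the conversation history that gets resent with each request. The minimal sketch below, which assumes the OpenAI Python SDK and uses a placeholder model name, shows earlier turns being passed back so a follow-up question stays on topic; other providers' APIs work along similar lines.
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The model only "remembers" what is resent each turn, so the running
# conversation has to be passed back in the messages list every time.
history = [
    {"role": "user", "content": "How do I make stew?"},
    {"role": "assistant", "content": "Brown the meat, add vegetables and stock, then simmer."},
    {"role": "user", "content": "How long should it simmer for?"},  # follow-up that depends on earlier turns
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=history,
)
print(response.choices[0].message.content)
```
If an application trims or drops that history (to save tokens, for example), the model has nothing to anchor its answer to, which is one way contextual hallucinations creep in.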
You receive an answer that seems logically invalid
If the logic of an answer is all askew, then the AI is having a logical hallucination. An example of this would be a statement like, "If Barbara has three cats and gets two more, she has 6 cats." Clearly the logic fails here (three plus two is five, not six); the AI has failed at a task that requires simple math and reasoning. This can be a big problem for tasks that require problem solving.
You notice a mismatch across AI modalities
These types of hallucinations, known as multimodal hallucinations, occur in AI models that work with multiple types of media. One example would be when a description doesn't match an image. For example, a prompt asking for an image of a monkey wearing sunglasses produces an image of a monkey without any sunglasses. These are the type you'd see in image-generation AI models such as DALL-E.
How to test for a potential hallucination
Hallucinations erode trust and can be quite dangerous in some circumstances — for example, when professionals are relying on the AI for correct factual answers.
You can’t always tell if a hallucination is happening, but you can perform some checks to help you find out. Here’s what to do:
Manually fact check
Use search engines and trusted reference materials to check specific claims, names, dates, or numbers provided by the AI. If the AI cites sources, try to look them up. Fabricated or inaccurate source links are a common sign of hallucination.
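If the AI hands you source links, one quick sanity check is to see whether those URLs even resolve. Here is a minimal sketch (the cited URL is a made-up example); note that a page that loads still has to be read to confirm it actually supports the claim.
```python
import requests

# Hypothetical URLs a chatbot cited for its answer.
cited_urls = [
    "https://example.com/eiffel-tower-history",
]

for url in cited_urls:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = resp.status_code  # 404s and other errors often betray invented citations
    except requests.RequestException as exc:
        status = f"unreachable ({type(exc).__name__})"
    print(f"{url} -> {status}")
```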
Use follow-up questions
Ask the AI to elaborate on a specific detail it provided. If it struggles or introduces new, inconsistent facts, the original detail may have been invented.
Ask for justification
Ask the AI, “Can you provide a source for that?” or ask, “How confident are you in this answer?” A good model might point to its training data or search results; a model that’s hallucinating may struggle to back up the claim or invent a plausible-sounding source.
Cross-compare models
Ask a different AI model the exact same question. If the answers are wildly different, it suggests at least one model is incorrect.
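If you're comfortable with a little scripting, the same spot-check can be automated. The sketch below assumes the OpenAI Python SDK and two placeholder model names; comparing models from different vendors would mean swapping in each vendor's own client, but the idea of asking the identical question and reading the answers side by side is the same.
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "When was the Eiffel Tower built?"
models = ["gpt-4o-mini", "gpt-4o"]  # placeholder model names

answers = {}
for model in models:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    )
    answers[model] = response.choices[0].message.content

# Print the answers side by side; large disagreements are a red flag.
for model, answer in answers.items():
    print(f"--- {model} ---\n{answer}\n")
```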
RadioNZ - 17 Nov (RadioNZ) About 3000 people were welcomed by Ngati Whatua Orakei for the World Indigenous Peoples' Conference on Education 2025.
RadioNZ - 17 Nov (RadioNZ) The Ministry of Education is reviewing Gloriavale Christian School's response to officials' concerns.
Stuff.co.nz - 17 Nov (Stuff.co.nz) Parents say they're frustrated by what they say is a lack of communication from one of the sellers of a child's toy that has been recalled due to asbestos contamination.
Stuff.co.nz - 17 Nov (Stuff.co.nz) The principal of NZ's oldest school fronts for a rare interview, as education officials sound the alarm over the safety of his hostel students.
PC World - 15 Nov (PC World) It's been just over a month since Microsoft released Windows 11 25H2 and anyone who wants to install the major Fall 2025 update can do so. But what if you'd rather hold off instead? Turns out, Microsoft is now forcibly installing the big update on Windows 11 PCs running on older versions that have reached end of support.
Notably, that means Windows 11 23H2, which officially ended support on November 11th. Anyone who's still on Windows 11 23H2 will be forced to update their system to Windows 11 25H2. And if you're somehow still on an even older version, like Windows 11 22H2 or 21H2? Yup, Microsoft is forcing the update on your system, too.
Here’s how the forced update works
If you’re currently on a PC running Windows 11 23H2, 22H2, or 21H2, your system will automatically receive the bump up to Windows 11 25H2 via Windows Update. Microsoft will not ask your permission before downloading and installing this year’s big update.
This forced update will only be applied to Windows 11 Home and Pro. Windows 11 PCs running Enterprise versions that are centrally managed are exempt. Windows 11 PCs on Education versions—mainly computers in schools and universities—are also exempt.
If you're offered the automatic update, you can set when your system should perform the necessary restart, or you can postpone the update for up to a few weeks. The update can't be avoided permanently. Learn more about taking control of Windows updates.
Windows 10 users aren’t affected
Microsoft is not forcing Windows 11 25H2 (or any other version of Windows 11) on PCs that are still running Windows 10, whether those PCs are enrolled in the ESU program or not.
That means if you're on Windows 10 and are successfully getting extended security updates, you're safe for now. If you haven't enrolled in the ESU program yet, your system is at growing risk of malware threats and hack attempts, and you should consider upgrading your PC to Windows 11 or maybe even trying another operating system.
ITBrief - 14 Nov (ITBrief) Samsung's Knox Security framework delivers built-in protection across devices, boosting security and management in education, retail, and government sectors.
Stuff.co.nz - 14 Nov (Stuff.co.nz) The Ministry of Education has confirmed the number of schools and early learning centres in contact over asbestos-contamination concerns has doubled - with seven closing.