There’s plenty of hand-waving around AGI. DeepMind hopes to change that with a new, more rigorous approach.
Few terms are as closely associated with AI hype as artificial general intelligence, or AGI. But Google DeepMind researchers have now proposed a framework that could more concretely measure how close models are to this tech industry holy grail.
Artificial general intelligence refers to a mythical AI system that can match the general and highly adaptable form of intelligence found in humans. As the number of tasks that large language models can tackle has rocketed in recent years, there's been a growing chorus of voices suggesting the technology is creeping ever closer to this threshold.
But so far, there’s been no clear way to assess progress toward AGI, leaving plenty of room for speculation and exaggeration. To address this gap, a team from Google DeepMind has introduced a new cognitively inspired framework that deconstructs general intelligence into 10 key faculties. More importantly, they propose a way to evaluate AI systems across these key capabilities and compare their performance to humans.
“Despite widespread discussion of AGI, there is no clear framework for measuring progress toward it. This ambiguity fuels subjective claims, makes it difficult to track progress, and risks hindering responsible governance,” the researchers write in a paper outlining their new approach. “We hope this framework will provide a practical roadmap and an initial step toward more rigorous, empirical evaluation of AGI.”
This isn’t DeepMind’s first attempt to clarify the term. In 2023, the company proposed separating AI systems into different levels of capability, in much the same way self-driving systems are categorized.
But that approach didn't really propose a way to measure what level AI systems have reached. The new framework goes further by building a firmer conceptual footing for the key aspects underpinning model performance and a practical way to evaluate and compare systems.
Digging through decades of research in psychology, neuroscience, and cognitive science, the researchers identify eight basic cognitive building blocks that they say make up general intelligence.
These include the perception of sensory inputs and generation of outputs like text, speech, or actions. Add to those learning, memory, reasoning, and the ability to focus attention on specific information or tasks. Rounding out the list are metacognition (the ability to reason about and control your own mental processes) and so-called executive functions, like planning and the inhibition of impulses.
The researchers also outline two “composite faculties” that require several building blocks to be applied together. These are problem solving and social cognition, which refers to the ability to understand and react appropriately to the social context.
To judge how well AI systems perform on each measure, the researchers suggest subjecting them to a broad suite of cognitive evaluations that target each specific ability. They also propose collecting human baselines for each task. This would involve asking a demographically representative sample of adults with at least a high school education to complete them under identical conditions.
The results of these tests can then be combined to create “cognitive profiles” that give a sense of a model's strengths and weaknesses. And by comparing the results against the human baselines, it should be possible to determine when a system matches or surpasses the general intelligence of an average person.
Crucially, the framework focuses on what a system can do rather than how it does it, which means the evaluation is agnostic about the underlying technology. However, the researchers concede that there is currently no good way to measure many of the core cognitive capabilities identified.
While there are already well-established benchmarks for faculties like problem solving and perception, there are no reliable tests for things like metacognition, attention, learning, and social cognition. In addition, many of the best benchmarks are public, which means the testing criteria are easily accessible and may have already been included in model training data. So the authors say they're working with academics to build more robust, non-public evaluations to fill the gaps.
How useful the new framework will be depends on several factors. First, it remains to be seen whether the criteria identified by the DeepMind team truly capture the essence of human general intelligence. Second, they need to prove that acing this test actually leads to better performance on practical problems compared to narrower, specialist AI systems.
But considering the hand-waving nature of the debate around AGI so far, any framework grounded in well-established cognitive theory and rigorous evaluation represents a significant step forward.
The post Google DeepMind Plans to Track AGI Progress With These 10 Traits of General Intelligence appeared first on SingularityHub.
Eclipse Memory Analyzer (MAT) can be a bit tricky to use when you're trying to work out which part of your code is behind the memory usage captured in an .hprof file, or which part caused a memory leak. The steps below help you quickly locate the objects responsible for excessive memory usage.
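If you first need an .hprof file to open in MAT, one option is to trigger a dump from inside the JVM via the HotSpot diagnostic MBean. The sketch below is a minimal illustration; the class name `HeapDumpDemo` and the output path `demo.hprof` are arbitrary choices, not anything MAT requires.

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

// Sketch: programmatically write a heap dump that MAT can open.
public class HeapDumpDemo {
    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        File dump = new File("demo.hprof"); // recent JDKs require the .hprof suffix
        dump.delete();                      // dumpHeap refuses to overwrite an existing file
        bean.dumpHeap(dump.getPath(), true); // true = dump only live (reachable) objects
        System.out.println("wrote " + dump.length() + " bytes");
    }
}
```

You can also capture a dump externally with jmap, or automatically on crashes with the `-XX:+HeapDumpOnOutOfMemoryError` JVM flag.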
Open the Leak Suspects Report to see automatically detected memory-leak patterns. MAT highlights the largest retained objects and shows the reference chains that keep them alive. This helps you identify the classes, collections, or static fields in your code that are most likely causing the leak.
The accompanying stack trace information can also help you find the code responsible for the leak.
Open the Histogram view to identify the largest objects in memory. This view helps you spot suspicious classes or instances that may indicate a leak.
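As a concrete, hypothetical example of the kind of leak the Histogram surfaces, consider a static cache that only ever grows. In a heap dump of this program, the Histogram would show the `byte[]` instances and their backing `ArrayList` dominating the retained size (the class and method names below are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical leak: a static, unbounded cache. Every entry stays strongly
// reachable via the static field, so the garbage collector can never reclaim it.
public class LeakDemo {
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]); // appended on every request, never evicted
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) handleRequest();
        // MAT's Histogram would show ~10,000 byte[] instances retained by LeakDemo.CACHE.
        System.out.println("cached buffers: " + CACHE.size());
    }
}
```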

From the Histogram, right-click a suspicious class and choose Merge Shortest Paths to GC Roots. This shows you how the selected objects are being kept alive in memory, even through soft or weak references.
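The distinction MAT draws between strong and weak references can be seen in a few lines of plain Java. This is a sketch: whether a weak referent is actually cleared after `System.gc()` is JVM-dependent, so only the strongly-referenced case is guaranteed.

```java
import java.lang.ref.WeakReference;

// Sketch: only strong references keep an object on a path to a GC root.
public class RefDemo {
    public static void main(String[] args) {
        byte[] data = new byte[1024];                      // strong reference (GC root: this stack frame)
        WeakReference<byte[]> weak = new WeakReference<>(data);

        System.gc();
        // Reading data.length keeps the strong reference live across the GC.
        System.out.println(weak.get() != null && data.length == 1024); // true

        data = null;                                       // drop the last strong reference
        System.gc();                                       // the referent is now eligible for collection
        System.out.println("referent collected? " + (weak.get() == null));
    }
}
```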


If you suspect a specific thread from your logs (e.g., http-nio-8080-exec-1) is involved in a memory leak, you can use MAT to inspect what that thread is retaining. Search for http-nio-8080-exec-1, and MAT will show the corresponding TaskThread instance:

org.apache.tomcat.util.threads.TaskThread @ <address>

Then right-click it and select Java Basics → Thread Stack. This helped me identify the issue and the stack trace responsible for the memory leak.

If you're considering Microsoft 365 Copilot Business, or planning your next renewal, now is the time to act.
Starting July 1, 2026, Microsoft will update pricing for Microsoft 365 + Copilot Business bundles. Purchasing now lets you lock in today's lower prices and take advantage of limited-time promotional discounts available through June 30, 2026.
When we introduced Copilot Business last December, we designed it specifically for small and medium-sized businesses. Alongside it, we launched simplified Microsoft 365 + Copilot Business bundles, making it easier and more affordable for SMBs to get a complete, secure AI solution.
Copilot Business brings the power of AI directly into the Microsoft 365 apps your team already uses: Word, Excel, PowerPoint, Outlook, and more.
Pricing Update Effective July 1, 2026
Beginning July 1, Microsoft 365 pricing will increase for some products, while Copilot Business remains $21 per user, per month.
| Microsoft 365 + Copilot Bundle       | Current Price | New Price |
|--------------------------------------|---------------|-----------|
| Business Basic + Copilot Business    | $27           | $28       |
| Business Standard + Copilot Business | $33.50        | $35       |
| Business Premium + Copilot Business  | $43           | $43       |
USD list prices per user, per month, paid yearly. See terms and conditions.
For a limited time, you can save up to 35% on select Microsoft 365 Copilot Business bundles:
| Microsoft 365 + Copilot Bundle       | Promotional Price | Regular Price | Savings  |
|--------------------------------------|-------------------|---------------|----------|
| Business Standard + Copilot Business | $22               | $33.50        | ~35% off |
| Business Premium + Copilot Business  | $32               | $43           | ~25% off |
| Copilot Add-on                       | $18               | $21           | ~15% off |
USD list prices per user, per month, paid yearly. See terms and conditions.
Bottom line: If you're planning to adopt or expand your deployment of Copilot Business, purchasing before July 1 helps you maximize savings and avoid upcoming price increases. Check out these bundles at our website.
Security remains a top concern for growing businesses, especially those with limited IT resources. To help, Microsoft offers optional add-ons that deliver enterprise-grade protection at SMB-friendly pricing:
Defender Suite: $10/user/month (paid yearly)
Includes identity protection, endpoint detection and response, plus email and cloud app security, helping reduce phishing, malware, and shadow IT risk.
Purview Suite: $10/user/month (paid yearly). Get 50% off Microsoft Purview Suite for Microsoft 365 Business Premium when you purchase Microsoft 365 Copilot. See terms and conditions.
Provides governance and compliance tools like data loss prevention, eDiscovery, audit, message encryption, and AI-aware data security posture management.
Defender + Purview Suite Bundle: $15/user/month
Combines both suites for comprehensive protection against cyber threats and accidental data leaks.
Learn more about these security suites here.
Copilot Business continues to evolve, helping SMBs do more with less. Earlier this month we announced Wave 3 of Microsoft 365 Copilot, bringing new capabilities designed to handle more complex, real-world work. These capabilities are included in Copilot Business today.
If Microsoft 365 Copilot Business is part of your plans, now is the best time to act. Lock in current pricing, take advantage of limited-time promotions, and give your team AI that helps them work smarter, securely.
To learn more, visit our website or read this post and FAQ: Introducing Microsoft 365 Copilot Business: Empowering Small and Medium Businesses with AI
Every commit from Copilot coding agent, our cloud-based background agent, is authored by Copilot, with the human who gave Copilot the task marked as the co-author. This makes it easier to identify code generated by the agent and who started the task.
Now, the agent’s commits link back to the agent session logs by including an Agent-Logs-Url trailer in the commit message.
This gives you a permanent link from agent-authored commits back to the full session logs, so you can understand why Copilot made a change during code review or trace it later for auditing purposes.
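To see the trailer mechanics in isolation, here is a sketch using plain git in a throwaway repository. The commit message and URL are made up for illustration; in practice Copilot writes the real session-log URL into the trailer.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
# Commit with a trailer block, the same shape Copilot's agent uses
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty \
  -m "Improve error handling" \
  -m "Agent-Logs-Url: https://example.com/copilot/session/123"
# Extract just that trailer's value (requires a reasonably recent git)
git log -1 --format='%(trailers:key=Agent-Logs-Url,valueonly)'
```

Because trailers are structured metadata rather than free-form prose, tooling such as audit scripts can extract them reliably with `git log` or `git interpret-trailers`.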
To learn more, see “Tracking GitHub Copilot’s sessions” in the documentation.
Copilot coding agent is available to Copilot Pro, Copilot Pro+, Copilot Business, and Copilot Enterprise subscribers. If you’re a Copilot Business or Copilot Enterprise subscriber, an administrator will have to enable Copilot coding agent from the “Policies” page before you can use it.
The post Trace any Copilot coding agent commit to its session logs appeared first on The GitHub Blog.
An expensive mistake:
Someone jumped at the opportunity to steal $4.4 million in crypto assets after South Korea's National Tax Service publicly exposed the mnemonic recovery phrase of a seized cryptocurrency wallet.
The funds were stored in a Ledger cold wallet seized during law enforcement raids on 124 high-value tax evaders, which resulted in the confiscation of digital assets worth 8.1 billion won (currently approximately $5.6 million).
When announcing the success of the operation, the agency released photos of a Ledger device, a popular hardware wallet for crypto storage and management.
However, the images also showed a handwritten note of the wallet recovery phrase, which serves as the master key that allows restoring the assets to another device.
The authorities failed to redact that information, allowing anyone to transfer the cold wallet's assets into their own account.
Reportedly, shortly after the press release was published, 4 million Pre-Retogeum (PRTG) tokens, worth approximately $4.8 million at the time, were transferred out of the confiscated wallet to a new address.