
Robot hand approaches human-like dexterity with new visual-tactile training

Human hands are a wonder of nature, unmatched in the animal kingdom. They can twist caps, flick switches, handle tiny objects with ease, and perform thousands of tasks every day. Robot hands struggle to keep up: they typically lack a sense of touch, can't move many fingers at once, and lose track of what they are holding when their fingers block their camera's view. Scientists have now developed a smarter way to train a robot's brain to give its hands human-like dexterity.

“Robot, make me a chair”

Given the prompt “Make me a chair” and feedback “I want panels on the seat,” the robot assembles a chair and places panel components according to the user prompt. Image credit: Courtesy of the researchers.

By Adam Zewe

Computer-aided design (CAD) systems are tried-and-true tools used to design many of the physical objects we use each day. But CAD software requires extensive expertise to master, and many tools incorporate such a high level of detail they don’t lend themselves to brainstorming or rapid prototyping.

In an effort to make design faster and more accessible for non-experts, researchers from MIT and elsewhere developed an AI-driven robotic assembly system that allows people to build physical objects by simply describing them in words.

Their system uses a generative AI model to build a 3D representation of an object’s geometry based on the user’s prompt. Then, a second generative AI model reasons about the desired object and figures out where different components should go, according to the object’s function and geometry.

The system can automatically build the object from a set of prefabricated parts using robotic assembly. It can also iterate on the design based on feedback from the user.

The researchers used this end-to-end system to fabricate furniture, including chairs and shelves, from two types of premade components. The components can be disassembled and reassembled at will, reducing the amount of waste generated through the fabrication process.

They evaluated these designs through a user study and found that more than 90 percent of participants preferred the objects made by their AI-driven system over those produced by alternative approaches.

While this work is an initial demonstration, the framework could be especially useful for rapidly prototyping complex objects like aerospace components and architectural elements. In the longer term, it could be used in homes to fabricate furniture or other objects locally, without the need to have bulky products shipped from a central facility.

“Sooner or later, we want to be able to communicate and talk to a robot and AI system the same way we talk to each other to make things together. Our system is a first step toward enabling that future,” says lead author Alex Kyaw, a graduate student in the MIT departments of Electrical Engineering and Computer Science (EECS) and Architecture.

Kyaw is joined on the paper by Richa Gupta, an MIT architecture graduate student; Faez Ahmed, associate professor of mechanical engineering; Lawrence Sass, professor and chair of the Computation Group in the Department of Architecture; senior author Randall Davis, an EECS professor and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as others at Google DeepMind and Autodesk Research. The paper was recently presented at the Conference on Neural Information Processing Systems.

Generating a multicomponent design

While generative AI models are good at generating 3D representations, known as meshes, from text prompts, most do not produce uniform representations of an object’s geometry that have the component-level details needed for robotic assembly.

Separating these meshes into components is challenging for a model because assigning components depends on the geometry and functionality of the object and its parts.

The researchers tackled these challenges using a vision-language model (VLM), a powerful generative AI model that has been pre-trained to understand images and text. They task the VLM with figuring out how two types of prefabricated parts, structural components and panel components, should fit together to form an object.

“There are many ways we can put panels on a physical object, but the robot needs to see the geometry and reason over that geometry to make a decision about it. By serving as both the eyes and brain of the robot, the VLM enables the robot to do this,” Kyaw says.

A user prompts the system with text, perhaps by typing “make me a chair,” and gives it an AI-generated image of a chair to start.

Then, the VLM reasons about the chair and determines where panel components go on top of structural components, based on the functionality of many example objects it has seen before. For instance, the model can determine that the seat and backrest should have panels, providing surfaces for someone to sit on and lean against.

It outputs this information as text, such as “seat” or “backrest.” Each surface of the chair is then labeled with numbers, and the information is fed back to the VLM.

Then the VLM chooses the labels that correspond to the geometric parts of the chair that should receive panels on the 3D mesh to complete the design.
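While the researchers' code isn't reproduced here, the labeling loop described above can be sketched in a few lines. In the hypothetical Python sketch below, the `vlm` client, the prompt wording, and the helpers `render_views`, `render_with_surface_ids`, and `parse_ids` are all illustrative assumptions, not the actual implementation.

```python
# Hypothetical sketch of the surface-labeling loop described above.
# The `vlm` client, prompts, and helper functions are illustrative
# assumptions, not the researchers' actual code.

def assign_panels(vlm, mesh, user_prompt):
    # Step 1: the VLM reasons about function ("seat", "backrest", ...).
    parts = vlm.ask(
        images=render_views(mesh),
        text=f"User request: {user_prompt}. "
             "Which parts of this object need flat panel surfaces? "
             "Answer with part names only.",
    )

    # Step 2: number each candidate surface on the mesh and re-render,
    # so the VLM can point at concrete geometry rather than words.
    labeled_views = render_with_surface_ids(mesh)

    # Step 3: the VLM maps part names back to surface numbers.
    reply = vlm.ask(
        images=labeled_views,
        text=f"These parts need panels: {parts}. "
             "List the numbered surfaces that correspond to them.",
    )
    return parse_ids(reply)  # surface IDs that receive panel components
```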

These six photos show text-to-robotic-assembly of multi-component objects built from different user prompts. Credit: Courtesy of the researchers.

Human-AI co-design

The user remains in the loop throughout this process and can refine the design by giving the model a new prompt, such as “only use panels on the backrest, not the seat.”

“The design space is very big, so we narrow it down through user feedback. We believe this is the best way to do it because people have different preferences, and building an idealized model for everyone would be impossible,” Kyaw says.

“The human‑in‑the‑loop process allows the users to steer the AI‑generated designs and have a sense of ownership in the final result,” adds Gupta.

Once the 3D mesh is finalized, a robotic assembly system builds the object using prefabricated parts. These reusable parts can be disassembled and reassembled into different configurations.

The researchers compared the results of their method with an algorithm that places panels on all horizontal surfaces that are facing up, and an algorithm that places panels randomly. In a user study, more than 90 percent of individuals preferred the designs made by their system.

They also asked the VLM to explain why it chose to put panels in those areas.

“We learned that the vision language model is able to understand some degree of the functional aspects of a chair, like leaning and sitting, to understand why it is placing panels on the seat and backrest. It isn’t just randomly spitting out these assignments,” Kyaw says.

In the future, the researchers want to enhance their system to handle more complex and nuanced user prompts, such as a table made out of glass and metal. In addition, they want to incorporate additional prefabricated components, such as gears, hinges, or other moving parts, so objects could have more functionality.

“Our hope is to drastically lower the barrier of access to design tools. We have shown that we can use generative AI and robotics to turn ideas into physical objects in a fast, accessible, and sustainable manner,” says Davis.

AI and Taxes: How Technology Is Reshaping Financial Strategy

Artificial intelligence is transforming nearly every corner of the financial world, and tax strategy is no exception. What once required hours of manual calculations, paperwork, and guesswork can now be streamlined through intelligent systems capable of analyzing vast amounts of […]

The post AI and Taxes: How Technology Is Reshaping Financial Strategy appeared first on TechSpective.

The digital quant: instant portfolio optimization with JointFM

TL;DR

JointFM is the first AI foundation model for zero-shot joint distributional forecasting in multivariate time-series systems. By generating coherent future scenarios in milliseconds, it enables real-time portfolio decision-making without the lag of traditional numerical simulations. JointFM represents a paradigm shift in quantitative modeling: trained on an infinite stream of dynamics from synthetic stochastic differential equations (SDEs), it acts as your digital quant.

Setting the stage: why quantitative modeling needs a new approach

Modeling complex systems has traditionally required a painful trade-off. Classical quant methods (like correlation copulas or coupled SDEs) offer high mathematical fidelity but are rigid, slow, and expensive. They often require specialized teams to rebuild models whenever the market regime or asset mix changes. Conversely, existing time-series foundation models offer speed and flexibility but are single-target, missing the critical cross-variable dependencies that define systemic risk.

JointFM is your digital quant to bridge this gap. Trained on an infinite stream of synthetic stochastic differential equations (SDEs), it learns the universal physics of time-series dynamics, making it truly domain-agnostic. Whether for a power grid or a stock portfolio, it predicts the full joint probability distribution of the system in milliseconds. This is the foundation of instant decision-making in highly complex setups and is fast enough to integrate with agents for ad-hoc business decisions.

Figure 1: JointFM is your digital quant, pre-trained with dynamics from synthetic quantitative models.

In this project, we demonstrate its power in quantitative finance, building on NVIDIA’s quantitative portfolio optimization blueprint. JointFM enables instant portfolio optimization (IPO), replacing brittle overnight batch processes with a digital quant that can rebalance portfolios in real time and adapt to new assets or market conditions without retraining.

Key takeaways 

  • The first zero-shot foundation model for joint distributions: JointFM predicts full multivariate distributions out of the box, capturing correlations and tail risk.
  • Instant simulation at portfolio scale: thousands of coherent future scenarios are generated in milliseconds, independent of portfolio complexity, enabling real-time decision-making and AI agent integration.
  • Matches the risk-adjusted returns of the classical benchmark: across 200 controlled synthetic trials, JointFM achieved equal risk-adjusted performance.
  • Pre-trained on synthetic stochastic processes: by learning from millions of generated dynamics, JointFM generalizes to new assets and market conditions without retraining.
  • From financial modeling to financial AI: JointFM replaces classical pipelines with a scalable, domain-agnostic foundation model.

The core challenge: speed, fidelity, and flexibility

In quantitative finance, portfolio managers have long faced a familiar trilemma:

  1. Fast but flawed: models like Geometric Brownian Motion (GBM) are computationally cheap but assume normal distributions and constant correlations. They fail spectacularly during market crashes, when assets become highly correlated and fat tails appear (see the GBM sketch after this list).
  2. Accurate but slow: heavy Monte Carlo simulations with complex copulas or regime-switching variations capture reality better but take much longer to calibrate and run, making them impractical when you need to rebalance your portfolio on short notice.
  3. Rigid and expensive: developing high-fidelity models requires specialized quantitative modeling teams, significant time, and money. Worse, these models are often brittle; when the market regime shifts or you want to swap asset classes, you often need to start modeling again from scratch.
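To make item 1 concrete, here is a minimal sketch of the GBM baseline: correlated log-normal paths with constant drift, volatility, and correlation. All parameter values are illustrative.

```python
import numpy as np

# Minimal sketch of the "fast but flawed" GBM baseline. Constant drift,
# volatility, and correlation are baked in; every parameter value below
# is illustrative.

def simulate_gbm(s0, mu, sigma, corr, horizon, n_paths, dt=1 / 252, seed=0):
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(corr)          # fixed correlation structure
    paths = np.empty((n_paths, horizon + 1, len(s0)))
    paths[:, 0] = s0
    for t in range(horizon):
        z = rng.standard_normal((n_paths, len(s0))) @ chol.T
        # Exact GBM step: log-returns are Gaussian, with no jumps and no
        # fat tails -- precisely the assumptions that break in a crash.
        paths[:, t + 1] = paths[:, t] * np.exp(
            (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
        )
    return paths

scenarios = simulate_gbm(
    s0=np.array([100.0, 50.0]),
    mu=np.array([0.05, 0.08]),
    sigma=np.array([0.2, 0.3]),
    corr=np.array([[1.0, 0.4], [0.4, 1.0]]),
    horizon=63,
    n_paths=10_000,
)
```

Every path this produces has thin Gaussian tails and the same correlation matrix throughout, which is exactly why the baseline misprices crash risk.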

Enter JointFM: a foundation model for joint distributions

JointFM changes the game by “skipping” the modeling step. Instead of fitting parameters for each time series daily, JointFM is a pre-trained model that generalizes to unseen data out of the box. While we apply it here to financial markets, the model itself is domain-agnostic. It learns the language of stochastic processes, not just stock tickers.

The innovation

Until now, modeling joint distributions required significant compromises. You could define complex systems of SDEs (mathematically difficult), fit specialized classical models to specific datasets (slow and requiring retraining), or use copulas (bespoke and rigid). None of these approaches is zero-shot.

On the other hand, existing foundation models are zero-shot but fail to capture cross-variable dependencies. JointFM is the first to bridge this divide, offering the scale and zero-shot speed of a foundation model with the mathematical depth of a rigorous joint probability framework.

This zero-shot capability solves the rigidity problem. Facing a new market situation where you don’t know the underlying dynamics? Want to swap difficult-to-model assets instantly? JointFM works just the same. Because it has learned to predict future joint distributions from almost any dynamic during its diverse pre-training, it serves as the best possible starting point for unknown environments without the need for a dedicated quant team to build a new model from scratch.

Key capabilities

  • Joint distributional forecasting: unlike standard univariate time-series models that predict marginal probabilities for one variable at a time, JointFM explicitly models the full multivariate distribution of all variables simultaneously. In finance, this is critical for diversification. You cannot optimize a portfolio without understanding how assets move together (a hypothetical usage sketch follows this list).
  • Zero-shot inference: no training required on the user’s data. The model has already “seen it all” during pre-training.
  • Scenario slicing: the model can condition predictions on exogenous variables (e.g., “Show me the distribution of variables if an external factor rises”).
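As a hypothetical illustration of the first two capabilities: this post does not document JointFM's actual API, so the class name, constructor, and method arguments below are assumptions made for the sake of the sketch.

```python
import numpy as np

# from jointfm import JointFM   # hypothetical import path

# Observed history: (n_timesteps, n_assets) matrix of returns.
history = np.loadtxt("returns.csv", delimiter=",")

model = JointFM.from_pretrained("jointfm-base")   # assumed constructor
scenarios = model.sample(
    context=history,    # zero-shot: no fitting or retraining on this data
    horizon=63,         # trading days ahead
    n_samples=10_000,   # coherent joint scenarios, not per-asset marginals
)
# scenarios.shape == (10_000, 63, n_assets): each sample is one joint
# future path, preserving cross-asset correlations and tail behavior.
```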

If you want to read more about time-series and tabular foundation models, have a look at this article on the brewing GenAI data science revolution, which gives an introduction to the field and explains why a model like JointFM is the next logical step.

Under the hood: architecture & speed

JointFM leverages a specialized transformer-based architecture designed to handle the unique high-dimensional constraints of multivariate time series.

1. Efficient high-dimensional context

To model portfolios with many assets over long history windows, JointFM moves beyond the quadratic complexity of standard attention mechanisms. It employs a factored attention strategy: temporal attention runs per variable, as in single-target models, while a separate pass captures cross-variable dependencies. This allows the model to scale efficiently with the complexity of the portfolio, processing hundreds of assets without becoming a computational bottleneck.
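The exact layer design isn't given in the post; as a generic illustration of the factored pattern, common in multivariate time-series transformers, a block might look like this PyTorch sketch:

```python
import torch
import torch.nn as nn

class FactoredAttentionBlock(nn.Module):
    """Generic sketch: attend along time, then across variables."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.var_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_vars, n_steps, d_model)
        b, v, t, d = x.shape

        # Temporal pass: each variable attends over its own history.
        xt = x.reshape(b * v, t, d)
        xt = xt + self.time_attn(xt, xt, xt, need_weights=False)[0]

        # Cross-variable pass: at each step, variables attend to each other.
        xv = xt.reshape(b, v, t, d).transpose(1, 2).reshape(b * t, v, d)
        xv = xv + self.var_attn(xv, xv, xv, need_weights=False)[0]

        return xv.reshape(b, t, v, d).transpose(1, 2)
```

Factoring drops the attention cost from O((V·T)²) over all variable-time tokens to roughly O(V·T² + T·V²) for the two passes.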

2. Heavy-tailed distributional heads

Real-world data is rarely normal; it often exhibits heavy tails and skewness. JointFM utilizes a flexible output layer capable of parameterizing robust, fat-tailed multivariate distributions. This enables the model to naturally capture the probability of extreme events (“black swans”) that are critical for accurate risk assessment.
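The post doesn't name the output family, but one plausible sketch is a head that parameterizes per-variable Student-t marginals; a full multivariate head would add a learned correlation structure (for example a Cholesky factor), omitted here for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentTHead(nn.Module):
    """Sketch of a heavy-tailed output layer (assumed design, per-variable)."""

    def __init__(self, d_model: int, n_vars: int):
        super().__init__()
        self.loc = nn.Linear(d_model, n_vars)
        self.scale_raw = nn.Linear(d_model, n_vars)
        self.df_raw = nn.Linear(d_model, 1)

    def forward(self, h: torch.Tensor) -> torch.distributions.StudentT:
        loc = self.loc(h)
        scale = F.softplus(self.scale_raw(h)) + 1e-6   # strictly positive
        # Degrees of freedom > 2 keep variance finite; smaller df means
        # fatter tails, i.e. higher probability of extreme moves.
        df = 2.0 + F.softplus(self.df_raw(h))
        return torch.distributions.StudentT(df, loc, scale)
```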

3. Parallel decoding for instant results

Speed is the central enabler of instant portfolio optimization. While also supporting an autoregressive mode, the model architecture is optimized for parallel decoding, allowing it to predict all future horizons simultaneously in a single forward pass. This capability—distinct from the slow, sequential generation of traditional autoregressive models—enables the generation of thousands of coherent market scenarios in milliseconds on a GPU.

The secret sauce: synthetic pre-training

Why does JointFM work so well on real data without seeing it? Synthetic pre-training.

Real historical data is often finite, noisy, and regime-specific. To build a truly general foundation model, JointFM is trained on an infinite curriculum of synthetic data generated by a flexible engine. We lead with finance because of its notoriously complex dynamics and its significance as a benchmark application for our work. However, while the domain is specialized, the core technology is universal.

  1. SDESampler: this is the core of the system. It generates complex stochastic differential equations (SDEs) with jumps, complex drifts, path-dependent memory, and regimes. It is designed to simulate any continuous-time system with stochastic components (a toy version is sketched below).
  2. FinanceSampler: to address the wide array of financial asset classes, we developed a specialized sampler that works alongside our generic engine. For the purpose of this simple benchmark comparison, we limited the selection to the most fundamental asset classes: equities, precious metals, and foreign exchange (FX).
  3. Custom extensibility: while we focused on finance, the same architecture allows us to build other samplers (e.g., for weather, energy, or sensor data) to target different domains.

This approach exposes the model to millions of regimes, ensuring it learns the fundamental physics of time-series dynamics rather than just memorizing historical patterns.
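As a toy illustration of what an SDESampler-style generator might produce (the real engine is far richer; every distribution and range below is made up):

```python
import numpy as np

def sample_synthetic_path(n_steps=252, dt=1 / 252, seed=None):
    """Euler-Maruyama path from a randomly drawn jump-diffusion (toy)."""
    rng = np.random.default_rng(seed)
    # Randomly drawn dynamics: each call yields a different "regime".
    mu = rng.normal(0.0, 0.1)          # drift
    sigma = rng.uniform(0.05, 0.5)     # diffusion
    jump_rate = rng.uniform(0.0, 5.0)  # expected jumps per year
    jump_scale = rng.uniform(0.01, 0.1)

    x = np.empty(n_steps + 1)
    x[0] = 0.0
    for t in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))         # Brownian increment
        n_jumps = rng.poisson(jump_rate * dt)     # Poisson jump count
        jumps = rng.normal(0.0, jump_scale, n_jumps).sum()
        x[t + 1] = x[t] + mu * dt + sigma * dw + jumps  # Euler-Maruyama step
    return x
```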

Performance evaluation: benchmarking against classical methods

We compared JointFM-optimized portfolios against classical Geometric Brownian Motion (GBM)-optimized portfolios as a simple baseline. Read about our experiment setup below, followed by the results.

Experimental setup 

Our portfolio optimization setup, while drawing inspiration from the NVIDIA blueprint, incorporates a few key differences. Like the blueprint, we use the same GBM simulation and Mean-CVaR optimization, but we add JointFM as an alternative scenario generator and use both our FinanceSampler and S&P 500 stock prices as input data.

Figure 2: experiment architecture. This diagram illustrates the configuration for our primary experiment using synthetic data.
  1. Input:
    • Synthetic reality: We generate complex asset histories using the FinanceSampler (SDEs with stochastic volatility, correlated drifts, etc.). This ensures we have a ground-truth multiverse of future possibilities for objective evaluation.
    • Real data (secondary check): we also plug in real historical returns (S&P 500) to confirm the model generalizes to the noisy, imperfect real world.
  2. Inference:
    • GBM—classical SDE calibration and path generation from the NVIDIA blueprint.
    • JointFM—trained on similar but not identical synthetic physics—generates 10,000+ plausible future return scenarios in milliseconds. It effectively acts as a “future oracle” that intimately understands the statistical laws governing the assets.
  3. Risk optimization:
    • A Mean-CVaR (conditional value at risk) optimizer solves for the portfolio weights that maximize risk-adjusted returns, balancing expected return against tail risk (a minimal sketch follows this list).
  4. Execution and scoring:
    • We deploy the optimal weights into the known future:
      1. Synthetic ground-truth data provides thousands of scenarios for evaluation per experiment step.
      2. Real data has one known future for every historical experiment.
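The Mean-CVaR step is standard and can be sketched with the Rockafellar-Uryasev linear program. The optimizer only consumes a scenario matrix, so GBM and JointFM scenarios plug in identically; the risk-aversion weight `lambda_` below is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def mean_cvar_weights(scenarios, alpha=0.95, lambda_=1.0):
    """Rockafellar-Uryasev Mean-CVaR LP (sketch).

    scenarios: (m, n) matrix of simulated future returns, one row per
    scenario -- from GBM or JointFM alike.
    """
    m, n = scenarios.shape
    mean_ret = scenarios.mean(axis=0)

    # Decision vector: [w (n), t (1), u (m)]; t plays the role of VaR and
    # u_i is the shortfall of scenario i beyond t.
    c = np.concatenate([
        -mean_ret,                                # maximize expected return...
        [lambda_],                                # ...minus lambda * CVaR
        np.full(m, lambda_ / ((1 - alpha) * m)),
    ])

    # Shortfall constraints: -scenarios @ w - t - u <= 0.
    A_ub = np.hstack([-scenarios, -np.ones((m, 1)), -np.eye(m)])
    b_ub = np.zeros(m)

    # Fully invested and long-only: sum(w) = 1, w >= 0, u >= 0, t free.
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(m)])[None, :]
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * m

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:n]   # optimal portfolio weights
```

For example, terminal returns from the GBM sketch earlier can feed straight in: `w = mean_cvar_weights(scenarios[:, -1] / scenarios[:, 0] - 1)`.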

Speed: simulate the future instantly

JointFM generates scenarios in milliseconds, orders of magnitude faster than even relatively simple GBM simulations.

Figure 3: comparison of simulation time. This figure shows the time required for GBM simulation versus JointFM prediction as a function of the number of future scenarios generated.

This architectural advantage enables timely reactions to market changes and makes it practical to integrate sophisticated simulation and portfolio optimization directly into an AI agent. As a result, investors can explore and discuss investment decisions in real time without additional operational overhead.

Performance on marginals: looking at one asset at a time

JointFM recovers the marginal distributions of complex assets to some extent. Below we show Q-Q (quantile-quantile) plots across percentiles for two randomly chosen assets from one anecdotal simulation/prediction.

While we clearly aim to further improve the marginal predictability, there are two things here that are critical to understand:

  1. The dynamics of financial assets are notoriously hard to predict (here 63 days ahead).  
  2. Good marginal predictions alone do not help much with risk management; it is just as critical to capture asset correlations.
Figure 4: anecdotal performance. Q-Q plots comparing the marginal predictions of the two modeling approaches.
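For reference, a marginal Q-Q comparison like Figure 4 boils down to a few lines of NumPy; the two sample arrays below are stand-ins for JointFM draws and ground-truth draws.

```python
import numpy as np

# Illustrative stand-ins: in the experiment these would be JointFM samples
# and synthetic ground-truth samples of one asset's 63-day return.
rng = np.random.default_rng(0)
predicted_returns = rng.standard_t(df=5, size=10_000) * 0.02
true_returns = rng.standard_t(df=4, size=10_000) * 0.02

percentiles = np.arange(1, 100)
pred_q = np.percentile(predicted_returns, percentiles)
true_q = np.percentile(true_returns, percentiles)
# A perfectly calibrated marginal puts every (true_q, pred_q) pair on the
# line y = x; deviations at extreme percentiles reveal mis-modeled tails.
```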

Directly comparing high-dimensional joint probability distributions is impractical. Instead, we present a simple demonstration showing that JointFM provides consistent and reliable predictions for portfolio optimization, matching or exceeding the baseline quantitative method.

Portfolio evaluation (synthetic ground truth)

To rigorously evaluate performance, we conducted 200 repeated portfolio optimization trials using synthetic data in which the true future joint distributions are known. This controlled setting allows us to directly compare JointFM-generated portfolios and our baseline against the ground-truth optimum.

The results

  • Simple returns: JointFM portfolios achieved 1.17% higher returns on average.
  • Risk-adjusted returns: the Sharpe ratios are practically identical, with JointFM slightly ahead.
Figure 5: systematic comparison. The figure highlights JointFM's performance relative to GBM, assessed through simple returns (left) and risk-adjusted returns (Sharpe ratios, right).

On the synthetic oracle data, the JointFM portfolio has a 1.17% higher return on average but at a roughly identical risk-adjusted return (Sharpe ratio), which means that the outperformance resulted from more risk-taking. Given its roughly identical performance in terms of risk-adjusted return, which is the more important metric, our first version of JointFM emerges as a fast, cheap, flexible, and simple drop-in alternative to the baseline approach.

Real-world sanity check

Addressing the potential concern that our model is only good at solving the specific synthetic problems it was trained on, we validated the approach on real S&P 500 data (Yahoo Finance). We randomly sampled 10 assets over 200 different time periods out of a universe of 391 different stocks from the S&P 500. 

The results

JointFM portfolios, as on the synthetic test datasets, showed a higher simple return. Their risk-adjusted return is approximately the same as the baseline's, slightly outperforming it. This confirms that the model has learned generalizable rules of volatility and correlation, not just memorized a specific set of data-generating processes.

Figure 6: S&P 500 stock price data comparison. This figure compares JointFM and GBM performance on S&P 500 data, showing simple returns (left) and risk-adjusted returns (Sharpe ratios, right).

Wrapping up: instant portfolio optimization

By replacing rigid statistical assumptions with a flexible, pre-trained foundation model, JointFM enables a new class of trading and risk management agents. These agents don't just react to price changes; they instantly re-simulate the future multiverse to find the best path forward. By front-loading the extensive scientific modeling into the pre-training stage, JointFM makes inference near-instantaneous.

This represents a shift from financial modeling (fitting equations) to financial AI (using foundation models), offering both the speed required for modern markets and the depth required for survival.

Should you have any questions, please contact us at research@datarobot.com.

The post The digital quant: instant portfolio optimization with JointFM appeared first on DataRobot.

AI robot vehicles learn to team up and extinguish fires in early trial

Collaborative teams of artificial intelligence-powered robots carrying extinguishing equipment on board could fight fires remotely, without placing firefighting crews directly in potentially dangerous situations. An initial soft trial of the technology has proved successful.

Zoom Upgrades Its AI

Wildly popular video meeting service Zoom is out with another AI upgrade – this time focused on beefing up its AI agents.

Observes writer Craig Hale: “AI Companion is included with paid Zoom Workplace accounts — or it can be added separately to other plans.”

Free users can also get a taste of Zoom’s most advanced AI features — within monthly limitations set out by the company.

In other news and analysis on AI writing:

*Writers Can Now Use Claude to Analyze Their WordPress Web Sites: WordPress has released a new “connector” to Claude AI that will enable webmasters to use the AI to analyze and manipulate data associated with their WordPress sites.

Observes writer Lucas Ropek: “After Claude is linked to an account, users can ask the chatbot all sorts of questions about the site data that it’s been given access to — from summarizing the site’s monthly Web traffic to conducting analysis of which posts have low user engagement.”

*ChatGPT-Maker Snaps Up OpenClaw Creator as New Hire: Peter Steinberger, creator of the virally popular OpenClaw AI agent, now works for OpenAI.

OpenClaw has triggered a sensation across the AI world for its ability to work in novel, imaginative, and highly independent ways when completing multi-step tasks.

Observes writer Duncan Riley: “OpenAI gains not only technical expertise by hiring the creator of one of the most visible open-source agent frameworks, but also credibility within a developer community.”

*OpenClaw and Similar Destined to Re-Engineer the Corporation: Highly innovative and independent AI agents like OpenClaw are destined to re-imagine how corporations are designed and run, according to writer Carl Franzen.

Expect increasing numbers of coders, for example, to give OpenClaw and similar AI agents access to corporate systems – even though the security concerns that accompany OpenClaw are extremely worrisome.

Also get ready for swarms of AI agents to complete tasks – rather than just one AI agent handling a task.

Plus, don’t be surprised when voice becomes the primary interface for your computing work, Franzen adds.

*Anthropic’s Popular AI Agent ‘Cowork’ Now Available on Windows: The Microsoft crowd now has access to the Claude Cowork AI agent, which has been wowing Mac users for the past few weeks.

One of Cowork’s key benefits is its ability to access every single file in a folder when executing an independent task that requires a number of steps.

Observes writer Michael Nunez: “The relationship between Microsoft (maker of Windows) and Anthropic has accelerated with striking speed.”

*Google’s AI Upgrade Sets New Records: Google is once again soaring to new heights with its release of Gemini 3 Deep Think, an AI reasoning engine.

Specifically, the new AI scored 84.6% on a benchmark measuring its ability to learn new skills and apply them to new tasks.

Observes writer Michael Sutter: “A score of 84.6% is a massive leap for the industry. To put this in perspective, humans average about 60% on these visual reasoning puzzles, while previous AI models often struggled to break 20%.”

*ChatGPT-Maker Answers Google’s Gains With Some of Its Own: OpenAI’s Deep Research tool is now using the more powerful GPT-5.2 AI engine from the company, according to writer Matthias Bastian.

Some key benefits with the move:

–Deep Research can be interrupted when veering off course and redirected in a more appropriate direction

–Deep Research’s reports can be displayed as full-screen size reports

–Deep Research’s progress can be tracked in real time

*Anthropic’s Safety Chief Quits: Key ChatGPT competitor Anthropic lost its safety lead last week when Mrinank Sharma departed, citing difficulty achieving what he was hired to do there.

The move dripped with irony, given that Anthropic devotes significant effort marketing itself as a “safety first” AI company.

Anthropic is the maker of Claude, one of the most popular AI chatbots on the planet.

*China’s Open-Source AI Could Upend U.S. Market: MIT Technology Review is out with a new, in-depth article warning that the rising popularity of AI created by Chinese researchers and companies could scramble the U.S.’ current dominance in AI.

China’s open-source software is incredibly attractive to many researchers and companies, given that it can be downloaded for free – and custom-tailored or improved by anyone.

Observes writer Caiwei Chen: “If these open-source AI models keep getting better, they will not just offer the cheapest options for people who want access to frontier AI capabilities — they will change where innovation happens and who sets the standards.”

*AI BIG PICTURE: How to Get the Most From AI at Your Business: Ethan Mollick, co-director of Generative AI Labs, University of Pennsylvania, advises that maximizing AI success at your business requires:

–Top-down directive

–Encouraging the rank-and-file to experiment with AI on a daily basis

–Establishing an AI lab at your company to monitor and refine what employees have come up with – and then redistribute those insights for all to use

Click here for Mollick’s in-depth game plan.

Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.

The post Zoom Upgrades Its AI appeared first on Robot Writers AI.

Brain-inspired machines are better at math than expected

Neuromorphic computers modeled after the human brain can now solve the complex equations behind physics simulations — something once thought possible only with energy-hungry supercomputers. The breakthrough could lead to powerful, low-energy supercomputers while revealing new secrets about how our brains process information.

Robot Talk Episode 144 – Robot trust in humans, with Samuele Vinanzi

Claire chatted to Samuele Vinanzi from Sheffield Hallam University about how robots can tell whether to trust or distrust people.

Samuele Vinanzi is a Senior Lecturer in Robotics and Artificial Intelligence at Sheffield Hallam University. He specializes in Cognitive Robotics: an interdisciplinary field that integrates robotics, artificial intelligence, cognitive science, and psychology to create robots that perceive, reason, and interact like humans. His research focuses on enabling social collaboration between humans and robots, particularly emotional intelligence, intention reading, and artificial trust. His recent book, “In Robots We Trust”, explores trust relationships between humans and robots.

The insect-inspired bionic eye that sees, smells and guides robots

The compound eyes of the humble fruit fly are a marvel of nature. They are wide-angle and can process visual information several times faster than the human eye. Inspired by this biological masterpiece, researchers at the Chinese Academy of Sciences have developed an insect-scale compound eye that can both see and smell, potentially improving how drones and robots navigate complex environments and avoid obstacles.