Category: Robots in Business

A multi-camera differential binocular vision sensor for robots and autonomous systems

Recent technological advances have enabled the development of increasingly sophisticated sensors, which can help to advance the sensing capabilities of robots, drones, autonomous vehicles, and other smart systems. Many of these sensors, however, rely on individual cameras, so the accuracy of the measurements they collect is limited by each camera's field of view (FOV).

The Shift from Models to Compound AI Systems

AI caught everyone’s attention in 2023 with Large Language Models (LLMs) that can be instructed to perform general tasks, such as translation or coding, just by prompting. This naturally led to an intense focus on models as the primary ingredient in AI application development, with everyone wondering what capabilities new LLMs will bring. As more developers begin to build using LLMs, however, we believe that this focus is rapidly changing: state-of-the-art AI results are increasingly obtained by compound systems with multiple components, not just monolithic models.

For example, Google’s AlphaCode 2 set state-of-the-art results in programming through a carefully engineered system that uses LLMs to generate up to 1 million possible solutions for a task and then filter down the set. AlphaGeometry, likewise, combines an LLM with a traditional symbolic solver to tackle olympiad problems. In enterprises, our colleagues at Databricks found that 60% of LLM applications use some form of retrieval-augmented generation (RAG), and 30% use multi-step chains. Even researchers working on traditional language model tasks, who used to report results from a single LLM call, are now reporting results from increasingly complex inference strategies: Microsoft wrote about a chaining strategy that exceeded GPT-4’s accuracy on medical exams by 9%, and Google’s Gemini launch post measured its MMLU benchmark results using a new CoT@32 inference strategy that calls the model 32 times, which raised questions about its comparison to just a single call to GPT-4. This shift to compound systems opens many interesting design questions, but it is also exciting, because it means leading AI results can be achieved through clever engineering, not just scaling up training.
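To make the "call the model many times, then aggregate" idea concrete, here is a minimal sketch of a sample-and-vote inference strategy in the spirit of CoT@32. It is not the implementation used by any of the cited systems; the `generate` and `extract_answer` callables are hypothetical placeholders for a single sampled model call and an answer parser.

```python
# Minimal sketch of a sample-and-aggregate inference strategy: sample k
# completions and return the most common extracted answer (majority vote).
# `generate` and `extract_answer` are hypothetical placeholders, not real APIs.
from collections import Counter
from typing import Callable


def sample_and_vote(
    prompt: str,
    generate: Callable[[str], str],        # one sampled model completion per call
    extract_answer: Callable[[str], str],  # pulls the final answer out of a completion
    k: int = 32,
) -> str:
    """Call the model k times on the same prompt and majority-vote the answers."""
    answers = [extract_answer(generate(prompt)) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]
```

The same skeleton covers AlphaCode-style pipelines if the voting step is replaced by a filtering or scoring function over the candidate set; the point is that the "system" wraps additional logic around repeated model calls rather than relying on a single monolithic invocation.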

In this post, we analyze the trend toward compound AI systems and what it means for AI developers. Why are developers building compound systems? Is this paradigm here to stay as models improve? And what are the emerging tools for developing and optimizing such systems—an area that has received far less research than model training? We argue that compound AI systems will likely be the best way to maximize AI results in the future, and might be one of the most impactful trends in AI in 2024.


RoboTool enables creative tool use in robots

If an ingredient is out of reach on a high pantry shelf, it wouldn't take you more than a few seconds to find a step stool, or maybe just a chair, to stand on to bring the ingredient within your reach. This simple solution is the outcome of a complex problem-solving approach researchers call creative tool use.

Cult of the drone: At the two-year mark, UAVs have changed the face of war in Ukraine—but not outcomes

Unmanned aerial vehicles, or drones, have been central to the war in Ukraine. Some analysts claim that drones have reshaped war, yielding not just tactical-level effects but also shaping operational and strategic outcomes.