
3D printed robots with bones, ligaments, and tendons

3D printing is advancing rapidly, and the range of printable materials has expanded considerably. While the technology was previously limited to fast-curing plastics, it is now suitable for slow-curing plastics as well, which offer decisive advantages: better elasticity and greater durability and robustness.

Artificial sensor similar to a human fingerprint that can recognize fine fabric textures

An artificial sensory system that is able to recognize fine textures—such as twill, corduroy and wool—with a high resolution, similar to a human finger, is reported in a Nature Communications paper. The findings may help improve the subtle tactile sensation abilities of robots and human limb prosthetics and could be applied to virtual reality in the future, the authors suggest.

Introducing Titan, Amazon’s new mobile robot that can lift up to 2,500 pounds

Amazon is deploying a new robot to take on the extra-heavy lifting in its fulfillment centers. The new machine, called Titan, is a mobile robot that will help carry products across Amazon's fulfillment centers, supporting safety and efficiency in its operations.

A system that allows robots to use tools creatively by leveraging large language models

Researchers at Carnegie Mellon University and Google DeepMind recently developed RoboTool, a system that can broaden the capabilities of robots, allowing them to use tools in more creative ways. This system, introduced in a paper published on the arXiv preprint server, could soon bring a new wave of innovation and creativity to the field of robotics.

Inch by inch, this machine is leading soft robotics to a more energy efficient future

Princeton researchers have developed a flexible, lightweight and energy efficient soft robot that moves without the use of any legs or rotary parts. Instead, the device uses actuators that convert electrical energy into vibrations that allow it to wiggle from point to point using only a single watt.

Ghostbuster: Detecting Text Ghostwritten by Large Language Models


The structure of Ghostbuster, our new state-of-the-art method for detecting AI-generated text.

Large language models like ChatGPT write impressively well—so well, in fact, that they’ve become a problem. Students have begun using these models to ghostwrite assignments, leading some schools to ban ChatGPT. These models are also prone to producing text with factual errors, so wary readers may want to know whether generative AI tools have been used to ghostwrite news articles or other sources before trusting them.

What can teachers and consumers do? Existing tools to detect AI-generated text sometimes do poorly on data that differs from what they were trained on. In addition, if these models falsely classify real human writing as AI-generated, they can jeopardize students whose genuine work is called into question.

Our recent paper introduces Ghostbuster, a state-of-the-art method for detecting AI-generated text. Ghostbuster works by finding the probability of generating each token in a document under several weaker language models, then combining functions based on these probabilities as input to a final classifier. Ghostbuster doesn’t need to know what model was used to generate a document, nor the probability of generating the document under that specific model. This property makes Ghostbuster particularly useful for detecting text potentially generated by an unknown model or a black-box model, such as the popular commercial models ChatGPT and Claude, for which probabilities aren’t available. We’re particularly interested in ensuring that Ghostbuster generalizes well, so we evaluated across a range of ways that text could be generated, including different domains (using newly collected datasets of essays, news, and stories), language models, or prompts.
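
To make the pipeline concrete, here is a minimal sketch of the general recipe: score each token under weaker language models, combine those per-token probabilities into document-level features, and train a simple classifier on top. The specific feature combinations, the logistic-regression classifier, and the synthetic data below are illustrative assumptions for this sketch, not the exact components from the paper (which selects feature combinations via a structured search over probability-based functions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def combine_features(logp_weak_a, logp_weak_b):
    """Turn two per-token log-probability series (from two weaker language
    models) into a fixed-length document feature vector.  The handful of
    combinations below are illustrative only; the real system searches over
    many such vector and scalar operations."""
    ratio = logp_weak_a - logp_weak_b          # per-token log-probability ratio
    return np.array([
        logp_weak_a.mean(), logp_weak_b.mean(),
        logp_weak_a.var(),  logp_weak_b.var(),
        ratio.mean(),       ratio.max(),
    ])

def train_detector(feature_matrix, labels):
    """Fit the final classifier on document features (1 = AI-generated, 0 = human)."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(feature_matrix, labels)
    return clf

# Toy demonstration with synthetic per-token log-probabilities; in practice
# these would come from scoring each document under the weaker models.
rng = np.random.default_rng(0)
docs = [(rng.normal(-5, 1, size=200), rng.normal(-6, 1, size=200)) for _ in range(40)]
X = np.stack([combine_features(a, b) for a, b in docs])
y = rng.integers(0, 2, size=len(docs))         # placeholder labels for the sketch
detector = train_detector(X, y)
print(detector.predict_proba(X[:3]))
```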

Read More

Asymmetric Certified Robustness via Feature-Convex Neural Networks


TLDR: We propose the asymmetric certified robustness problem, which requires certified robustness for only one class and reflects real-world adversarial scenarios. This focused setting allows us to introduce feature-convex classifiers, which produce closed-form, deterministic certified radii computable on the order of milliseconds.


Figure 1. Illustration of feature-convex classifiers and their certification for sensitive-class inputs. This architecture composes a Lipschitz-continuous feature map $\varphi$ with a learned convex function $g$. Since $g$ is convex, it is globally underapproximated by its tangent plane at $\varphi(x)$, yielding certified norm balls in the feature space. Lipschitzness of $\varphi$ then yields appropriately scaled certificates in the original input space.

Despite their widespread usage, deep learning classifiers are acutely vulnerable to adversarial examples: small, human-imperceptible image perturbations that fool machine learning models into misclassifying the modified input. This weakness severely undermines the reliability of safety-critical processes that incorporate machine learning. Many empirical defenses against adversarial perturbations have been proposed—often only to be later defeated by stronger attack strategies. We therefore focus on certifiably robust classifiers, which provide a mathematical guarantee that their prediction will remain constant for an $\ell_p$-norm ball around an input.
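
In symbols, a classifier $f$ is certifiably robust at an input $x$ with radius $r$ if $f(x + \delta) = f(x)$ for every perturbation $\delta$ satisfying $\|\delta\|_p \le r$; the asymmetric setting below requires this guarantee only for inputs predicted to be in the sensitive class.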

Conventional certified robustness methods incur a range of drawbacks, including nondeterminism, slow execution, poor scaling, and certification against only one attack norm. We argue that these issues can be addressed by refining the certified robustness problem to be more aligned with practical adversarial settings.
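
To illustrate the kind of closed-form, deterministic certificate that Figure 1 describes, here is a minimal numerical sketch. It stands in a toy max-of-affine convex function for the learned convex function $g$ and an identity map for the Lipschitz feature map $\varphi$; all names, sizes, and the choice of $\ell_2$ attack norm are assumptions of this sketch rather than details from the paper.

```python
import numpy as np

# Toy convex "g": a max of affine pieces, g(z) = max_i (W[i] @ z + b[i]).
# Max-of-affine functions are convex, and a subgradient at z is simply the
# weight row of the active (argmax) piece.
rng = np.random.default_rng(0)
d, k = 4, 8                                   # feature dim, number of pieces (toy sizes)
W, b = rng.normal(size=(k, d)), rng.normal(size=k)

def g(z):
    return float(np.max(W @ z + b))

def subgradient(z):
    return W[int(np.argmax(W @ z + b))]

def certified_radius(x, phi=lambda v: v, lip=1.0):
    """Closed-form l2 certificate for the sensitive class, assuming `phi` is
    `lip`-Lipschitz (the identity map is 1-Lipschitz).  Returns 0.0 if x is
    not predicted sensitive, i.e. g(phi(x)) <= 0."""
    z = phi(x)
    margin = g(z)
    if margin <= 0:
        return 0.0
    grad = subgradient(z)
    # Convexity gives the tangent-plane lower bound g(y) >= g(z) + grad @ (y - z),
    # so the prediction cannot flip while ||y - z||_2 * ||grad||_2 < g(z);
    # dividing by the Lipschitz constant of phi scales this back to input space.
    return margin / (lip * float(np.linalg.norm(grad)))

x = rng.normal(size=d)
print(f"margin g(phi(x)) = {g(x):.3f}, certified l2 radius = {certified_radius(x):.3f}")
```

In the full method, $g$ is a learned convex network and $\varphi$ a simple Lipschitz-continuous feature map, but the certificate retains this same one-line form, which is what makes it deterministic and fast to evaluate.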

Read More