Video shows how swarms of miniature robots simultaneously clean up microplastics and microbes
Emergency department packed to the gills? Someday, AI may help
Researchers use foundation models to discover new cancer imaging biomarkers
Scientists create robot snails that can move independently using tracks or work together to climb
The rise of modular robots and the importance of drive train design
Caterbot? Robatapillar? It crawls with ease through loops and bends
Engineers create a caterpillar-shaped robot that splits into segments, reassembles, hauls and crawls
Spring Launch ‘24: Meet DataRobot’s Newest Features to Confidently Build and Deploy Production-Grade GenAI Applications
The most inspiring part of my role is traveling around the globe, meeting our customers from every sector, and seeing, learning from, and collaborating with them as they build GenAI solutions and put them into production. It’s thrilling to see our customers actively advancing their GenAI journey. But many in the market are not, and the gap is growing.
AI leaders are understandably struggling to move beyond the prototype and experimental stage, and it’s our mission to change that. At DataRobot, we call this the “confidence gap.” It’s the trust, safety, and accuracy concerns surrounding GenAI that are holding teams back, and we are committed to addressing them. That gap is the core focus of our Spring ’24 launch and its groundbreaking features.
This release focuses on the three most significant hurdles to unlocking value with GenAI.
First, we’re bringing you enterprise-grade open-source LLM support and a suite of evaluation and testing metrics, to help you and your teams confidently create production-grade AI applications. Second, to help you safeguard your reputation and prevent risk from AI apps running amok, we’re bringing you real-time intervention and moderation for all your GenAI applications. And finally, to ensure your entire fleet of AI assets stays in peak performance, we’re bringing you first-of-its-kind multi-cloud and hybrid AI observability to help you fully govern and optimize all of your AI investments.
Confidently Create Production-Grade AI Applications
There is a lot of talk about fine-tuning an LLM, but we have seen that the real value lies in fine-tuning your generative AI application. It’s tricky, though. Predictive AI has thousands of easily accessible models and common data science metrics for benchmarking and assessing performance; until now, generative AI has had no equivalent.
In our Spring ’24 launch, you get enterprise-grade support for any open-source LLM. We’ve also introduced a full set of LLM evaluation, testing, and assessment metrics. Now you can fine-tune your generative AI application experience and ensure its reliability and effectiveness.
Enterprise-Grade Open-Source LLM Hosting
Privacy, control, and flexibility remain critical for all organizations when it comes to LLMs. Until now, there has been no easy answer for AI leaders, who have been stuck choosing between the vendor lock-in risk of major API-based LLMs that could become sub-optimal and expensive in the near future, figuring out how to stand up and host an open-source LLM themselves, or custom-building, hosting, and maintaining their own LLM.
With our Spring Launch, you have access to the broadest selection of LLMs, allowing you to choose the one that aligns with your security requirements and use cases. Not only do you have ready-to-use access to LLMs from leading providers like Amazon, Google, and Microsoft, but you also have the flexibility to host your own custom LLMs. Additionally, our Spring ’24 Launch offers enterprise-level access to open-source LLMs, further expanding your options.
We have made hosting and using open-source foundation models like Llama, Falcon, and Mistral, including models available through Hugging Face, easy with DataRobot’s built-in LLM security and resources. We have eliminated the complex and labor-intensive manual DevOps integrations this usually requires and made it as easy as a drop-down selection.
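To give a sense of what that drop-down replaces, here is a minimal, illustrative sketch (not DataRobot’s interface) of querying a self-hosted open-source model by hand with the Hugging Face Transformers library. The model ID, prompt, and generation settings are placeholders, and real self-hosting would still require serving infrastructure, GPU provisioning, and security hardening on top of this.

```python
# Minimal sketch of hand-rolled open-source LLM hosting with Hugging Face
# Transformers. Model ID and prompt are illustrative placeholders only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open-source model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the key risks of deploying GenAI without guardrails."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```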
LLM Evaluation, Testing and Assessment Metrics
With DataRobot, you can freely choose and experiment across LLMs. We also give you advanced experimentation options, such as trying various chunking strategies, embedding methods, and vector databases. With our new LLM evaluation, testing, and assessment metrics, you and your teams now have a clear way of validating the quality of your GenAI application and LLM performance across these experiments.
With our first-of-its-kind synthetic data generation for prompt-and-answer evaluation, you can quickly and effortlessly create thousands of question-and-answer pairs. This lets you easily see how well your RAG experiment performs and stays true to your vector database.
We are also giving you an entire set of evaluation metrics. You can benchmark, compare performance, and rank your RAG experiments based on faithfulness, correctness, and other metrics to create high-quality and valuable GenAI applications.
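To make the evaluation idea concrete, below is a deliberately simplified sketch of scoring RAG outputs on a faithfulness-style metric. It is not DataRobot’s implementation: the score is a crude lexical-overlap proxy (production-grade metrics typically use an LLM judge), and the question/answer pairs stand in for the synthetically generated ones described above.

```python
# Simplified, illustrative RAG evaluation: "faithfulness" approximated as the
# fraction of answer tokens that also appear in the retrieved context.

def faithfulness(answer: str, context: str) -> float:
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

# Hypothetical synthetic question/answer pairs drawn from a vector database.
test_cases = [
    {
        "question": "What does the warranty cover?",
        "context": "The warranty covers parts and labor for two years.",
        "answer": "Parts and labor are covered for two years.",
    },
]

for case in test_cases:
    score = faithfulness(case["answer"], case["context"])
    print(f"{case['question']!r}: faithfulness={score:.2f}")
```

Ranking experiments then amounts to running the same test set through each chunking, embedding, and vector database configuration and comparing the aggregate scores.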
And with DataRobot, it’s always about choice. You can do all of this with low code or in our fully hosted notebooks, which also offer a rich set of new codespace functionality that eliminates infrastructure and resource management and facilitates easy collaboration.
Observe and Intervene in Real-Time
The biggest concern I hear from AI leaders about generative AI is reputational risk. There are already plenty of news articles about GenAI applications exposing private data, and about courts holding companies accountable for the promises their GenAI applications made. In our Spring ’24 Launch, we’ve addressed this issue head-on.
With our rich library of customizable guards, workflows, and notifications, you can build a multi-layered defense to detect and prevent unexpected or unwanted behaviors across your entire fleet of GenAI applications in real time.
Our library of pre-built guards can be fully customized to prevent prompt injections and toxicity, detect PII, mitigate hallucinations, and more. Our moderation guards and real-time intervention can be applied to all of your generative AI applications – even those built outside of DataRobot, giving you peace of mind that your AI assets will perform as intended.
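As a rough illustration of the pattern rather than DataRobot’s guard library, the sketch below wraps a model call with a pre-check for common prompt-injection phrases and a post-check for PII in the response. The regex patterns, marker strings, and call_llm stub are all hypothetical placeholders.

```python
# Illustrative guard pipeline: screen the prompt before it reaches the LLM and
# screen the response before it reaches the user. Patterns are placeholders.
import re

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like pattern
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
]

INJECTION_MARKERS = ["ignore previous instructions", "disregard your system prompt"]

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call."""
    return "This is a placeholder response."

def guarded_completion(prompt: str) -> str:
    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return "Request blocked: possible prompt injection."
    response = call_llm(prompt)
    if any(pattern.search(response) for pattern in PII_PATTERNS):
        return "Response withheld: possible PII detected."
    return response

print(guarded_completion("What are your support hours?"))
```

In practice, each guard would also emit a moderation event so that blocked prompts and withheld responses show up in monitoring rather than failing silently.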
Govern and Optimize Infrastructure Investments
Because of generative AI, the proliferation of new AI tools, projects, and teams working on them has increased exponentially. I often hear about “shadow GenAI” projects and how AI leaders and IT teams struggle to rein it all in. They find it challenging to get a comprehensive view, a problem compounded by complex multi-cloud and hybrid environments. The lack of AI observability opens organizations up to AI misuse and security risks.
Cross-Environment AI Observability
We’re here to help you thrive in this new normal, where AI exists in multiple environments and locations. With our Spring ’24 Launch, we’re bringing you first-of-its-kind cross-environment AI observability, giving you unified security, governance, and visibility across cloud and on-premises environments.
Your teams get to work in the tools and ways they want; AI leaders get the unified governance, security, and observability they need to protect their organizations.
Our customizable alerts and notification policies integrate with the tools of your choice, from ITSM systems to Jira and Slack, to help you reduce time-to-detection (TTD) and time-to-resolution (TTR).
Insights and visuals help your teams see, diagnose, and troubleshoot issues with your AI assets: trace prompts to the response and the supporting content in your vector database with ease, see generative AI topic drift with multi-language diagnostics, and more.
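As one illustrative example (not DataRobot’s diagnostics), topic drift can be approximated by comparing the vocabulary of recent prompts against a baseline window; the sketch below uses scikit-learn TF-IDF vectors and cosine similarity, with placeholder prompt sets and an arbitrary threshold.

```python
# Illustrative topic-drift check: compare recent prompts against a baseline
# window using TF-IDF and cosine similarity. All inputs are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

baseline_prompts = ["reset my password", "update billing address", "cancel my plan"]
recent_prompts = ["is my data used for training", "delete my personal data", "privacy policy details"]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([" ".join(baseline_prompts), " ".join(recent_prompts)])
similarity = cosine_similarity(matrix[0], matrix[1])[0, 0]

DRIFT_THRESHOLD = 0.3  # illustrative cutoff, tune per application
if similarity < DRIFT_THRESHOLD:
    print(f"Possible topic drift detected (similarity={similarity:.2f})")
else:
    print(f"No significant drift (similarity={similarity:.2f})")
```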
NVIDIA and GPU Integrations
And if you’ve made investments in NVIDIA, we’re the first and only AI platform with deep integrations across the entire surface area of NVIDIA’s AI infrastructure, from NIM microservices, to NeMo Guardrails, to the Triton Inference Server, all ready for you at the click of a button. No more managing separate installs or integration points: DataRobot makes accessing your GPU investments easy.
Our Spring ’24 launch is packed with exciting features, spanning GenAI and predictive capabilities, along with enhancements in time series forecasting, multimodal modeling, and data wrangling.
All of these new features are available in cloud, on-premise, and hybrid environments. So, whether you’re an AI leader or part of an AI team, our Spring ’24 launch sets the foundation for your success.
This is just the beginning of the innovations we’re bringing you. We have much more in store in the months ahead, so stay tuned as we work on the next wave.
Get Started
Learn more about DataRobot’s GenAI solutions and accelerate your journey today.
- Join our Catalyst program to accelerate your AI adoption and unlock the full potential of GenAI for your organization.
- See DataRobot’s GenAI solutions in action by scheduling a demo tailored to your specific needs and use cases.
- Explore our new features, and connect with your dedicated DataRobot Applied AI Expert to get started with them.
NVIDIA and Alphabet’s Intrinsic Put Next-Gen Robotics Within Grasp
Automate 2024 Product Preview
NREL Invites Robots To Help Make Wind Turbine Blades
They’re Multiplying Like Rabbits
Thousands of Free ChatGPT Competitors Pop Up on the Web
Thousands of free, alternative versions of a new AI engine released by Mark Zuckerberg of Facebook fame are popping up on the Web.
The reason: Zuckerberg released his new AI engine, dubbed Llama 3, as free, open-source code that can be downloaded and altered by anyone interested in doing a little tinkering.
This is great news for consumers, given that thousands upon thousands of AI pros are coming up with competitive — and free — AI alternatives to proprietary AI solutions like ChatGPT.
That forces market leaders like OpenAI — the maker of ChatGPT — to continually develop ever-more-sophisticated versions of their tech.
And it makes it much tougher for OpenAI and similar proprietary companies to raise prices aggressively when thousands of free alternatives abound.
In other AI writing news and analysis:
*In-Depth Guide: ShortlyAI: Infinite Words on Tap: Techopedia has come out with its in-depth take on AI writer ShortlyAI.
The verdict: ShortlyAI is a relatively simplistic AI writer that features basic auto-writing.
And while ShortlyAI is missing the advanced functions of more cutting-edge alternatives — such as the ability to write in your brand’s voice, change the tone of the writing, or use writing templates — it’s extremely cheap.
Essentially, there are no limits on the amount of writing you can auto-generate with ShortlyAI, which bills at $64 per month.
*Zuckerberg’s New AI Gets a Thumbs Up: Early reactions from users of a new AI engine from Facebook parent Meta — dubbed ‘Llama 3’ — are trending positive.
While Llama 3 still runs second to state-of-the-art tech like GPT-4, it’s good enough to give ChatGPT a run for its money.
In fact, Laura Wandel — a software engineer with 32,000+ followers — believes the performance gap between the two is “virtually nonexistent.”
*’Smart Docs’ — Now With a Whole New Meaning: The ability to use tech like ChatGPT to quickly source ‘just the insights you need’ from documents represents a paradigm shift in written communication, according to writer John Bate.
With AI, lawyers and laymen alike no longer need to read through a painfully long legal document to distill the takeaway they need, Bate says.
Observes Bate: “Faced with a very complex legal contract in an unfamiliar language, you are able to ask ‘What is this about?’, ‘Who are the contracting parties?’, ‘What is the expiry date?’ or ‘What are the penalty clauses for breach?’ — and receive a full answer in your own language.”
The advent of “self-aware, communicative enterprise documents could arguably become the most important advance in the automation of documents since the invention of the printing press,” Bate adds.
*ChatGPT Competitor Claude Goes Corporate: Claude, a close competitor to ChatGPT, is now available in a version aimed at businesses.
Dubbed Team, the business-grade plan offers increased usage limits, administrative tools, and the ability to work with more data at once than less expensive tiers.
Team runs $30 per user per month, with a minimum of five users.
*AI News Chef Offers Bite-Sized Updates: Otherweb has rolled out a new tool that answers a news question with a single, coherent summary and hotlinked references.
Dubbed ‘News Concierge,’ the AI tool works with 900+ news sources across 50+ countries.
Says Alex Fink, CEO, Otherweb: “Because Otherweb is a public benefit corporation, we are focused on information quality above all else.
“We are not trying to maximize your time in the app.
“Instead, we give you what you want to know right away.”
*DeepL to Grammarly: “Hold My Beer — I’ve Got This!” Writers looking for an alternative AI writing buddy may want to check out DeepL Write Pro.
Observes Jarek Kutylowski, CEO, DeepL: “Unlike common generative AI tools that auto-populate text — or rules-based grammar correction tools — DeepL Write Pro acts as a creative assistant to writers in the drafting process.”
Essentially, the tool is designed to enhance the writing process with real-time, AI-powered suggestions on word choice, phrasing, style and tone, Kutylowski adds.
*AI-Automated Lawsuits? Oh Goodie!: Lawyers and others looking for a comprehensive view of the current impact of AI on the law will want to check out this one-hour video.
Featuring two experts in AI and the law, the video examines:
~Beyond experiments: The real-world impact of AI on the law
~Legal economics: How AI is impacting the pricing and delivery of legal services
~Future outlook: What’s on the horizon for AI and the law
*I’ll Stay With Organic Writing, Thank You: Count neuroscientist Erik Hoel among those who view much of the writing and other media auto-created by AI with disgust.
Observes Hoel: “Increasingly, mounds of synthetic AI-generated outputs drift across our feeds and our searches.
“The stakes go far beyond what’s on our screens: The entire culture is becoming affected by AI’s runoff — an insidious creep into our most important institutions.”
AI Big Picture: AI’s Future: Tech Titans Spending Like It’s 1999: In the race to proliferate AI worldwide during the coming decade, tech giants like Amazon, Meta and Google are sparing no expense.
Observes writer Karen Weise: “Tens of billions of dollars are quickly being spent on behind-the-scenes technology for the industry’s AI boom.
“Nearly everyone with a foot in tech — or giant piles of money — it seems, is jumping into a spending frenzy that some believe could last for years.”
Share a Link: Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com help everyone interested in AI-generated writing.
–Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years of experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.
AMD at 55: Still Going Strong with a Unique Advantage for the Coming AI Wave
AMD hit 55 this week, and it has been an amazing ride. AMD started out as the second-source supplier of x86 to Intel because IBM, like most companies at the time, didn’t want to rely on a sole source of a […]