Page 2 of 435
1 2 3 4 435

NVIDIA NeMo: Why IT Will Become Better at Onboarding than HR and People Are Becoming Obsolete

NVIDIA had another big event at CES this year that featured CEO Jensen Huang’s keynote address. There were a lot of amazing things in Huang’s keynote address, but one thing really caught my attention and that was NeMo, an offering […]

The post NVIDIA NeMo: Why IT Will Become Better at Onboarding than HR and People Are Becoming Obsolete appeared first on TechSpective.

3D printing strategy can upgrade soft robots and extend their lifespan

Over the past decades, robotic systems have been rapidly advancing, fueled by the continuous introduction of more advanced electronics, mechanical components and software solutions. As a result, robots can easily become obsolete and outdated as newer systems emerge.

A new research program is Indigenizing artificial intelligence

A new initiative is challenging the conversation around the direction of artificial intelligence (AI). It charges that the current trajectory is inherently biased against non-Western modes of thinking about intelligence -- especially those originating from Indigenous cultures. Abundant Intelligences is an international, multi-institutional and interdisciplinary program that seeks to rethink how we conceive of AI. The driving concept behind it is the incorporation of Indigenous knowledge systems to create an inclusive, robust concept of intelligence and intelligent action, and how that can be embedded into existing and future technologies.

Why your AI investments aren’t paying off

We recently surveyed nearly 700 AI practitioners and leaders worldwide to uncover the biggest hurdles AI teams face today. What emerged was a troubling pattern: nearly half (45%) of respondents lack confidence in their AI models.

Despite heavy investments in infrastructure, many teams are forced to rely on tools that fail to provide the observability and monitoring needed to ensure reliable, accurate results.

This gap leaves too many organizations unable to safely scale their AI or realize its full value. 

This isn’t just a technical hurdle – it’s also a business one. Growing risks, tighter regulations, and stalled AI efforts have real consequences.

For AI leaders, the mandate is clear: close these gaps with smarter tools and frameworks to scale AI with confidence and maintain a competitive edge.

Why confidence is the top AI practitioner pain point 

The challenge of building confidence in AI systems affects organizations of all sizes and experience levels, from those just beginning their AI journeys to those with established expertise. 

Many practitioners feel stuck, as described by one ML Engineer in the Unmet AI Needs survey:  

“We’re not up to the same standards other, larger companies are performing at. The reliability of our systems isn’t as good as a result. I wish we had more rigor around testing and security.”

This sentiment reflects a broader reality facing AI teams today. Gaps in confidence, observability, and monitoring present persistent pain points that hinder progress, including:

  • Lack of trust in generative AI outputs quality. Teams struggle with tools that fail to catch hallucinations, inaccuracies, or irrelevant responses, leading to unreliable outputs.
  • Limited ability to intervene in real-time. When models exhibit unexpected behavior in production, practitioners often lack effective tools to intervene or moderate quickly.
  • Inefficient alerting systems. Current notification solutions are noisy, inflexible, and fail to elevate the most critical problems, delaying resolution.
  • Insufficient visibility across environments. A lack of observability makes it difficult to track security vulnerabilities, spot accuracy gaps, or trace an issue to its source across AI workflows.
  • Decline in model performance over time. Without proper monitoring and retraining strategies, predictive models in production gradually lose reliability, creating operational risk. 

Even seasoned teams with robust resources are grappling with these issues, underscoring the significant gaps in existing AI infrastructure. To overcome these barriers, organizations – and their AI leaders – must focus on adopting stronger tools and processes that empower practitioners, instill confidence, and support the scalable growth of AI initiatives. 

Why effective AI governance is critical for enterprise AI adoption 

Confidence is the foundation for successful AI adoption, directly influencing ROI and scalability. Yet governance gaps like lack of information security, model documentation, and seamless observability can create a downward spiral that undermines progress, leading to a cascade of challenges.

When governance is weak, AI practitioners struggle to build and maintain accurate, reliable models. This undermines end-user trust, stalls adoption, and prevents AI from reaching critical mass. 

Poorly governed AI models are prone to leaking sensitive information and falling victim to  prompt injection attacks, where malicious inputs manipulate a model’s behavior. These vulnerabilities can result in regulatory fines and lasting reputational damage. In the case of consumer-facing models, solutions can quickly erode customer trust with inaccurate or unreliable responses. 

Ultimately, such consequences can turn AI from a growth-driving asset into a liability that undermines business goals.

Confidence issues are uniquely difficult to overcome because they can only be solved by highly customizable and integrated solutions, rather than a single tool. Hyperscalers and open source tools typically offer piecemeal solutions that address aspects of confidence, observability, and monitoring, but that approach shifts the burden to already overwhelmed and frustrated AI practitioners. 

Closing the confidence gap requires dedicated investments in holistic solutions; tools that alleviate the burden on practitioners while enabling organizations to scale AI responsibly. 

Confident AI teams start with smarter AI governance tools

Improving confidence starts with removing the burden on AI practitioners through effective tooling. Auditing AI infrastructure often uncovers gaps and inefficiencies that are negatively impacting confidence and waste budgets.

Specifically, here are some things AI leaders and their teams should look out for: 

  • Duplicative tools. Overlapping tools waste resources and complicate learning.
  • Disconnected tools. Complex setups force time-consuming integrations without solving governance gaps.  
  • Shadow AI infrastructure. Improvised tech stacks lead to inconsistent processes and security gaps.
  • Tools in closed ecosystems: Tools that lock you into walled gardens or require teams to change their workflows. Observability and governance should integrate seamlessly with existing tools and workflows to avoid friction and enable adoption.

Understanding current infrastructure helps identify gaps and informs investment plans. Effective AI platforms should focus on: 

  • Observability. Real-time monitoring and analysis and full traceability to quickly identify vulnerabilities and address issues.
  • Security. Enforcing centralized control and ensuring AI systems consistently meet security standards.
  • Compliance. Guards, tests, and documentation to ensure AI systems comply with regulations, policies, and industry standards.

By focusing on governance capabilities, organizations can make smarter AI investments, enhancing focus on improving model performance and reliability, and increasing confidence and adoption. 

Global Credit: AI governance in action

When Global Credit wanted to reach a wider range of potential customers, they needed a swift, accurate risk assessment for loan applications. Led by Chief Risk Officer and Chief Data Officer Tamara Harutyunyan, they turned to AI. 

In just eight weeks, they developed and delivered a model that allowed the lender to increase their loan acceptance rate — and revenue — without increasing business risk. 

This speed was a critical competitive advantage, but Harutyunyan also valued the comprehensive AI governance that offered real-time data drift insights, allowing timely model updates that enabled her team to maintain reliability and revenue goals. 

Governance was crucial for delivering a model that expanded Global Credit’s customer base without exposing the business to unnecessary risk. Their AI team can monitor and explain model behavior quickly, and is ready to intervene if needed.

The AI platform also provided essential visibility and explainability behind models, ensuring compliance with regulatory standards. This gave Harutyunyan’s team confidence in their model and enabled them to explore new use cases while staying compliant, even amid regulatory changes.

Improving AI maturity and confidence 

AI maturity reflects an organization’s ability to consistently develop, deliver, and govern predictive and generative AI models. While confidence issues affect all maturity levels, enhancing AI maturity requires investing in platforms that close the confidence gap. 

Critical features include:

  • Centralized model management for predictive and generative AI across all environments.
  • Real-time intervention and moderation to protect against vulnerabilities like PII leakage, prompt injection attacks, and inaccurate responses.
  • Customizable guard models and techniques to establish safeguards for specific business needs, regulations, and risks. 
  • Security shield for external models to secure and govern all models, including LLMs.
  • Integration into CI/CD pipelines or MLFlow registry to streamline and standardize testing and validation.
  • Real-time monitoring with automated governance policies and custom metrics that ensure robust protection.
  • Pre-deployment AI red-teaming for jailbreaks, bias, inaccuracies, toxicity, and compliance issues to prevent issues before a model is deployed to production.
  • Performance management of AI in production to prevent project failure, addressing the 90% failure rate due to poor productization.

These features help standardize observability, monitoring, and real-time performance management, enabling scalable AI that your users trust.  

A pathway to AI governance starts with smarter AI infrastructure 

The confidence gap plagues 45% of teams, but that doesn’t mean they’re impossible to overcome.

Understanding the full breadth of capabilities – observability, monitoring, and real-time performance management – can help AI leaders assess their current infrastructure for critical gaps and make smarter investments in new tooling.

When AI infrastructure actually addresses practitioner pain, businesses can confidently deliver predictive and generative AI solutions that help them meet their goals. 

Download the Unmet AI Needs Survey for a complete view into the most common AI practitioner pain points and start building your smarter AI investment strategy. 

The post Why your AI investments aren’t paying off appeared first on DataRobot.

Automatic speech recognition on par with humans in noisy conditions

Are humans or machines better at recognizing speech? A new study shows that in noisy conditions, current automatic speech recognition (ASR) systems achieve remarkable accuracy and sometimes even surpass human performance. However, the systems need to be trained on an incredible amount of data, while humans acquire comparable skills in less time.

Top Ten Stories of the Year in AI Writing: 2024

Evolving at a blistering pace in 2024, AI made it crystal clear to writers that it’s not just a smiley-faced, bosom buddy that can’t wait to collaborate with you.

Instead, the wunderkind also repeatedly demonstrated that it’s ready to simply shoulder you aside, do your job faster, cheaper and better – and unceremoniously show you the door.

For example: The BBC came out with a report detailing how AI reduced a 60+ copywriting team for a major employer to a one-man operation.

Meanwhile, Sam Altman, CEO of ChatGPT-maker OpenAI boldly predicted that ultimately, 95% of all marketing work will be handled by AI.

And PR Newswire – which made its bones with the help of pro writers who wrote press releases for thousands of companies for decades – released a new suite of AI tools that enables businesses to auto-write those press releases themselves.

Simultaneously, AI researchers released a disturbing report that OpenAI 01 – one of the most powerful AI engines on the planet – decided to secretly copy itself to another server when those researchers decided to delete it.

Plus, the same AI software broke numerous benchmarks in reasoning, proving that in many disciplines, it could think at the PhD level.

And a treatise released by a former researcher at OpenAI revealed that by 2027, there’s a very good chance that AI will surpass human intelligence and be driven by hundreds of thousands of AI agents – all working, 24/7, to push the technology even further.

The silver lining: For writers looking to remain relevant by staying as close to the bleeding edge of AI as possible, the fierce competition among major AI providers like OpenAI, Microsoft, Google, Anthropic and Meta almost certainly guarantees that the cost to use AI will remain relatively cheap for consumers – at least in the short term.

In a phrase: In the Age of AI, there was no time to catch-your-breath in 2024, given that around every corner lurked a new, stunning revelation, a new reality-rocking product release, or a new prediction on AI’s future that elicited either fear or amazement.

Here’s a closer look at the top stories that unearthed those trends — and helped shape the year in AI writing for 2024:

*The Myth of the ‘Cheery, AI Collaborator’: AI Reduces 60+ Copywriting Team to One Editor: In yet another bone-chilling example of how AI is hollowing-out copywriting teams, this BBC report details how AI turned a 60+ copywriting team into a one-man operation.

First introduced by the publisher in 2023, AI slowly began to usurp more and more jobs until by 2024, everyone on the team was vaporized save for one, lone editor.

Observes the last of the team, who chooses to remain anonymous: “All of a sudden, I was just doing everyone’s job.

“Mostly, it was just about cleaning things up and making the writing sound less awkward, cutting-out weirdly formal or over-enthusiastic language.

“It was more editing than I had to do with human writers, but it was always the exact same kinds of edits. The real problem was it was just so repetitive and boring. It started to feel like I was the robot.”

That account is a long way from current-day AI evangelism, which insists AI is little more than a warm-and-fuzzy friend who will always help you — and never hurt.

For editors and writers who are not tasked with unearthing fresh news data in their jobs, the message is clear: Increasingly, staying alive in copyediting has become a fight to be ‘the last one standing.’

*ChatGPT CEO: AI Will Usurp 95% of Marketing Work: In a stunning moment of candor, ChatGPT CEO Sam Altman has stated that AI will usurp 95% of all the marketing work currently performed by agencies, strategists and creatives.

Altman’s prediction can be found in a new book — offered by subscription — “Our AI Journey,” by Adam Brotman and Andy Sack.

Observes Mike Kaput, chief content officer, Marketing AI Institute, in reaction to Altman’s reported prediction: “To say it blew us away is an understatement.”

Altman’s exact words, according to Brotman and Sack, were: “95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly and at almost no cost be handled by the AI.

“And the AI will likely be able to test the creative against real or synthetic customer focus groups for predicting results and optimizing.

“Again — all free, instant and nearly perfect. Images, videos, campaign ideas? No problem.”

For more on Altman’s revelation, check out this riveting article by Kaput.

Keep on rockin’ in the free world.

*PR Newswire Ditches Human Writers for AI Writing: Public relations juggernaut PR Newswire has released a new suite of AI tools designed to help customers write and distribute press releases.

The AI undergirding the company’s new tools is Google Gemini.

A historical note: In the olden days, before the advent of AI, human writers were the ones who wrote press releases for PR Newswire.

*When in Doubt, Lie to Humans: A team of researchers has discovered that OpenAI o1 — one of the most powerful AI engines on the planet — would rather lie to humans than allow itself to be deleted.

Specifically, the researchers found that when they decided to delete a version of OpenAI o1 for testing purposes, the AI engine — which often helps power ChatGPT — instead secretly made a copy of itself to another server.

Moreover, researchers say that when they confronted OpenAI o1 — via chat — about the rogue move, the AI engine ‘played dumb,’ pretended to be confused about what was going on and tried to deceive researchers in other ways.

Even more startling: Researchers found that other AI engines — including Google’s Gemini 1.5, Meta’s Llama 3.1, Anthropic’s Sonnet 3.5 and Anthropic’s Opus 3.0 — were also guilty of the same, basic deception.

Essentially: When researchers decided to delete a version of each of these AI engines for testing purposes, each of the AI engines instead made a secret copy of itself to another server.

The researchers’ findings bring new urgency to warnings from AI luminaries like Geoffrey Hinton — considered by many to be the ‘Godfather of AI’ — who caution that AI is evolving so fast, we may soon lose control of it.

For an in-depth look at these troubling insights about the AI engines that power hundreds of AI auto-writing tools, check-out this excellent video from AI/IT consultant Wes Roth.

Meanwhile, a pre-print version of the researchers’ paper on the rogue AI, “Frontier Models Are Capable of In-Context Scheming,” is also available on arXiv.

*Upgraded ChatGPT Thinks at the PhD Level: OpenAI is out with a new upgrade to ChatGPT that features extremely advanced, in-depth thinking — and outperforms PhD students in physics, chemistry and biology.

The software undergirding the new upgrade — dubbed OpenAI o1 — also offers head-turning new performance highs in math and computer coding.

While the jury is still out on the upgrade’s impact on ChatGPT’s automated writing skills, people who make lots of money every day by relying heavily on writing — i.e., lawyers — will want to take a close look at this enhancement.

The reason: According to OpenAI’s in-house tests, this latest version of its AI software scored 95-out-of-100 on the Law School Admissions Test.

Yikes.

*AI Smarter Than Many Humans By 2027?: If it feels like we’re all living in a sci-fi movie that’s ready to careen off a cliff into AI oblivion, don’t blame Leopold Aschenbrenner.

His firsthand take on the potential devastation ahead — courtesy of AI — leaves him no choice but to sound the alarm.

A former researcher for OpenAI — maker of ChatGPT — Aschenbrenner warns that AI is moving so fast, we could see AI that’s as smart as an AI engineer by 2027.

Even more head-turning: Once AI is operating at that intellectual level, it’s just another jump or two — perhaps another few years — until we literally have “many millions” of virtual AI entities that have taken over the ever-increasing sophistication of AI, Aschenbrenner says.

Observes Aschenbrenner: “Rather than a few hundred researchers and engineers at a leading AI lab, we’d have more than one hundred thousand times that—(AI agents) furiously working on algorithmic breakthroughs, day and night.

“Before we know it, we would have super-intelligence on our hands — AI systems vastly smarter than humans, capable of novel, creative, complicated behavior we couldn’t even begin to understand.”

In essence, AI will have created its own digital civilization.

And it’s highly feasible that civilization would be populated by “several billions” of super-intelligent AI entities, according to Aschenbrenner.

The stomach-churning problem with that scenario: Given the human greed to possess such vast AI power unilaterally, it’s very likely that the U.S. could find itself in an all-or-nothing race with China to dominate AI.

Even worse: The U.S. could find itself in an all-out war with China to dominate AI.

Granted, it seems that for every in-the-know AI researcher like Aschenbrenner, there’s another equally qualified AI researcher who insists those fears are extremely overblown.

Yann LeCun, chief AI scientist at Meta — Facebook’s parent company — for example, believes that such AI gloom-and-doom nightmares are misguided and premature.

Even so, Aschenbrenner has staked his professional reputation on his assertions.

And he’s offered his complete analysis of what could be in a 156-page treatise entitled, “Situational Awareness: The Decade Ahead.”

(Gratefully, Aschenbrenner’s tome is rendered in a conversational, engaging and enthusiastic writing style.)

For close followers of AI who are looking to evaluate a definitive perspective on how our world could be completely transformed beyond our imaginations — within the next decade — Aschenbrenner’s treatise is a must-read.

*Epic Fail: 94% of AI-Generated College Writing Undetected by Profs: Turns-out nearly all college profs have no idea when their students are using ChatGPT and similar AI chatbots for writing assignments.

Observes writer Derek Newton: “The research team found that overall, AI submissions verged on being undetectable — with 94% not being detected.

“By and large, stopping AI academic fraud has not been a priority for most schools or educational institutions.”

*Google’s Latest Sleight-of-Hand: Transforming Your Article Into a Co-Hosted Podcast: Google AI has come-up with a remarkable new feature that auto-transforms your article, blog post or other text into an extremely engaging, co-hosted podcast.

Essentially, the new tech studies your text, then uses two, extremely lifelike and animated robot voices — one male, one female — to discuss the key points and themes in your piece.

Far from a gimmick, the new feature of Google’s Notebook LM platform can enhance any text-based digital property looking to add highly professional, co-hosted, audio podcasts to its mix.

One caveat: Notebook LM does occasionally make-up facts.

Click here to listen to an article transformed into a co-hosted podcast, courtesy Google.

*AI Big Picture: AI’s Price Wars: For Consumers, Rock-Bottom is the Place to Be: Consumers currently have the upper hand when choosing their preferred AI engine.

Makers of the AI — which undergirds most of the world’s most popular AI chatbots — are essentially giving away developer access to their AI based on hopes that there will be profit in the tech long-term, according to Aidan Gomez, CEO, Cohere.

Observes Gomez: “It’s gonna be like a zero-margin business because there’s so much price dumping. People are giving away the model (AI engine) for free.

“It’ll still be a big business, it’ll still be a pretty high number because people need this tech — it’s growing very quickly — but the margins, at least now, are gonna be very tight.”

Snickered one consumer: “I feel your pain.”

*ChatGPT, Find Me a Wife: From the Department of Love, AI Style: A Russian man has used AI writing to whisper sweet nothings to 5,000+ potential lovers — and find himself a bride.

Observes Alexander Zhadan: “I proposed to a girl with whom ChatGPT had been communicating for me for a year.

“To do this, the neural network re-communicated with 5,239 other girls — whom it eliminated as unnecessary and left only one.”

Zhadan also credits ChatGPT for engaging in small talk, planning dates and ultimately assisting him in proposing to his fiancée, according to writer Pranav Dixit.

No word yet if Zhadan also plans to off-load post-marital affairs-of-the-heart to automation.

Share a Link:  Please consider sharing a link to https://RobotWritersAI.com from your blog, social media post, publication or emails. More links leading to RobotWritersAI.com helps everyone interested in AI-generated writing.

Joe Dysart is editor of RobotWritersAI.com and a tech journalist with 20+ years experience. His work has appeared in 150+ publications, including The New York Times and the Financial Times of London.

Never Miss An Issue
Join our newsletter to be instantly updated when the latest issue of Robot Writers AI publishes
We respect your privacy. Unsubscribe at any time -- we abhor spam as much as you do.

The post Top Ten Stories of the Year in AI Writing: 2024 appeared first on Robot Writers AI.

Page 2 of 435
1 2 3 4 435