GenAIPro

New York Times Escalates Legal Fight Against AI, Demands Perplexity Stop Using Its Content
https://www.webpronews.com/new-york-times-escalates-legal-fight-against-ai-demands-perplexity-stop-using-its-content/ (Tue, 15 Oct 2024)

The battle over content usage in the era of generative AI continues, with the New York Times taking direct aim at the AI-powered search startup Perplexity. On Tuesday, according to a report in the Wall Street Journal, the Times issued a cease-and-desist notice demanding that the Bezos-backed company stop accessing and utilizing its content for AI-generated summaries. According to the letter, reviewed by The Wall Street Journal, Perplexity has allegedly violated the newspaper’s rights under copyright law.

Perplexity, which launched two years ago, has positioned itself as an emerging challenger to search giants like Google, offering users AI-generated summaries with selected sources and links. Despite the demand from the New York Times, Perplexity CEO Aravind Srinivas stated, “We are very much interested in working with every single publisher, including the New York Times. We have no interest in being anyone’s antagonist here.”

The Stakes for Publishers

The clash between Perplexity and the Times is not an isolated incident. Generative AI technologies are reshaping the landscape for media and content-driven industries, prompting publishers to recalibrate their strategies in the face of rapid advancements. News outlets, long reliant on advertising and subscription revenue, see both promise and peril in AI. The technology’s ability to analyze data and create content at scale offers efficiency, but it also introduces new risks of misuse and content theft.

The Times has been proactive in protecting its content, and this isn’t the first time it has taken legal action to curb AI firms from exploiting its journalism. The publisher has also filed a lawsuit against OpenAI, the creator of ChatGPT, for alleged copyright infringement. “Perplexity and its business partners have been unjustly enriched by using, without authorization, The Times’s expressive, carefully written and researched, and edited journalism without a license,” the Times wrote in its notice to Perplexity.

The Current Lawsuit Against OpenAI

The New York Times’s legal action against OpenAI further highlights the intensifying struggle between publishers and AI companies over content rights. The lawsuit, filed late last year, accuses OpenAI of using millions of the Times’s articles without permission to train its language models, including ChatGPT. The Times claims that OpenAI’s actions constitute copyright infringement, as its chatbot generates summaries and responses based on the expressive content of the Times’s journalism.

OpenAI, for its part, has denied any wrongdoing, arguing that the data used in training ChatGPT falls under fair use, a defense often invoked by AI companies. The company also contends that some of the Times’ tests in support of the lawsuit were specifically designed to provoke outputs resembling original articles, which OpenAI claims were not representative of typical chatbot responses.

The implications of this lawsuit extend beyond just OpenAI and the Times. If successful, it could set a precedent for how AI companies can legally access and use publisher content, shaping the future of generative AI models and the scope of fair use. This legal battle underscores the broader concerns that media companies have about AI scraping and summarizing their work without appropriate compensation or licensing agreements.

The lawsuit against OpenAI shares many parallels with the current demands made to Perplexity, as both companies have been accused of unauthorized use of copyrighted content. As the generative AI landscape evolves, the outcomes of these legal actions could have significant ramifications for the boundaries between content ownership and technological innovation.

Perplexity’s Response

Perplexity has reportedly assured the Times in the past that it would stop using crawling technology that circumvents website restrictions, but the Times asserts that the company’s assurances have not been honored. The Times asked Perplexity to provide detailed information on how it has been accessing the publisher’s website despite the Times’s preventative measures.

In response, Srinivas emphasized that Perplexity “isn’t ignoring the Times’s efforts to block crawling of its site.” He added that the company plans to address the issues raised in the legal notice by the October 30 deadline. Perplexity has previously struck a handful of deals with publishers, though media companies have described the startup’s terms as less favorable compared to the lucrative licensing agreements that others, like OpenAI, have offered.

Perplexity’s Challenge to Google

Perplexity is backed by Jeff Bezos, and while the company is still a small player compared to Google, it has ambitious plans. In September, Perplexity reported processing 340 million searches, a tiny fraction of Google’s volume but still indicative of growing interest. Perplexity plans to introduce ads under its AI-generated responses later this month, with the company pledging to share up to 25% of the ad revenue with publishing partners whose content it utilizes.

The use of AI-generated search summaries is becoming an increasingly sensitive issue, as traditional publishers worry that users who find information from AI summaries may no longer click through to the full articles. Perplexity is sending some traffic to publishers’ sites, but the volume is still relatively small. According to data from digital measurement firm Similarweb, referrals from Perplexity to the Times’s website increased eightfold over the year ending in August 2024, but they remain a fraction of the traffic driven by Google.

Broader Concerns Across Media

The New York Times is not alone in raising concerns about Perplexity’s practices. Other major media companies, including Forbes and Condé Nast, have accused Perplexity of using their content without permission. Forbes alleged that Perplexity used its content to create stories “extremely similar” to the original reporting. “Any unauthorized use of Forbes’ Intellectual Property is a violation of Forbes’ intellectual property rights, depriving Forbes of those rights and threatening its reputation and goodwill,” Forbes wrote in a notice to Perplexity.

These grievances are part of a larger conversation within the media industry regarding the balance between AI innovation and intellectual property protection. Some publishers have opted to sign licensing deals with AI companies—OpenAI has agreements with media organizations such as News Corp (the parent of The Wall Street Journal), Dotdash Meredith, and Politico owner Axel Springer—that compensate them for the use of their content.

The Complex Dynamics of AI Content Usage

The Times and other publishers have long taken steps to block AI firms from scraping their content without permission. One of the key measures is the Robots Exclusion Protocol, a plain-text robots.txt file placed at a site’s root that tells crawlers which content they should not scrape, but because compliance with the file is voluntary, enforcement remains a challenge (a brief sketch of how it works follows below). As Perplexity and similar startups continue to gain traction, media companies face the ongoing task of safeguarding their content.
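
To make the mechanism concrete, here is a minimal illustrative sketch using Python’s standard library. The publisher domain is hypothetical, and the crawler tokens shown ("GPTBot", "PerplexityBot") are the ones OpenAI and Perplexity document publicly, so treat the exact names as assumptions that can change.

```python
# Minimal sketch: checking what a site's robots.txt permits.
# A publisher blocking AI crawlers might publish a file like:
#
#   User-agent: GPTBot
#   Disallow: /
#
#   User-agent: PerplexityBot
#   Disallow: /
#
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://publisher.example.com/robots.txt")  # hypothetical site
rp.read()

# The crawler tokens below are assumptions based on public documentation.
for agent in ("GPTBot", "PerplexityBot", "Googlebot"):
    allowed = rp.can_fetch(agent, "https://publisher.example.com/2024/some-article.html")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Nothing in the file technically prevents a non-compliant crawler from fetching pages anyway, which is precisely why publishers say enforcement is so hard.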

While Perplexity is attempting to carve out its own niche in the competitive search market, the startup is walking a fine line. Its current valuation stands at approximately $1 billion, following a new funding deal earlier this year. Most of its revenue currently comes from a subscription offering priced at $20 per month, which provides users access to more advanced AI capabilities. However, monetizing its AI-generated search through ads—and sharing that revenue with publishers—is a crucial part of its strategy going forward.

A Legal Landscape in Flux

The ongoing disputes between Perplexity, the New York Times, and other publishers highlight the unsettled nature of the legal framework surrounding generative AI. While Perplexity has positioned itself as willing to collaborate with publishers, the path to mutually beneficial agreements is far from straightforward. As Srinivas put it, “We are not interested in being anyone’s antagonist here.” Nevertheless, the tensions around content scraping and copyright issues suggest that the broader fight over content usage by AI is only beginning.

Publishers are finding themselves in a challenging position—embracing technological advancements while safeguarding their core assets. As more media companies weigh legal actions, partnerships, or licensing deals, the industry is grappling with how best to coexist with generative AI firms in a way that preserves both innovation and the value of journalistic content.

The next few months may prove pivotal as Perplexity responds to the Times’s cease-and-desist notice and as other publishers decide whether to follow a similar course. The questions raised by the use of AI in news search—including how to protect original content and fairly compensate creators—remain unresolved, and how these issues play out could define the future relationship between media and artificial intelligence.

OpenAI Canvas Is a New Way to Write and Code
https://www.webpronews.com/openai-canvas-is-a-new-way-to-write-and-code/ (Fri, 04 Oct 2024)

OpenAI unveiled Canvas, the company’s “new interface for working with ChatGPT on writing and coding projects that go beyond simple chat.”

OpenAI has released a slew of ChatGPT products and improved models, but its latest release is aimed specifically at writers and coders. Although writers and coders already use ChatGPT, Canvas improves on the experience in important ways.

People use ChatGPT every day for help with writing and code. Although the chat interface is easy to use and works well for many tasks, it’s limited when you want to work on projects that require editing and revisions. Canvas offers a new interface for this kind of work.

With canvas, ChatGPT can better understand the context of what you’re trying to accomplish. You can highlight specific sections to indicate exactly what you want ChatGPT to focus on. Like a copy editor or code reviewer, it can give inline feedback and suggestions with the entire project in mind.

You control the project in canvas. You can directly edit text or code. There’s a menu of shortcuts for you to ask ChatGPT to adjust writing length, debug your code, and quickly perform other useful actions. You can also restore previous versions of your work by using the back button in canvas.

Interestingly, Canvas is designed to open automatically when writing or coding is detected.

Canvas opens automatically when ChatGPT detects a scenario in which it could be helpful. You can also include “use canvas” in your prompt to open canvas and use it to work on an existing project.

OpenAI says Canvas shows significant improvement over baseline GPT-4o in applicable tasks.

OpenAI Canvas Results (Credit: OpenAI)

We measured progress with over 20 automated internal evaluations. We used novel synthetic data generation techniques, such as distilling outputs from OpenAI o1-preview, to post-train the model for its core behaviors. This approach allowed us to rapidly address writing quality and new user interactions, all without relying on human-generated data.

A key challenge was defining when to trigger a canvas. We taught the model to open a canvas for prompts like “Write a blog post about the history of coffee beans” while avoiding over-triggering for general Q&A tasks like “Help me cook a new recipe for dinner.” For writing tasks, we prioritized improving “correct triggers” (at the expense of “correct non-triggers”), reaching 83% compared to a baseline zero-shot GPT-4o with prompted instructions.

For writing and coding tasks, we improved correctly triggering the canvas decision boundary, reaching 83% and 94% respectively compared to a baseline zero-shot GPT-4o with prompted instructions.

OpenAI is transforming itself from a nonprofit organization into a for-profit company. A large part of that transition is demonstrating use cases for which users are willing to pay. Canvas is a big step in that direction.

Google-Commissioned Report Says AI Can Significantly Boost EU’s Economy
https://www.webpronews.com/google-commissioned-report-says-ai-can-significantly-boost-eus-economy/ (Tue, 01 Oct 2024)

A new report by Implement Consulting Group, commissioned by Google, says that AI deployment has the potential to spur significant economic growth in the EU.

AI continues to be a controversial topic, with companies, organizations, and governments trying to grasp its full impact on a range of issues, including employment, privacy, cybersecurity, ownership, and more. Many critics fear the tech will lead to mass layoffs as AI models take over jobs.

According to Implement Consulting Group’s report, not only is AI unlikely to cause an employment apocalypse, but it is poised to do the exact opposite. The report included the following findings:

Economic opportunity: Generative AI could boost the EU’s GDP by EUR 1.2-1.4 trillion, amounting to +8% GDP over ten years if widespread adoption is achieved.

The gains come from three sources: productivity increases from people working with generative AI, time freed up by generative AI’s automation potential, and the redeployment of that time to other value-creating activities.

Job implications: In the EU, 61% of jobs are expected to work together with generative AI, 32% of jobs are likely to remain unaffected by generative AI, and only 7% of jobs are deemed highly exposed to generative AI, leading to some job losses. However, new jobs in the AI-powered economy are expected to replace those lost due to automation, resulting in unchanged employment levels.

Key sectors benefitting: Generative AI can boost productivity across sectors by augmenting and improving human capabilities. In contrast to past automation, such as robots, generative AI can boost productivity in services, where 80% of its economic potential lies.

AI readiness: The EU performs well on the early foundational drivers of AI adoption that ensure a safe and reliable AI-ready environment but lags behind globally on AI innovation drivers (talent, research, development and commercialisation). Present gaps indicate that the EU risks falling behind the next wave of AI and needs to ramp up its efforts to remain competitive.

The job implications are particularly telling, with AI endangering only 7% of jobs. Set against the projected EUR 1.2-1.4 trillion boost to overall GDP, the case can be made that workers in that 7% will likely be able to find work in other roles.
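
As a quick sanity check on how the report’s headline figures fit together (the division below is our own arithmetic, not a step published in the report):

```python
# EUR 1.2-1.4 trillion described as roughly +8% of GDP implies a baseline
# EU GDP of about EUR 15-17.5 trillion -- the two figures are consistent.
low, high = 1.2e12, 1.4e12  # EUR, cumulative gain over ten years
share = 0.08                # reported as roughly +8% of GDP

for boost in (low, high):
    print(f"implied baseline GDP: EUR {boost / share / 1e12:.1f} trillion")
```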

Google’s AI Opportunity Agenda

In the wake of the report, Google’s Matt Brittin, President, Google Europe, Middle East, and Africa, unveiled the company’s AI Opportunity Agenda, a series of recommendations for governments looking to benefit from AI’s transformative impact.

The AI Opportunity Agenda includes the following points:

Investing in research and development

For the EU to truly compete in AI, it needs to make research and development a shared priority, as well as making funding more accessible. Without the right incentives to develop and commercialise AI innovation, Europe is stifling its talent and its chances of launching more home-grown tech unicorns.

Building infrastructure to support innovation

AI breakthroughs are only possible with the right high-performance computing technologies and data centres — and the renewable energy to support them. To enable AI innovation at scale, the EU will need to allocate more funding to financing such infrastructure — as well as incentivising and enabling the private sector to do the same.

Improving skills and training programmes

Technological growth will not be effective if people are left behind. Given its diversity, the EU must make sure technology benefits every business, economy and person. To do this, it needs to accelerate digital skills transformation, putting AI skills and education at the centre of a revitalised European Skills Agenda — and adding it to school curriculums.

Promoting widespread adoption

We ultimately need to ensure that AI is applied and deployed in a universally accessible and useful way. For the private sector, EU policymakers and AI developers must work together to develop outreach strategies for traditional industries and small businesses that have much to gain from AI adoption. For the public sector, member states must double down on existing initiatives to increase the public procurement of AI and develop bolder AI adoption targets.

Google clearly believes AI can and will be a positive force for good. If the study it commissioned is correct, governments inside and outside of Europe have a major opportunity in front of them.

OpenAI Set to More Than Double the Price of ChatGPT Plus
https://www.webpronews.com/openai-set-to-more-than-double-the-price-of-chatgpt-plus/ (Mon, 30 Sep 2024)

OpenAI is preparing to raise the price of ChatGPT Plus, with yearly increases reportedly bringing the price to $44 per month within five years.

OpenAI is the world’s leading AI firm, but the company is spending money at an extraordinary rate. As it transitions from a nonprofit to a for-profit structure, the pressure is on to become profitable and justify the billions of dollars being spent to develop AI models.

OpenAI is in the midst of a new round of funding, with the goal of raising “several billion dollars.” According to The New York Times, as part of its fundraising, OpenAI is sharing documents that detail its plans to achieve profitability.

Key to those plans is charging its 10 million paid ChatGPT users more per month. A ChatGPT Plus subscription currently costs $20 per month, but OpenAI plans to raise that by $2 by the end of 2024. Over the next five years, the company will continue to raise the subscription’s price until it reaches the target $44 per month.
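
For a sense of what that schedule implies, here is a rough back-of-the-envelope calculation; the assumption of evenly spaced yearly increases is ours, not something OpenAI has published:

```python
# From $22/month (after the planned late-2024 bump) to $44/month in five years:
start, target, years = 22.0, 44.0, 5

linear_step = (target - start) / years      # ~$4.40 per year if raised evenly
cagr = (target / start) ** (1 / years) - 1  # ~14.9% compound annual increase
print(f"even steps: +${linear_step:.2f}/year; compound: {cagr:.1%}/year")
```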

While expensive, the price is certainly much less than the $2,000 per month ideas that were being floated within the company in early September.

AI Still Has a Value Proposition Problem

Ultimately, OpenAI’s plans to raise prices illustrate the problem companies are still trying to address: demonstrating that AI is enough of a game-changer to be worth paying what’s required to offset its cost.

As AI firms have burned through billions of dollars developing their models, the financial sector has begun to cool on the idea of continuing to invest at the same breakneck pace. In particular, AI’s relatively limited use cases—especially compared to the hype—are dampening investor enthusiasm.

“The expectations and hype around GenAI are enormously high,” Gartner analyst Arun Chandrasekaran said in August. “So it’s not that the technology, per se, is bad, but it’s unable to keep up with the high expectations that I think enterprises have because of the enormous hype that’s been created in the market in the last 12 to 18 months.”

“To be sure, Generative AI itself won’t disappear,” Chandrasekaran explained. “But investors may well stop forking out money at the rates they have, enthusiasm may diminish, and a lot of people may lose their shirts. Companies that are currently valued at billions of dollars may be sold, or stripped for parts.”

If OpenAI does raise prices at the rate its internal documents suggest, it could prove to be a litmus test for just how much people are willing to pay for something that still has limited usefulness.

Microsoft Releases ‘Correction’ Tool to Address AI Hallucinations
https://www.webpronews.com/microsoft-releases-correction-tool-to-address-ai-hallucinations/ (Wed, 25 Sep 2024)

Microsoft has released a new tool, called “Correction,” aimed at addressing one of the biggest challenges facing the AI industry.

All AI models hallucinate, or manufacture details in response to queries. It’s unclear why the phenomenon occurs, but all AI firms are working on ways to address the problem. Microsoft’s solution is Correction, a tool that uses “Groundedness Detection” to check and correct AI-generated content.

As Microsoft describes, Groundedness Detection uses provided source documents to cross-check AI responses for accuracy.

This feature automatically detects and corrects ungrounded text based on the provided source documents, ensuring that the generated content is aligned with factual or intended references. Below, we explore several common scenarios to help you understand how and when to apply these features to achieve the best outcomes.

Groundedness Detection is available both with and without reasoning. For example, without reasoning, Groundedness Detection uses a simple true-or-false mechanism.

In the simple case without the reasoning feature, the Groundedness Detection API classifies the ungroundedness of the submitted content as true or false.

In contrast, using the Groundedness Detection feature with reasoning enabled does a better job of correcting the hallucinated content to align with the provided sources.

The Groundedness Detection API includes a correction feature that automatically corrects any detected ungroundedness in the text based on the provided grounding sources. When the correction feature is enabled, the response includes a “correction Text” field that presents the corrected text aligned with the grounding sources.

Microsoft says its new Correction feature builds on Groundedness Detection, which was first introduced in March 2024, giving customers far more control.

Since we introduced Groundedness Detection in March of this year, our customers have asked us: “What else can we do with this information once it’s detected besides blocking?” This highlights a significant challenge in the rapidly evolving generative AI landscape, where traditional content filters often fall short in addressing the unique risks posed by Generative AI hallucinations.

This is why we are introducing the correction capability. Empowering our customers to both understand and take action on ungrounded content and hallucinations is crucial, especially as the demand for reliability and accuracy in AI-generated content continues to rise.

Building on our existing Groundedness Detection feature, this groundbreaking capability allows Azure AI Content Safety to both identify and correct hallucinations in real-time before users of generative AI applications encounter them.

The company goes on to describe how the feature works step by step; a hedged sketch of what the resulting API call might look like follows the list.

  • The developer of the application needs to enable the correction capability.
  • Then, when an ungrounded sentence is detected, this triggers a new request to the generative AI model for a correction.
  • The LLM then assesses the ungrounded sentence against the grounding document.
  • If the sentence lacks any content related to the grounding document, it may be filtered out completely.
  • However, if there is content sourced from the grounding document, the foundation model will rewrite the ungrounded sentence to help ensure it aligns with the grounding document.
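
Piecing those steps together, a request to the Groundedness Detection API might look roughly like the sketch below. This is a hedged reconstruction based on the article and Microsoft’s preview documentation: the endpoint path, API version, and field names are assumptions and may differ from the current Azure AI Content Safety release.

```python
# Hypothetical sketch of calling Azure AI Content Safety's Groundedness
# Detection with the correction capability enabled. The endpoint, the
# api-version string, and the payload field names are assumptions.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-content-safety-key>"                             # placeholder

payload = {
    "domain": "Generic",
    "task": "Summarization",
    # The AI-generated text to be checked:
    "text": "The report says revenue grew 40% in 2023.",
    # The "provided source documents" the article refers to:
    "groundingSources": ["The report states that revenue grew 12% in 2023."],
    "reasoning": True,    # False would give the simple true/false classification
    "correction": True,   # enables the automatic rewrite described above
}

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # assumed preview version
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
)
result = resp.json()
print(result.get("ungroundedDetected"))  # was ungrounded content found?
print(result.get("correctionText"))      # rewrite aligned with the sources
```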

The Hallucination Problem

It remains to be seen if Groundedness Detection will completely solve the issue of AI hallucinations, but it appears to be a step in the right direction, at least until AI firms can better understand why they happen. Unfortunately, that has proved to be a difficult task, as Alphabet CEO Sundar Pichai pointed out.

“No one in the field has yet solved the hallucination problems,” Pichai said. “All models do have this as an issue.”

“There is an aspect of this which we call—all of us in the field—call it a ‘black box,’” he added. “And you can’t quite tell why it said this, or why it got it wrong.”

Even Apple CEO Tim Cook has acknowledged the problem, saying he would never claim the company’s AI models are free of the issue.

“It’s not 100 percent. But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in,” Cook replied to Washington Post columnist Josh Tyrangiel. “So I am confident it will be very high quality. But I’d say in all honesty that’s short of 100 percent. I would never claim that it’s 100 percent.”

Salesforce CEO Marc Benioff: ‘Customers So Disappointed In Microsoft Copilot’
https://www.webpronews.com/salesforce-ceo-marc-benioff-customers-so-disappointed-in-microsoft-copilot/ (Mon, 23 Sep 2024)

Salesforce CEO Marc Benioff minced no words when comparing his company’s AI agents to Microsoft’s Copilot, saying customers are disappointed with the latter.

Microsoft has invested billions in OpenAI, using its ChatGPT models as the basis for its Copilot AI. The company has integrated Copilot across its entire range of products, even unveiling a line of PCs purpose-built to utilize the AI assistant.

Despite Microsoft’s investment, Benioff says Copilot has been a major disappointment to most customers. Benioff made the comments during the company’s most recent earnings call (courtesy of The Motley Fool), in which he compared Copilot to the Agentforce AI agents Salesforce recently unveiled.

But we’re seeing that breakthrough occur because, with our new Agentforce platform, we’re going to make a quantum leap forward in AI, and that’s why I want you all at Dreamforce, because I want you to have your hands on this technology to really understand this. This is not copilots. So many customers are so disappointed in what they bought from Microsoft Copilots because they’re not getting the accuracy and the response that they want. Microsoft has disappointed so many customers with AI.

Benioff made special mention of customers’ expectations when it comes to training models, calling out Microsoft for convincing customers to accept a DIY experience instead of providing the kind of fully integrated platform Salesforce offers.

The last point is this, these customers they’re still going to build models, but it’s in our platform.

They’re still going to fine-tune those models in our platform. They’re going to still use our AI studios and build their own prompts in our platform there — all of it runs in our platform, and that’s how they deliver this incredible capability. And if you’ve seen some of the architecture and graphics and how I’ve changed the architecture of the — and how we talk about the company, it’s really about in this one, two, three approach, the apps, the data and the agents, but it’s all AI-centric. It’s all — and it’s not — you’re not going to have to DIY.

And it just — it is driving me a little crazy, as you probably heard, like when I meet with these customers and they think I need to build my own model, I have to train my own model after retrain, and they are spending a lot of money on this craziness and it’s not working. So, it is a disappointment that Microsoft has convinced so many customers and others to move some of these model companies who are just regulated at this point to be in commodities that they have to be customized for stuff. It’s not true. It can be done in a platform approach and it’s better, easier, lower cost, and we’re — this technology will — we are proving it, and we’ll show it.

Benioff’s statements are an interesting indictment of what is one of the industry’s leading AI platforms, although it seems even Microsoft may be aware that Copilot doesn’t always provide the accuracy customers want. In fact, the company even warns customers not to rely on Copilot for professional advice.

Assistive AI. AI services are not designed, intended, or to be used as substitutes for professional advice.

In contrast, as Benioff points out, Salesforce designed Agentforce agents to act with a level of autonomy that appears to be lacking in Copilot.

“Agentforce represents the Third Wave of AI—advancing beyond copilots to a new era of highly accurate, low-hallucination intelligent agents that actively drive customer success. Unlike other platforms, Agentforce is a revolutionary and trusted solution that seamlessly integrates AI across every workflow, embedding itself deeply into the heart of the customer journey. This means anticipating needs, strengthening relationships, driving growth, and taking proactive action at every touchpoint,” Benioff said when unveiling Agentforce. “While others require you to DIY your AI, Agentforce offers a fully tailored, enterprise-ready platform designed for immediate impact and scalability. With advanced security features, compliance with industry standards, and unmatched flexibility. Our vision is bold: to empower one billion agents with Agentforce by the end of 2025. This is what AI is meant to be.”

Salesforce has called Agentforce the “third wave of AI.” Only time will tell if it truly delivers on the promise, but it appears to be off to a strong start.

Do You Think Apple’s Behind in AI? Well, You’re Wrong!
https://www.webpronews.com/do-you-think-apples-behind-in-ai-well-youre-wrong/ (Sun, 22 Sep 2024)

For years, the tech community has speculated that Apple might be lagging behind in the artificial intelligence (AI) race. As companies like OpenAI, Google, and Microsoft release groundbreaking advancements in AI, it seems like Apple has stayed relatively quiet. But the reality is that Apple’s approach to AI is far more nuanced, strategic, and embedded within its ecosystem than many realize. Despite perceptions, Apple may not be trailing at all—in fact, its AI strategy could prove to be one of the most significant.

The Myth of Being “Behind”

The common refrain is that Apple hasn’t yet launched a standalone AI product akin to OpenAI’s ChatGPT or Google’s Gemini, which leads some to believe it’s playing catch-up. This perception, while understandable given the constant stream of news about competitors’ AI achievements, doesn’t tell the whole story. Apple’s AI focus is more about integrated, behind-the-scenes capabilities rather than high-profile AI models.

One podcast host from All Future summarized this misunderstanding: “Apple might not have released a public AI chatbot like ChatGPT, but they are far from behind. Apple’s AI capabilities are more deeply embedded into their devices and services than most realize.”

Apple’s strength in AI lies in how it enhances the user experience without making a grand announcement. While Siri, Apple’s AI assistant, may not be as lauded as Google Assistant or Alexa, the company has continued to refine its underlying AI architecture, focusing on privacy and efficiency.

A Strategic Partnership with OpenAI

What many may not know is that Apple is a strategic partner with OpenAI, the very company leading AI conversations with its ChatGPT models. This partnership, largely under the radar, is already reshaping Apple’s future in AI. With the latest version of OpenAI’s ChatGPT, which has been described as “smarter than any other AI we’ve seen before,” Apple is expected to integrate this powerful technology into its ecosystem.

Matt, a tech analyst on All Future’s podcast, highlighted the importance of this partnership: “Apple’s choice to partner with OpenAI shows they’re not behind—they’re choosing to work with the best.” He added that the latest version of ChatGPT, specifically optimized for more complex tasks such as science, reasoning, and coding, will significantly elevate Apple’s AI capabilities when integrated into devices like the iPhone.

Unlike companies that are chasing headline-worthy AI announcements, Apple is playing a long game. Rather than rushing to release standalone AI apps, Apple is focused on embedding these advancements into its hardware and services in ways that enhance user experience without compromising its values around privacy and security.

Balancing Privacy and Power

One of Apple’s key differentiators in the AI race is its commitment to privacy. While companies like Google and Meta have built AI models using vast amounts of user data, Apple has always emphasized user control and on-device processing. As All Future noted, “Apple isn’t just about being first—they’re about getting it right.”

This emphasis on privacy is crucial in today’s landscape, where data security is increasingly top of mind for users. Apple has been working on integrating AI capabilities in a way that minimizes the amount of personal data sent to the cloud for processing. Siri, for instance, leverages on-device learning, which allows it to become smarter over time without compromising user privacy by storing excessive amounts of data on Apple’s servers.

This approach contrasts sharply with Google’s cloud-based AI models, which process vast amounts of data online. Apple’s method not only safeguards user privacy but also lays the foundation for AI capabilities that seamlessly integrate into everyday experiences without flashy marketing campaigns.

Apple’s Unique Approach to AI Integration

Another area where Apple is innovating is in how AI is integrated across its product ecosystem. Unlike its competitors, Apple doesn’t develop AI in isolation from its hardware and software. As part of its strategic collaboration with OpenAI, Apple is incorporating generative AI models in ways that enhance user interactions, whether through better predictive text, improved image recognition, or more intuitive responses from Siri.

One reason Apple’s approach might be underestimated is that its AI advancements are subtle, often working in the background to improve user experiences. Matt from All Future observed: “While companies like Google and OpenAI focus on grand AI unveilings, Apple is integrating AI in a more seamless and user-friendly way.”

For example, Apple’s Neural Engine, built into its A-series chips, plays a significant role in enhancing AI-driven tasks like photo editing, language processing, and augmented reality (AR) interactions. These tasks are so seamlessly integrated that users may not even realize the advanced AI systems at work behind the scenes.

The Long-Term AI Play

While Apple might not yet have a chatbot that rivals ChatGPT or an AI-powered search engine like Google’s Bard, its AI focus is more embedded in delivering experiences that align with its long-term vision. Apple’s work in machine learning and on-device AI is already apparent in features like iPhone’s Face ID, personalized app suggestions, and contextual app behaviors in iOS.

In the coming years, as generative AI continues to evolve, Apple’s strategic position could enable it to leapfrog competitors in key areas, especially as it deepens its partnership with OpenAI. As one tech commentator put it, “Achieving AGI (Artificial General Intelligence) in self-driving is one of the toughest challenges out there. But with Apple’s resources and partnerships, they’re better positioned than anyone realizes.”

The conversation around AGI highlights how Apple’s slow and steady approach might ultimately pay off in domains such as autonomous driving, healthcare, and even entertainment. The tech giant is also pushing AI boundaries in areas that could reshape industries—whether it’s self-driving cars through Project Titan or innovative healthcare solutions via the Apple Watch.

Apple’s AI Strategy Is No Accident

The belief that Apple is behind in AI reflects a misunderstanding of its strategy. Apple has never been a company to chase trends—it waits until the technology matures, ensures privacy and user experience are safeguarded, and then integrates that tech into its seamless ecosystem. With partnerships like the one with OpenAI and its robust privacy-first approach, Apple is positioning itself not just as a player in the AI race but potentially as a leader.

As Matt from All Future succinctly put it: “Don’t count Apple out. They may not be first to market, but when they release their AI capabilities, they’re going to surprise everyone with how far ahead they actually are.”

With an approach that balances privacy, innovation, and integration, Apple’s AI future looks bright, even if it’s not always visible on the surface. As more AI features roll out in Apple’s devices, it will become clear that the company is not trailing behind—it’s setting the stage for the next wave of AI-driven experiences.

Google Hosting a Gemini at Work Digital Event
https://www.webpronews.com/google-hosting-a-gemini-at-work-digital-event/ (Thu, 19 Sep 2024)

Google is hosting a Gemini at Work digital event, aimed at helping customers tap into the power of Gemini AI in the workplace.

The event is scheduled for Tuesday, September 24 at 9am PT and will feature Google Cloud CEO Thomas Kurian as the keynote speaker.

Google Cloud CEO Thomas Kurian will kick off the event with a keynote highlighting how AI is reshaping business across the globe. He’ll be followed by in-depth explorations of AI’s influence on specific domains, including customer engagement and code development, with insights from companies such as Box on integrating Gemini for intelligent content management. We’ll also unveil exciting new AI innovations, share best practices for maximizing Gemini’s potential, and demonstrate Gemini in action.

The event will also feature leaders from Bosch, Snap, and Randstad highlighting how generative AI is transforming their industries.

Those interested in attending can register on Google Cloud’s website.

Some of the additional sessions include:

In What’s next for generative AI on Google Cloud, Saurabh Tiwary, VP and general manager, Cloud AI, Google, and Amin Vahdat, VP and general manager, Machine Learning, Systems, and Cloud AI, Google Cloud, will share innovations that make it easier for you to access your preferred models, customize them to your unique needs, and deploy them seamlessly with enterprise-grade controls.

Box is unleashing intelligent content management with Gemini, and Yashodha Bhavnani, VP, product management, AI Products, Box, is joining us to share how they’re building the next generation of intelligent content management solutions.

If you love BigQuery, it’s even better with Gemini. Deepak Dayama, product lead, Gemini in BigQuery, Google Cloud, will share how to boost data analysis with Gemini right within BigQuery, making intelligent recommendations to enhance user productivity and optimize costs.

Google’s New AI Feature is Scary Good: A Revolution in AI-Generated Content
https://www.webpronews.com/googles-new-ai-feature-is-scary-good-a-revolution-in-ai-generated-content/ (Tue, 17 Sep 2024)

Google’s latest artificial intelligence (AI) tool has the tech world abuzz—and for good reason. Integrated into its Notebook LM platform, this new feature allows users to generate full-length, human-like podcasts from documents, PDFs, or web links in just minutes. This leap in AI-driven content creation offers a glimpse into the future of productivity, with profound implications for business, education, and media. The question isn’t just about how this technology will change the way we work—it’s about how prepared we are for the transformation.

“This is one of the most significant developments in AI content creation,” says Wes Roth, a technology analyst who has been closely following AI advancements. “Google’s Notebook LM started as a tool for summarizing and extracting insights from documents. But now, with the ability to generate podcasts that sound like real people having a natural conversation, we’re entering an entirely new era of AI-driven media.”

The audio podcast it creates is similar to a radio talk show.

A New Era for Content Creation

Originally launched as a note-taking and document analysis tool, Notebook LM allowed users to upload documents or links and receive summaries or insights from the AI in text format. The tool has proven invaluable for researchers, executives, and professionals who need to process large amounts of information quickly. However, Google’s latest addition takes that convenience to another level by adding voice-based content generation.

With this new feature, users can upload documents and receive a podcast version that not only reads the content but also analyzes it, with two hosts—one male, one female—discussing the material. The AI voices sound remarkably human, with natural pacing, diction, and tone, making the experience feel less like listening to a machine and more like tuning into a well-produced podcast.

“It’s surreal how human it feels,” Roth noted. “The AI voices have perfect diction and a conversational style that makes it easy to listen to. It’s not just reading; it’s engaging, and that’s what makes it so compelling for professionals.”

For busy executives, this functionality could be a game-changer. Imagine uploading a dense financial report or a white paper on market trends, and within minutes, receiving a podcast that explains the key takeaways in a format you can listen to during your commute. “Time is the most valuable resource for executives,” says Michael Feldman, a venture capitalist focused on technology. “This AI tool could save hours of reading and synthesizing information, making it a must-have for leaders in every industry.”

The Implications for Business and Productivity

The potential applications of Google’s AI tool are far-reaching, especially in corporate settings where time efficiency is paramount. Business leaders are constantly inundated with data, reports, and white papers that require careful review. The ability to quickly convert that information into an easily digestible format is a boon for productivity.

“This could redefine how we consume information in business,” says Erica Simmons, an AI researcher and consultant for Fortune 500 companies. “Imagine sending out AI-generated podcasts to your entire team, summarizing the key points from quarterly reports or market analyses. Instead of long meetings or tedious emails, you can distribute concise, engaging podcasts that everyone can listen to on their own time.”

The flexibility of Notebook LM also means that executives can customize the information they want to focus on. “You can ask the AI specific questions about the document, and it will generate content based on those inquiries,” Simmons adds. “It’s not just a one-size-fits-all approach. The AI can tailor the output to your exact needs, whether that’s a focus on financial data, market trends, or competitor analysis.”

For industries that rely on up-to-the-minute information—such as finance, technology, and healthcare—the ability to rapidly digest and distribute complex data will be a major competitive advantage. “Speed is everything in today’s market,” says Feldman. “If you can stay ahead of the curve by processing information faster, you’ll be in a better position to make strategic decisions.”

Education and Professional Development: A Game Changer

Beyond business, the new AI feature has enormous potential in education and professional development. Professors, trainers, and students can use the tool to create engaging learning content, distilling long readings or research papers into manageable audio segments. This not only makes learning more accessible but also more personalized.

“In academia, we’re always looking for ways to make dense materials more digestible,” says Dr. Lisa Hayes, a professor at MIT specializing in human-AI interaction. “This tool could revolutionize how we approach learning, especially for professionals who are juggling work and education. Instead of reading a 50-page report, you could get a 10-minute podcast that hits all the major points and allows you to absorb the material while driving or exercising.”

Hayes sees this tool as particularly useful for executive education programs, where time is at a premium. “For executives pursuing further education, this could be a significant asset. Imagine getting an AI-generated podcast after every class that summarizes key lessons or offers insights from the assigned readings. It’s not just about efficiency—it’s about reinforcing learning in a way that aligns with busy schedules.”

Moreover, the potential for AI-generated podcasts in corporate training programs is immense. Companies could use the tool to convert training manuals, onboarding materials, or compliance guides into engaging audio formats that employees can listen to at their convenience. “This could change how companies approach training and development,” says Simmons. “It’s about meeting employees where they are, and making learning more accessible, more engaging, and more efficient.”

The Ethical Questions: AI Content in a Post-Human World?

While the benefits are clear, Google’s AI development also raises important ethical considerations. As AI-generated content becomes more indistinguishable from human-created media, questions about authenticity, accuracy, and the role of human oversight come to the forefront. One concern is the potential for AI to spread misinformation, particularly if the AI’s ability to “hallucinate” incorrect information goes unchecked.

“Hallucinations are still an issue,” Roth admits. “While Google’s AI is incredibly advanced, it’s not infallible. It can misinterpret data or present it in a way that’s misleading. For businesses, this could have serious implications if critical decisions are made based on AI-generated content.”

The accuracy of AI-generated content is particularly important in industries like law, healthcare, and finance, where even small mistakes can have significant consequences. “There needs to be a layer of human oversight,” says Simmons. “The AI is a tool—it’s not a replacement for human judgment. Executives need to be mindful of the limitations and ensure that they’re verifying the information, especially when making high-stakes decisions.”

Another concern is the impact on jobs, particularly in industries like media, marketing, and content creation, where AI tools could potentially replace human workers. “We’re entering an era where AI will not just assist but could eventually replace certain roles,” says Feldman. “Content creators, podcast hosts, and even educators might find themselves competing with machines that can produce high-quality content faster and cheaper. This raises critical questions about the future of work.”

The Future of AI-Driven Media and Communication

Despite the ethical concerns, one thing is clear: Google’s Notebook LM represents a major step forward in AI-driven content creation, and its impact will be felt across industries. From streamlining business processes to transforming education, this tool has the potential to reshape how we consume and create media.

“It’s not just about efficiency—it’s about changing the way we interact with information,” says Hayes. “For professionals, executives, and educators, this tool could become an essential part of their daily workflow.”

As AI continues to evolve, the possibilities for personalized, AI-generated content are endless. Future iterations of Google’s Notebook LM could allow for even more customization, with options to adjust tone, style, and even the personalities of the AI-generated hosts. “We’re only scratching the surface of what’s possible,” says Simmons. “Imagine being able to upload multiple sources—news articles, reports, financial data—and have the AI synthesize it all into a comprehensive podcast that gives you a full view of an issue in minutes. That’s the future we’re looking at.”

A New Era of Productivity and Media

Google’s new AI feature is a clear example of how technology can dramatically enhance productivity and efficiency in the professional world. Whether used by executives to streamline decision-making, educators to enhance learning, or businesses to transform corporate communication, the potential is vast. However, with this power comes responsibility. Professionals must remain vigilant in ensuring the accuracy of AI-generated content, and ethical considerations about the impact on jobs and information integrity must not be ignored.

“The technology is undeniably powerful,” says Roth, “but it’s up to us to use it wisely. AI can augment our abilities, but we need to maintain control and judgment to ensure it serves us, not the other way around.”

As Google continues to push the boundaries of what AI can achieve, the line between human and machine-created content will blur further. The future of work, learning, and media is undoubtedly being shaped by tools like Notebook LM—the only question is how quickly the rest of the world will catch up.

Salesforce Bets Big With Agentforce, Its ‘Third Wave of AI’
https://www.webpronews.com/salesforce-bets-big-with-agentforce-its-third-wave-of-ai/ (Mon, 16 Sep 2024)

Salesforce is betting big on its new Agentforce, saying it “represents the Third Wave of AI” and “is what AI was meant to be.”

Salesforce has been working to establish itself as the provider of safe AI solutions that companies can use throughout their workflows to drive insights and inform decision-making. The company’s latest tool, Agentforce, is “a groundbreaking suite of autonomous AI agents” designed to help companies scale their workforces more effectively.

Agentforce’s limitless digital workforce of AI agents can analyze data, make decisions, and take action on tasks like answering customer service inquiries, qualifying sales leads, and optimizing marketing campaigns. With Agentforce, any organization can easily build, customize, and deploy their own agents for any use case across any industry.

“Agentforce represents the Third Wave of AI—advancing beyond copilots to a new era of highly accurate, low-hallucination intelligent agents that actively drive customer success. Unlike other platforms, Agentforce is a revolutionary and trusted solution that seamlessly integrates AI across every workflow, embedding itself deeply into the heart of the customer journey. This means anticipating needs, strengthening relationships, driving growth, and taking proactive action at every touchpoint,” said Marc Benioff, Chair and CEO, Salesforce. “While others require you to DIY your AI, Agentforce offers a fully tailored, enterprise-ready platform designed for immediate impact and scalability. With advanced security features, compliance with industry standards, and unmatched flexibility. Our vision is bold: to empower one billion agents with Agentforce by the end of 2025. This is what AI is meant to be.”

According to the company, what sets Agentforce apart from the previous generation of “copilots and chatbots” is autonomy. Whereas the previous generation of AI tools relied on human requests, and sometimes struggled with complex tasks, Agentforce is designed to analyze data and come up with solutions without human intervention or prompting.

Salesforce says Agentforce will help free employees from completing repetitive, low-impact work, enabling them to focus on more productive tasks.

An estimated 41% of employee time is spent on repetitive, low-impact work, and 65% of desk workers believe generative AI will allow them to be more strategic, according to the Salesforce Trends in AI Report. Every company has more jobs to be done than the resources available to do them. As a result, many jobs go unaddressed or uncompleted. Agentforce provides relief to overstretched teams with its ability to scale capacity on demand so humans can focus on higher-touch, higher-value, and more strategic outcomes. The future of work is a hybrid workforce composed of humans with agents, enabling companies to compete in an ever-changing world.

According to the company, some of its industry-leading customers are already relying on Agentforce.

“As we advance our personalization strategy, we believe Agentforce and its AI-powered capabilities have the potential to make a real impact on our approach to customer engagement, raising the bar in luxury retail. Agentforce will improve our effectiveness across customer touchpoints, empowering our employees and augmenting their ability to deliver the elevated and more individualized shopping experiences for which Saks is known.” – Mike Hite, Chief Technology Officer, Saks Global

“Piloting Agentforce has made a noticeable difference during one of our busiest periods — back-to-school season. It’s been exciting to go live with our first agent thanks to the no-code builder, and we’ve seen a more than 40% increase in case resolution, outperforming our old bot. Agentforce helps to manage routine responsibilities and free up our service teams for more complex cases.” – Kevin Quigley, Senior Manager, Continuous Improvement, Wiley

“Every interaction that restaurants and diners have with our support team must be accurate, fast, and reflective of the hospitality that restaurants show their guests. Agentforce has incredible potential to help us deliver that high touch attentiveness and support while significantly freeing up our team to address more complex needs.” – George Pokorny, SVP Customer Success, OpenTable

Nevada Taps Google’s AI to Process Unemployment Claims
https://www.webpronews.com/nevada-taps-googles-ai-to-process-unemployment-claims/ (Mon, 16 Sep 2024)

Nevada plans to take a novel approach to addressing its backlog of unemployment claims, tapping Google’s AI to help it work through them.

According to Gizmodo, Nevada and Google will deploy the first generative AI system designed to help a state process unemployment claims. Needless to say, the endeavor comes with significant risk, and could be one of the most important tests of generative AI systems to date.

The outlet says the AI will help reduce the time it takes to write a determination to a mere five minutes, instead of the several hours it currently takes. The AI will analyze data, including transcripts and documents, before making a recommendation regarding whether a claim should be granted.

Nevada officials are quick to point out that no claim will be decided by the AI, but that it will merely be used to process data and make a recommendation.

“There’s no AI [written decisions] that are going out without having human interaction and that human review,” said Christopher Sewell, director of the Nevada Department of Employment, Training, and Rehabilitation (DETR). “We can get decisions out quicker so that it actually helps the claimant.”

Despite the assurance, not everyone is convinced the system will be safe enough.

“The time savings they’re looking for only happens if the review is very cursory,” Morgan Shah, director of community engagement for Nevada Legal Services, told Gizmodo. “If someone is reviewing something thoroughly and properly, they’re really not saving that much time. At what point are you creating an environment where people are sort of being encouraged to take a shortcut?”

Companies and organizations have struggled to find use cases for AI that justify the high price associated with developing AI models. If Google is able to deliver what Nevada needs—without significant issues—it could open up a whole new market for generative AI firms.

AI Companies Commit to Combating Sexual Abuse Deepfakes
https://www.webpronews.com/ai-companies-commit-to-combating-sexual-abuse-deepfakes/ (Fri, 13 Sep 2024)

The White House has announced voluntary commitments from leading AI firms, with the companies agreeing to combat AI-generated deepfakes.

Deepfakes were a concern surrounding AI long before OpenAI made the technology accessible to everyday users. As AI-powered image generators have grown in popularity, abuse of the technology has inevitably followed.

The White House has been working with AI firms to try to establish safeguards designed to protect people from deepfakes, especially in the context of sexual abuse. According to the administration, Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI have all made varying commitments.

  • Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI commit to responsibly sourcing their datasets and safeguarding them from image-based sexual abuse.
  • Adobe, Anthropic, Cohere, Microsoft, and OpenAI commit to incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse.
  • Adobe, Anthropic, Cohere, Microsoft, and OpenAI, when appropriate and depending on the purpose of the model, commit to removing nude images from AI training datasets.

The White House also highlighted additional measures various companies have taken to combat the problem.

  • Cash App and Square are curbing payment services for companies producing, soliciting, or publishing image-based sexual abuse, including through additional investments into resources, systems, and partnerships to detect and mitigate payments for image-based sexual abuse.
  • Cash App and Square commit to expanding participation in industry groups and initiatives that support signal sharing to detect sextortion and other forms of known image-based sexual abuse to help detection and limit payment services.
  • Google continues to take actions across its platforms to address image-based sexual abuse, including updates in July to its search engine to further combat non-consensual intimate images.
  • GitHub, a Microsoft company, has updated its policies to prohibit the sharing of software tools that are designed for, encourage, promote, support, or suggest in any way the use of synthetic or manipulated media for the creation of non-consensual intimate imagery.
  • Microsoft is partnering with StopNCII.org to pilot efforts to detect and delist duplicates of survivor-reported non-consensual intimate imagery in Bing’s search results; developing new public service announcements to promote trusted, authoritative resources about image-based sexual abuse for victims and survivors; and continuing to demote low-quality content across its search engine. (A sketch of the hash-matching idea behind this kind of detection follows the list.)
  • Meta continues to prohibit the promotion of applications or services to generate image-based sexual abuse on its platforms, has incorporated solutions like StopNCII and TakeItDown directly into its reporting systems, and announced it had removed around 63,000 Instagram accounts that were attempting to engage in financial sextortion scams in July. Meta also recently expanded its existing partnership with the Tech Coalition to include sharing signals about sextortion activity via the Lantern program, helping to disrupt this criminal activity across the wider internet.
  • Snap Inc. commits to strengthening reporting processes and promoting resources for survivors of image-based sexual abuse through in-app tools and via their websites.
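StopNCII is publicly described as working from perceptual hashes of survivor-submitted images, so platforms can match reported content without ever holding the originals. As a rough illustration of that idea only, not Microsoft’s or StopNCII’s actual code, here is a toy difference-hash matcher in Python; it assumes Pillow is installed, and "reported.jpg" and "upload.jpg" are hypothetical local files.

```python
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Toy difference hash: compare adjacent pixels on a small grayscale grid."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            bits = (bits << 1) | (px[row * (size + 1) + col] > px[row * (size + 1) + col + 1])
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# The platform stores only hashes of reported images, never the images
# themselves, and flags uploads whose hash lands within a small Hamming distance.
reported_hashes = {dhash("reported.jpg")}   # hypothetical survivor report
flagged = any(hamming(dhash("upload.jpg"), h) <= 5 for h in reported_hashes)
print("flag for review:", flagged)
```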

The announcement shows the effort companies and lawmakers are putting into the safe deployment of AI, as well as the challenges involved in accomplishing it.

OpenAI Says It Now Has One Million Paid Business Users https://www.webpronews.com/openai-says-it-now-has-one-million-paid-business-users/ Wed, 11 Sep 2024 17:52:52 +0000 https://www.webpronews.com/?p=607882 OpenAI’s path toward profitability appears to be headed in the right direction, with the company reporting that it has one million paid business users.

Turning a profit is one of the largest challenges facing generative AI firms, as investors pour billions into a technology that has yet to have its defining, “can’t live without it” moment. Despite the challenges, Bloomberg reports OpenAI is touting the fact it now has one million business users paying for ChatGPT.

As the outlet points out, this is a major uptick from the 600,000 paying users OpenAI reported in April, no doubt boosted by OpenAI targeting the enterprise with privacy controls designed to help companies use generative AI without risking corporate secrets.

Despite crossing the one million user milestone, OpenAI is reportedly still exploring additional ways to monetize ChatGPT. It is even looking at charging as much as $2,000 for a ChatGPT subscription, although it is doubtful it will ultimately charge quite that much.

The company’s upcoming “Strawberry” model could help drive even further growth, with the new model specifically designed for enterprise use cases.

Meta’s AI Scrapes Data From Millions of Australian Users—With No Opt-Out Option https://www.webpronews.com/metas-ai-scrapes-data-from-millions-of-australian-users-with-no-opt-out-option/ Wed, 11 Sep 2024 14:24:56 +0000 https://www.webpronews.com/?p=607840 Meta, the parent company of Facebook and Instagram, is once again facing scrutiny, this time over its practice of using Australian users’ data to train its AI algorithms without offering an opt-out option. During a parliamentary inquiry into AI adoption in Australia, Meta’s global privacy director, Melinda Claybaugh, confirmed that the company has been collecting and using the public posts, photos, and comments of Australian users since 2007 to build its AI systems. This revelation has triggered a wave of concern over privacy rights and corporate transparency.

No Opt-Out for Australians

In Europe, Meta provides users with the ability to opt out of having their data used for AI training, a result of strict privacy laws like the General Data Protection Regulation (GDPR). However, Australian users do not enjoy the same protections. When asked why Australians were not afforded this option, Claybaugh cited the legal landscape, saying, “In Europe, there is an ongoing legal question around the interpretation of existing privacy law with respect to AI training.” She further admitted that, while European users could control how their data was used, Australians were left with no such mechanism.

David Shoebridge, a Greens senator in Australia, did not mince words. “The truth of the matter is that unless you have consciously set those posts to private since 2007, Meta has scraped all of the photos and texts from every public post on Instagram or Facebook since then. That’s the reality, isn’t it?” To this, Claybaugh conceded, “Correct.”

The Data Collection Controversy

Meta’s admission has sparked a fierce debate over privacy rights and the responsibilities of tech giants. Senator Tony Sheldon questioned whether the company had used Australian posts from as far back as 2007 to feed its AI products. Initially, Claybaugh denied the claim, but when pressed by Shoebridge, she confirmed that the public posts of Australians were indeed being used for AI training.

This data collection process, referred to by some as “scraping,” involves using publicly available content to train algorithms that power AI products like Meta’s generative AI tools. While it’s legal for Meta to use content uploaded to its platforms, the lack of transparency and the absence of user consent have raised ethical questions.

“The government’s failure to act on privacy means companies like Meta are continuing to monetize and exploit pictures and videos of children on Facebook,” Shoebridge said, highlighting how even photos of children posted by parents on public accounts were included in the data collection. This adds another layer of complexity to the debate, as it touches on sensitive issues around children’s privacy.

Legal and Ethical Implications

Unlike its European counterparts, Australia has not enacted similarly robust privacy laws. Meta’s willingness to provide opt-out options in Europe but not elsewhere illustrates how regulatory environments shape corporate behavior. “Meta made it clear today that if Australia had these same laws, Australians’ data would also have been protected,” Shoebridge remarked. This sentiment underscores the urgency for Australia to revisit its privacy laws, especially as AI becomes increasingly embedded in everyday life.

Meta, on its part, defends its actions by pointing to the global need for data to develop effective AI tools. Claybaugh noted that AI models require vast amounts of data to function effectively, and that this data helps build more powerful and less biased AI systems. She argued that training AI with this kind of large dataset allows Meta to create “more flexible and powerful” tools.

Does Using Anonymous Data for AI Training Hurt Privacy?

At the heart of the debate surrounding Meta’s use of Australian Facebook and Instagram data for AI training is the question: Does the anonymous use of public data infringe on users’ privacy? While concerns over privacy violations are valid, it’s essential to clarify what is actually happening behind the scenes with AI training.

Facebook is not technically “scraping” in the sense of extracting external data from the web, as companies like Google do for search engines. Rather, it is incorporating data from its own platform—data users have willingly uploaded into its ecosystem. As noted by privacy experts, “Meta is using its own database legally.” Unlike traditional scraping methods that gather and unveil personal data from various corners of the internet, Meta is working within its own framework, meaning it does not disclose individual posts or images but uses them to enhance AI models anonymously. The primary goal is to help algorithms understand how people communicate and what images represent without exposing or revealing personal information.

This approach raises an important distinction: using data anonymously to train AI models is not a direct privacy violation. Facebook’s use of this data, when properly anonymized, doesn’t reveal individual user identities. “How is training an algorithm a privacy violation? The answer: It isn’t,” one expert noted. The AI isn’t learning who posted a particular picture or what a specific individual wrote; instead, it’s learning patterns of communication, sentiment, and image composition. This means that while the dataset includes millions of posts, the AI is learning broadly from collective behaviors, not specific ones.
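To make that concrete, here is a toy sketch of what “learning patterns, not people” can look like as a pre-processing step, with direct identifiers scrubbed before posts reach a training corpus. This is purely illustrative and assumes nothing about Meta’s actual pipeline; real de-identification is far harder than a few regular expressions.

```python
import re

# Toy scrub-before-training step. Illustrative only; real de-identification
# must handle names, faces, locations, and far more than these patterns.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<EMAIL>"),
    (re.compile(r"@\w+"), "<HANDLE>"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "<PHONE>"),
]

def scrub(post: str) -> str:
    for pattern, token in PATTERNS:
        post = pattern.sub(token, post)
    return post

public_posts = ["Great day at Bondi! DM me: jane.doe@example.com or @janedoe"]
corpus = [scrub(p) for p in public_posts]
print(corpus[0])  # Great day at Bondi! DM me: <EMAIL> or <HANDLE>
```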

It’s worth pointing out that other platforms like Google also leverage publicly available data for similar purposes. Our public content is constantly being “scraped” and indexed by search engines, but few view this as a privacy breach. Facebook’s data usage follows the same path, drawing on its own resources to build its AI tools.

Critics argue that regardless of anonymization, users should have the choice to opt out. As Senator David Shoebridge stated, “People feel as if their inherent rights have been taken off them.” Yet, without demonstrable harm or the public exposure of personal information, it’s hard to argue that this practice is a genuine privacy violation. The real issue, many experts assert, is transparency and consent. Should users be more informed, or have more control over how their data is used in these vast AI learning systems?

Ultimately, the impact of using anonymized data for AI training on privacy is minimal, especially when compared to actual data leaks or misuse of personal information. The lack of an opt-out option in Australia does spark debate, but it doesn’t necessarily equate to a breach of personal privacy. As one privacy advocate remarked, “No harm, no foul. End of story.” The onus now lies on regulators and companies like Meta to better inform and empower users, while balancing innovation with respect for privacy.

Growing Pressure for Privacy Reform

Meta’s handling of user data is likely to intensify calls for legislative reform in Australia. The government is expected to announce long-awaited amendments to the Privacy Act, which has been deemed outdated in light of recent technological advancements. Attorney-General Mark Dreyfus had promised to introduce these reforms in August 2024, but as of September, they remain under wraps.

For critics like Shoebridge, the lack of regulatory oversight in Australia has created a permissive environment for tech giants to collect and utilize user data without sufficient accountability. “There’s a reason that people’s privacy is protected in Europe and not in Australia,” he said. “It’s because European lawmakers made tough privacy laws.”

What’s Next for Meta and Australian Users?

The admission that Australians have no option to opt out of their data being used for AI training leaves open the question of whether Meta will face regulatory action in the country. While Meta has paused launching its AI products in Europe due to the legal uncertainty, no such delay has occurred in Australia.

Many industry experts see this as a watershed moment for Australia’s tech and privacy landscape. Adam Barty, a managing director at the digital consultancy Revium, highlighted how Australia’s current regulatory framework lags behind other countries. “If you are in Australia, you can’t opt out…unless you manually go through and make all your content private, albeit there is no guarantee that will work as there is no transparency on when the data scrape has, or will, happen,” Barty stated.

As the inquiry into AI adoption continues, the Australian public and lawmakers are likely to press for more stringent privacy protections, potentially forcing Meta and other tech companies to reconsider their data practices in the region.

Meta’s use of Australian Facebook and Instagram data to train its AI models, without offering an opt-out option, has ignited a national debate over privacy rights. As lawmakers grapple with how to regulate AI in a rapidly evolving technological landscape, Australians are left in a legal limbo, lacking the protections their European counterparts enjoy. With privacy reforms on the horizon, the question remains: Will Australia follow Europe’s lead in defending citizens’ data rights, or will tech companies like Meta continue to operate with minimal oversight?

Apple Intelligence to Begin Rolling Out Next Month https://www.webpronews.com/apple-intelligence-to-begin-rolling-out-next-month/ Mon, 09 Sep 2024 23:12:15 +0000 https://www.webpronews.com/?p=607764 Apple kicked off the “It’s Glowtime” event, launching new hardware and providing a definitive update on its Apple Intelligence plans.

Apple Intelligence is the company’s “personal intelligence system that combines the power of generative models with personal context.” Since Apple first demoed it, Apple Intelligence has provided one of the clearest demonstrations of the day-to-day value of generative AI for the average user.

Reports had surfaced as early as late July that Apple Intelligence would debut with iOS 18.1, not 18.0. Monday’s event provided a concrete timeline for when users can expect to get their hands on the tech.

Today, Apple announced that Apple Intelligence, the personal intelligence system that combines the power of generative models with personal context to deliver intelligence that is incredibly useful and relevant, will start rolling out next month with iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1, with more features launching in the coming months. In addition, Apple introduced the new iPhone 16 lineup, built from the ground up for Apple Intelligence and featuring the faster, more efficient A18 and A18 Pro chips — making these the most advanced and capable iPhone models ever.

Apple Intelligence first launches in U.S. English, and will quickly expand to include localized English in Australia, Canada, New Zealand, South Africa, and the U.K. in December, with additional language support — such as Chinese, French, Japanese, and Spanish — coming next year.

Apple goes on to reiterate the benefits users can expect from Apple Intelligence.

With Writing Tools, users can refine their words by rewriting, proofreading, and summarizing text nearly everywhere they write, including Mail, Notes, Pages, and third-party apps.

In Photos, the Memories feature now enables users to create the movies they want to see by simply typing a description. In addition, natural language can be used to search for specific photos, and search in videos gets more powerful with the ability to find specific moments in clips. The new Clean Up tool can identify and remove distracting objects in the background of a photo — without accidentally altering the subject.

In the Notes and Phone apps, users can record, transcribe, and summarize audio. When a recording is initiated while on a call in the Phone app, participants are automatically notified, and once the call ends, Apple Intelligence also generates a summary to help recall key points.

The company emphasizes its privacy-first approach, with many of the models running locally on-device.

Apple Intelligence is deeply integrated into iOS 18, iPadOS 18, and macOS Sequoia, harnessing the power of Apple silicon to understand and create language and images, take action across apps, and draw from personal context to simplify and accelerate everyday tasks — all while protecting users’ privacy and security. Many of the models that power Apple Intelligence run entirely on device, and Private Cloud Compute offers the ability to flex and scale computational capacity between on-device processing and larger, server-based models that run on dedicated Apple silicon servers.
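The pattern Apple describes is on-device-first routing: run the small local model when it can handle the request, and escalate to server-based models only when a larger model is needed. A minimal sketch of that routing logic follows; every function name is hypothetical, and the attestation step is a placeholder for what Apple describes as cryptographic verification of Private Cloud Compute servers.

```python
from dataclasses import dataclass

@dataclass
class Request:
    prompt: str
    needs_large_model: bool  # e.g. long context or complex generation

def attest_server() -> bool:
    # Placeholder for the attestation handshake; refuse unverified servers.
    return True

def run_on_device(req: Request) -> str:
    return f"[on-device] handled: {req.prompt!r}"   # data never leaves the phone

def run_private_cloud(req: Request) -> str:
    assert attest_server(), "server failed attestation"
    return f"[private-cloud] handled: {req.prompt!r}"

def handle(req: Request) -> str:
    return run_private_cloud(req) if req.needs_large_model else run_on_device(req)

print(handle(Request("Summarize my meeting notes", needs_large_model=False)))
```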

AI’s Content Grab: Are Companies Crossing the Line with Copyrighted Material? https://www.webpronews.com/ais-content-grab-are-companies-crossing-the-line-with-copyrighted-material/ Sat, 07 Sep 2024 08:53:28 +0000 https://www.webpronews.com/?p=607631 Artificial intelligence (AI) has rapidly become one of the most transformative technologies of the 21st century, reshaping industries from healthcare to entertainment. But behind the excitement lies a growing controversy over how AI companies are acquiring and using content to train their models. Specifically, many are using copyrighted material without permission, raising legal and ethical questions about whether this practice can be considered “fair use.” As AI-generated content floods the market, stakeholders—from artists to tech companies—are debating the implications of this practice and what it means for creators, companies, and the future of intellectual property.

The Unfolding Crisis: AI Training on Copyrighted Content

AI’s dependence on vast datasets to learn how to perform tasks like generating text, images, and videos has sparked concerns over how companies are acquiring that data. For instance, the viral AI video startup Viggle recently admitted to training its models on YouTube videos without explicit permission. Viggle is not alone. Major players such as NVIDIA and Anthropic are facing similar accusations.

https://twitter.com/ViggleAI/status/1832114003562394013

“YouTube’s CEO has called it a ‘clear violation’ of their terms,” explains Mike Kaput, Chief Content Officer at the Marketing AI Institute. “Yet most AI companies are doing it, betting on a simple strategy: Take copyrighted content, hope nobody notices, and if you succeed, hire lawyers.” This has become a common approach in the rapidly developing AI sector, as companies rush to build more powerful models, often without securing proper licenses.

The underlying issue is the use of copyrighted material—often created by individual content creators or large media companies—without any compensation or acknowledgment. In Kaput’s view, this strategy banks on the public’s indifference: “Most people see cool AI videos and think: ‘Wow, that’s amazing!’ They don’t ask: ‘Wait, how was this trained?’”

Is This Fair Use or a Copyright Violation?

The heart of the debate lies in how copyright law defines “fair use,” a legal doctrine that allows limited use of copyrighted material without permission, usually for purposes such as criticism, comment, news reporting, teaching, or research. But does AI training fall under this category?


“It hinges on a key distinction in copyright law: whether a work is transformative or derivative,” says Christopher Penn, Co-Founder and Chief Data Scientist at TrustInsights.ai. He explains that if AI-generated content is seen as transformative—meaning it adds new expression or meaning to the original work—it may be protected under fair use. However, if it is deemed derivative, merely replicating the original content, it could violate copyright laws.

“In the EU, regulators have said using copyrighted data for training without permission infringes on the copyright owner’s rights,” Penn continues. “In Japan and China, regulators have taken the opposite stance, saying the model is in no way the original work, and thus does not infringe.”

This leads to a critical question: Is the legal responsibility on the tool (the AI itself) or the user who generates content with it? “Only resolved court cases will tell,” Penn concludes.

The Public’s Indifference: Do People Care?

While the legal community is wrestling with these issues, the broader public seems largely disengaged from the debate. Justin C., co-founder of Neesh.AI, suggests that the average person is indifferent to AI’s data practices. “Most people feel like it’s out of their control,” he says. “They aren’t paying attention because it doesn’t directly affect them.” This lack of awareness means that AI companies have little fear of public backlash, as long as they continue delivering impressive products.

Similarly, Paul Guds, an AI management consultant, believes that the momentum behind AI development is too strong to stop. “The gains for the public outweigh the potential costs,” he argues. “Regulation on this matter will take years, and litigation will be costly and lengthy. In the end, this train cannot be stopped, worst case, it will be slowed down slightly.”

However, some believe this complacency could come with significant costs. “It feels a lot like Uber when it started,” says Melissa Kolbe, an AI and marketing strategist. “Just worry about the consequences later. The public doesn’t really care—unless it’s their own video.”

The Artistic Backlash: Protecting Creativity

While many in the tech community view AI as a tool for innovation, artists and creators feel differently. For them, the unchecked use of their work for AI training represents a threat to their livelihoods and the integrity of creative expression.

“The only people that really care about this are genuine artists,” says Jim Woolfe, an electronic musician. “The problem is that it’s become harder to tell the difference between real and generated content, and true creativity is in danger of being drowned out by bland, AI-generated art.” Woolfe predicts a backlash as more artists realize the scope of what’s at stake.

Others agree that AI could erode the value of original content. “It’s already harder to make a living as a creator,” says Reggie Johnson, a communication strategist. “Now, Big Tech companies are using copyrighted content to train AI without permission, and the government seems to be letting them get away with it.” Johnson points to the recent rejection of the Internet Archive’s appeal, a case that has sparked debate about whether AI companies are playing by a different set of rules than other industries.

Legal Implications: Can Copyright Law Keep Up?

The rapid pace of AI innovation is exposing gaps in current copyright laws. “Laws around copyright are already out of date,” says Doug V., a digital strategist. “With AI using content without permission or attribution, it’s a very complicated knot to unravel.” He anticipates that companies will begin inserting clauses into their terms and conditions, effectively requiring users to waive rights to their content for AI training purposes. “What artist will willingly upload their creations to social media if they’re effectively giving it all away for others to make derivatives of their work?” Doug asks.

This concern is echoed by Elizabeth Shaw, an AI strategy director, who suggests that the issue may soon become a hot topic in AI policy discussions. “Are we teasing an upcoming panel on this at MAICON?” she asks, referencing the Marketing Artificial Intelligence Conference.

The Future of AI and Copyright: What Comes Next?

As AI continues to evolve, the questions surrounding the use of copyrighted material will become more pressing. Some predict that regulation is inevitable, but it will take years to catch up. “I don’t think there’s a way to stop it,” Kaput admits. “Pandora’s box is already open.”

However, others believe the issue will come to a head sooner rather than later. “I predict a movement will rise that values real art over AI-generated content,” says Woolfe. “Once people realize what’s at stake, there will be a backlash.”

For now, the debate over whether AI companies can freely use copyrighted content for training remains unresolved. As courts begin to take on these cases, the line between fair use and infringement will continue to blur, leaving creators, companies, and lawmakers to grapple with the implications of AI’s rapid advancement.

In the meantime, it’s clear that AI is not just a technological innovation—it’s a legal and ethical minefield. As Guds puts it, “We’re trending toward falling off the slippery slope. The question is: how do we stop it?”

Prepare for $2,000 ChatGPT Subscriptions https://www.webpronews.com/2000-chatgpt-subscriptions/ Fri, 06 Sep 2024 00:01:09 +0000 https://www.webpronews.com/?p=607565 ChatGPT fans may be in for a rude awakening, with OpenAI reportedly investigating subscription options that could be as high as $2,000.

According to the Financial Times, OpenAI executives are trying to find the subscription sweet spot, one where the company can make money off its AI models yet still drive subscriber growth with a price point customers will accept. Unfortunately, FT reports that $2,000 subscription fees are being discussed, although nothing has been decided.

OpenAI’s subscription dilemma is indicative of the challenges the AI industry is facing in general. Financial firms and investors have increasingly been sounding the alarm over the high price tag that comes with generative AI development.

In fact, the high cost has been cited as one of the reasons AI could be the tech industry’s latest bubble, rather than a transformative tech that’s here to stay. Jim Covello, Goldman Sachs Head of Global Equity Research, compared AI to earlier tech revolutions, saying its high cost limits its ability to have the same impact.

Many people attempt to compare AI today to the early days of the internet. But even in its infancy, the internet was a low-cost technology solution that enabled e-commerce to replace costly incumbent solutions. Amazon could sell books at a lower cost than Barnes & Noble because it didn’t have to maintain costly brick-and-mortar locations. Fast forward three decades, and Web 2.0 is still providing cheaper solutions that are disrupting more expensive solutions, such as Uber displacing limousine services. While the question of whether AI technology will ever deliver on the promise many people are excited about today is certainly debatable, the less debatable point is that AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.

While it’s hard to imagine that OpenAI will go with a $2,000 subscription, the fact that it is even discussing such a high price underscores the growing pressure OpenAI—and the AI industry in general—is under to start recouping the massive investments that have been made.

Apple Reportedly Considering An Investment In OpenAI https://www.webpronews.com/apple-reportedly-considering-an-investment-in-openai/ Fri, 30 Aug 2024 00:04:33 +0000 https://www.webpronews.com/?p=607057 Apple is reportedly making a rare move, investigating a potential investment in OpenAI after announcing a deal to include ChatGPT in its products.

Apple took the opportunity to announce its Apple Intelligence features at WWDC 2024. Apple Intelligence is based on Apple’s own intelligence models, but the company also tapped OpenAI’s ChatGPT for more advanced functions. The decision to partner with OpenAI was seen as a major win for the AI firm, especially since Apple turned its back on discussions with Meta and only promised possible integration with Google’s Gemini or Anthropic’s Claude at some future date.

According to The Wall Street Journal, Apple is in talks to join a round of investment in OpenAI that is being led by Thrive Capital and also includes Microsoft. The round is reportedly worth several billion dollars and would leave OpenAI with at least a $100 billion valuation.

As WSJ points out, Apple rarely invests in startups, making the reports all the more interesting. Like many companies in the tech industry, Apple clearly sees generative AI as a critical feature, one important enough to justify breaking with its historical pattern.

Grok-2: Revolutionizing AI or Just More of the Same? A Deep Dive into the Latest Large Language Model https://www.webpronews.com/grok-2-revolutionizing-ai-or-just-more-of-the-same-a-deep-dive-into-the-latest-large-language-modelthe-rapid-evolution-of-artificial-intelligence-ai-has-brought-about-a-constant-influx-of-new-mode/ Fri, 23 Aug 2024 23:40:08 +0000 https://www.webpronews.com/?p=606769

The rapid evolution of artificial intelligence (AI) has brought about a constant influx of new models and technologies, each promising to push the boundaries of what machines can achieve. Among these developments, Grok-2, the latest large language model (LLM) from xAI, stands out as both a potential game-changer and a source of controversy. Unlike its predecessors, Grok-2 arrived with little fanfare—no accompanying research paper, no detailed model card, and no formal academic endorsement. This mysterious launch has fueled a mixture of excitement and skepticism within the AI community, raising important questions about the future direction of AI development.

The Silent Debut of Grok-2

In the world of AI, new models are typically introduced with extensive documentation, including research papers that detail the architecture, training methods, and benchmarks of the model. Grok-2, however, broke from this tradition. It was released quietly, with only a basic Twitter chatbot available for public interaction. This lack of transparency has left many AI researchers puzzled and concerned. As one AI researcher put it, “It’s unusual, almost unheard of, to release a model of this scale without any academic backing or explanation. It raises questions about the model’s capabilities and the motivations behind its release.”

Despite the unconventional launch, Grok-2 has already demonstrated impressive capabilities. Early tests have shown that it performs well on several key benchmarks, including the Google-Proof Q&A (GPQA) science benchmark and MMLU-Pro, where it ranks second only to Claude 3.5 Sonnet. These results suggest that Grok-2 has the potential to compete with the best LLMs currently available. However, the absence of detailed performance metrics and the opaque nature of its release have led to a mix of curiosity and skepticism.

One commenter on the ‘AI Explained’ YouTube channel encapsulated the general sentiment: “No paper? Just a table with benchmarks. What are the performance claims for Grok-2 really based on? Benchmarks have been repeatedly proven meaningless by this point.”

The Scaling Debate: Is Bigger Always Better?

A central topic in the ongoing AI discourse is the concept of scaling—essentially, the idea that increasing the size of a model, in terms of parameters and training data, will lead to better performance. This debate has been reignited by Grok-2 and a recent paper from Epoch AI, which suggests that by 2030, AI models could be scaled up by a factor of 10,000. Such a leap could potentially revolutionize the field, but it also raises significant questions about the path forward.

The Epoch AI paper posits that scaling to such an extent could fundamentally change how models interact with data, allowing them to develop more sophisticated internal models of the world. This idea, known as the development of “world models,” suggests that as LLMs grow, they might begin to understand the world in ways that are more akin to human cognition. This could enable breakthroughs in AI’s ability to reason, plan, and interact with humans on a deeper level.

However, not everyone in the AI community is convinced that scaling alone is the answer. “We’ve seen time and time again that more data and more parameters don’t automatically lead to more intelligent or useful models,” argues one AI critic. “What we need is better data, better training techniques, and more transparency in how these models are built and evaluated.”

This skepticism is echoed by many within the AI community. A user on the ‘AI Explained’ channel commented, “Does anybody really believe that scaling alone will push transformer-based ML up and over the final ridge before the arrival at the mythical summit that is AGI?” This sentiment reflects a broader concern that scaling might not address the fundamental limitations of current AI models.

Testing the Limits: Grok-2’s Early Performance

Given the lack of official documentation, independent AI enthusiasts and researchers have taken it upon themselves to test Grok-2’s capabilities. One such effort is the Simple Bench project, an independent benchmark designed to test the reasoning and problem-solving abilities of LLMs. The creator of Simple Bench, who runs the popular ‘AI Explained’ YouTube channel, has shared preliminary results from testing Grok-2. “Grok-2’s performance was pretty good, mostly in line with the other top models on traditional benchmarks. But it’s not just about scores—it’s about how these models handle more complex, real-world tasks,” he explained.

Simple Bench focuses on tasks that require a model to understand and navigate cause-and-effect relationships, which are often overlooked by traditional benchmarks. While Grok-2 performed well on many tasks, it still fell short in areas where Claude 3.5 Sonnet excelled. This discrepancy highlights a key issue in AI development: the challenge of creating models that not only excel in controlled environments but also perform reliably in the unpredictable real world.

One commenter, reflecting on the importance of benchmarks like Simple Bench, stated, “What I like about Simple Bench is that it’s ball-busting. Too many of the recent benchmarks start off at 75-80% on the current models. A bench that last year got 80% and now gets 90% is not as interesting anymore for these kind of bleeding edge discussions on progress.” This comment underscores the need for benchmarks that challenge models to perform beyond what is easily achievable, pushing the boundaries of AI capabilities.
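For readers unfamiliar with how such benchmarks are scored, the basic shape is simple: pose questions with verifiable answers, compare the model’s output to the expected answer, and report accuracy. The sketch below is not Simple Bench itself (its items are private); the questions are invented, and ask_model is a stand-in for any real LLM API call.

```python
# Minimal eval-harness shape: question, expected answer, exact-match scoring.
CASES = [
    ("A glass sits on a table. The table is flipped upside down. "
     "Is the glass on the table or on the floor? One-word answer: table/floor.",
     "floor"),
    ("Ann hides a coin in a box and leaves. Bob moves it to a drawer. "
     "Where does Ann look first? One-word answer: box/drawer.",
     "box"),
]

def ask_model(prompt: str) -> str:
    # Stand-in for a real model client (e.g., an HTTP call to an LLM API).
    return "floor"

def run_eval() -> float:
    hits = sum(ask_model(q).strip().lower() == a for q, a in CASES)
    return hits / len(CASES)

print(f"accuracy: {run_eval():.0%}")   # 50% with the canned stand-in answer
```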

The Future of AI: More Than Just Bigger Models?

As the AI community grapples with the implications of Grok-2 and the broader trend of scaling models, some researchers are exploring alternative paths to advancement. One promising area is the development of models that can create and utilize internal world models. These models would go beyond surface-level pattern recognition, instead developing a deeper understanding of the world’s underlying rules and structures.

Recent experiments have shown that LLMs are beginning to develop these kinds of models, albeit in rudimentary forms. A study referenced in the Simple Bench project found that after training on large datasets, a language model was able to infer hidden relationships and predict outcomes based on incomplete information. “It’s a small step, but it’s a sign that these models are starting to move beyond simple data processing and into something more complex,” said a researcher involved in the study.

However, the path to truly intelligent AI—often referred to as Artificial General Intelligence (AGI)—is still fraught with challenges. Some experts believe that current architectures, like those used in Grok-2, may not be enough to achieve AGI, no matter how much they are scaled. Instead, they argue that a new approach, possibly involving more sophisticated data labeling techniques or even a fundamental shift in how AI models are trained, may be necessary.

One viewer of the ‘AI Explained’ channel argued that companies cannot be counted on to police themselves, pointing to voice cloning in particular. “We need deepfake regulation asap. We can’t count on the startup to do basic, literally basic safeguards, especially with voice cloning. Pretty straightforward to do live voice comparisons via embeddings to validate if it’s your voice. Inexpensive. Without being told too. These companies don’t care about the damage,” they noted, highlighting the ethical challenges that accompany the current trajectory of AI development.

The Ethical Implications: Real-Time Deepfakes and Beyond

As AI models like Grok-2 become more advanced, they also pose new ethical challenges. One of the most pressing concerns is the potential for these models to generate highly convincing deepfakes in real time. Already, tools like Grok-2’s image-generating sibling, Flux, and other AI platforms like Ideogram 2 are capable of creating realistic images and videos. As one AI enthusiast noted, “We’re not far from a world where you won’t be able to trust anything you see online. The line between reality and fabrication is blurring at an alarming rate.”

The potential for misuse is enormous, from spreading misinformation to manipulating public opinion. The possibility of real-time deepfakes could lead to a world where visual and auditory evidence becomes entirely unreliable. As one commenter on the ‘AI Explained’ channel observed, “We are mindlessly hurtling towards a world of noise where nothing can be trusted or makes any sense.” This dystopian vision highlights the urgent need for regulatory frameworks and technological solutions to address the risks posed by AI-generated content.

Some experts are calling for stricter regulations and the development of new technologies to help detect and counteract deepfakes. Demis Hassabis, CEO of Google DeepMind, recently pointed out, “We need to be proactive in addressing these issues. The technology is advancing quickly, and if we’re not careful, it could outpace our ability to control it.”

In response to these concerns, researchers are exploring new methods to verify the authenticity of digital content. One promising approach is the use of zero-knowledge proofs, a cryptographic technique that allows for the verification of information without revealing the information itself. This could potentially be used to create “personhood credentials” that verify the identity of individuals in digital spaces. As one viewer commented, “I have been yelling about zero knowledge proofs for years. They are absolutely required for the next phase of humanity, without exception.”
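To demystify the idea: a zero-knowledge proof lets a prover convince a verifier that they know a secret (say, the private key behind a credential) without transmitting the secret itself. The sketch below is a toy Schnorr-style proof made non-interactive with the Fiat-Shamir heuristic. The group parameters are deliberately tiny and insecure; real systems use large elliptic-curve groups, and actual personhood-credential schemes layer far more machinery on top. This only illustrates the core mechanism.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge of x in y = g^x mod p (Fiat-Shamir variant).
# INSECURE demo parameters: safe prime p = 2q + 1 with q = 11; g = 4 generates
# the order-q subgroup. Real deployments use ~256-bit elliptic-curve groups.
p, q, g = 23, 11, 4

def challenge(y: int, t: int) -> int:
    return int(hashlib.sha256(f"{g}|{y}|{t}".encode()).hexdigest(), 16) % q

def prove(x: int) -> tuple[int, int, int]:
    y = pow(g, x, p)                    # public value derived from the secret
    r = secrets.randbelow(q)            # one-time nonce
    t = pow(g, r, p)                    # commitment
    s = (r + challenge(y, t) * x) % q   # response; reveals nothing about x alone
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    # g^s == t * y^c holds iff the prover knew x, since g^(r+cx) = g^r * (g^x)^c.
    return pow(g, s, p) == (t * pow(y, challenge(y, t), p)) % p

secret = 7                        # known only to the prover
print(verify(*prove(secret)))     # True, and the secret never crossed the wire
```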

A Turning Point or Just Another Model?

The debate over Grok-2’s significance is far from settled. Some see it as a step toward a new era of AI-driven innovation, while others view it as just another model in an increasingly crowded field, marked by incremental improvements rather than groundbreaking advancements. As one skeptic on the ‘AI Explained’ channel remarked, “How can we really judge the importance of Grok-2 when there’s no transparency about how it works or what it’s truly capable of? Without that, it’s just another black box.”

Despite these reservations, the release of Grok-2 is undeniably a moment of interest, if not a turning point, in the AI landscape. The model’s capabilities—demonstrated through early benchmark performance—suggest it could play a significant role in shaping future applications of AI. However, this potential is tempered by the ongoing challenges in AI development, particularly around issues of ethics, transparency, and the limits of scaling.

Moreover, the ethical implications of models like Grok-2 cannot be overstated. As AI continues to advance, the line between reality and digital fabrication is becoming increasingly blurred, raising concerns about trust and authenticity in the digital age. The potential for real-time deepfakes, coupled with the model’s capabilities, presents both opportunities and risks that society must grapple with sooner rather than later.

Ultimately, Grok-2’s legacy will depend on how these challenges are addressed. Will the AI community find ways to harness the power of large language models while ensuring they are used responsibly? Or will Grok-2 and its successors become symbols of an era where technological advancement outpaced our ability to manage its consequences?

As we stand at this crossroads, the future of AI remains uncertain. What is clear, however, is that the development of models like Grok-2 is only the beginning. Whether it will lead us into a new era of AI-driven innovation or become just another step in the long journey toward truly intelligent machines is a question that only time—and continued research—will answer.

In the words of one AI enthusiast, “We are at the brink of something monumental, but whether it’s a breakthrough or just another iteration depends on how we proceed from here.” The journey of AI, it seems, is far from over, and Grok-2 might just be one of the many signposts along the way.

Google Assistant is Old News; Move Over to Gemini Live: The New Face of Conversational AI https://www.webpronews.com/google-assistant-is-old-news-move-over-to-gemini-live-the-new-face-of-conversational-ai/ Fri, 23 Aug 2024 23:21:18 +0000 https://www.webpronews.com/?p=606766 In a world where digital assistants have become as ubiquitous as smartphones, Google has once again upped the ante with its latest innovation, Gemini Live. Launched with much fanfare at the recent “Made by Google” event, Gemini Live promises to revolutionize the way we interact with our devices, offering a conversational experience that feels almost human. But does it live up to the hype, or is it just another tech gimmick? Let’s dive deep into what Gemini Live brings to the table and explore its potential impact on the future of AI-powered personal assistants.

The Rise of Gemini Live

When Google Assistant was first introduced, it was hailed as a groundbreaking innovation. It could set timers, control smart home devices, and provide weather updates with just a simple voice command. However, as technology advanced, the expectations for digital assistants grew, and the limitations of Google Assistant became increasingly apparent. Enter Gemini Live, Google’s latest attempt to stay ahead of the curve in the rapidly evolving world of artificial intelligence.

Sissie Hsiao, Vice President and General Manager of Gemini Experiences and Google Assistant, highlighted the need for a more natural and intuitive AI interaction during the launch event. “With Gemini, we’re reimagining what it means for a personal assistant to be truly helpful,” Hsiao said. “Gemini is evolving to provide AI-powered mobile assistance that offers a new level of help, all while being more natural, conversational, and intuitive.”

What is Gemini Live?

At its core, Gemini Live is a mobile conversational experience that allows users to have free-flowing, natural conversations with their AI assistant. Unlike traditional digital assistants that require specific voice commands, Gemini Live can engage in continuous dialogue without needing to be reactivated after every question. This feature alone sets it apart from its predecessors and competitors, offering a more seamless and human-like interaction.

One of the most striking aspects of Gemini Live is its ability to understand context and provide detailed, thoughtful responses. For instance, when asked about a recent Liverpool football match, Gemini Live not only provided the score but also gave an analysis of the game’s performance. This level of depth and understanding is something that previous digital assistants have struggled to achieve.

A Crash Course on Using Gemini Live

For those new to Gemini Live, getting started is surprisingly simple. The feature comes pre-installed on the Google Pixel 9 and is also available on other devices like the Samsung Galaxy S24 Ultra and Pixel 8 Pro. To activate Gemini Live, users simply need to engage Google Assistant and select the Gemini Live option from the bottom right of the screen. From there, users are prompted to choose from 10 different voices, each with its unique tone and style.

What sets Gemini Live apart is its ability to continue conversations even when users navigate away from the Gemini Live interface. This means you can carry on a conversation with your AI assistant while using other apps on your phone, making it a truly integrated part of your mobile experience.

The Good, the Bad, and the Limitations

While Gemini Live offers a host of impressive features, it’s not without its limitations. One of the most significant drawbacks is that it requires an internet connection to function. Unlike some AI models that can perform tasks offline, Gemini Live operates entirely in the cloud. This reliance on the cloud means that if you’re without an internet connection, Gemini Live won’t be able to assist you.

Additionally, Gemini Live currently lacks access to some of the more personal features that users have come to expect from digital assistants. For example, it cannot access your calendar, emails, or messages, and it doesn’t have the ability to send text messages or make calls. As one early user pointed out, “Gemini Live is great for general information and casual conversation, but when it comes to personal tasks, it’s still playing catch-up.”

Despite these limitations, Gemini Live excels in areas where traditional digital assistants have often fallen short. Its ability to translate languages instantly and provide responses in different languages makes it a valuable tool for global users. Moreover, Gemini Live remembers past conversations, allowing users to pick up where they left off days or even weeks later. This continuity of conversation is a significant leap forward in making AI interactions feel more natural and less transactional.

Gemini Live vs. Competitors: A New Standard?

The launch of Gemini Live comes at a time when AI-powered voice assistants are becoming increasingly sophisticated. OpenAI’s ChatGPT, Apple’s Siri, and Amazon’s Alexa have all made strides in improving their conversational capabilities, but Gemini Live is setting a new standard with its real-time interaction and low latency. According to users, the seamlessness of Gemini Live’s responses makes it feel more like a conversation with a friend rather than a machine.

One user described their experience with Gemini Live as “wild,” noting how impressively the AI handled complex queries and adjusted on the fly. “It’s not just about giving you generic answers; Gemini Live seems to understand the nuance of what you’re asking and responds accordingly,” they said.

However, not everyone is convinced that Gemini Live is ready to dethrone its competitors just yet. In a recent review, tech analyst Richard Priday highlighted some of the challenges he faced during his 24-hour trial with the AI. “While Gemini Live’s conversational abilities are impressive, there were moments when it struggled with accuracy, particularly when providing directions or current news updates,” Priday noted. “It feels like a step in the right direction, but there’s still work to be done.”

The Future of Gemini Live and AI Assistants

As AI continues to evolve, the role of digital assistants like Gemini Live is likely to expand. Google is already working on deeper integrations with its suite of apps, including Gmail, YouTube, and Google Maps, which could make Gemini Live an even more indispensable tool for everyday tasks. The potential for Gemini Live to be integrated into Google’s smart home devices, such as Nest speakers, also opens up new possibilities for voice-activated control of home environments.

But perhaps the most exciting aspect of Gemini Live is its potential to redefine what we expect from AI assistants. By focusing on creating a more conversational, context-aware experience, Google is pushing the boundaries of how we interact with technology. As Hsiao aptly put it, “We’re in the early days of discovering all the ways an AI-powered assistant can be helpful, and Gemini Live is just the beginning.”

Final Thoughts

In its current form, Gemini Live is an impressive step forward in the world of AI-powered digital assistants. Its natural conversational abilities and seamless integration into the mobile experience make it a compelling option for users looking for more than just a basic voice command assistant. However, there are still areas where Gemini Live needs to improve, particularly in its ability to handle personal tasks and provide accurate real-time information.

As AI technology continues to advance, it’s likely that we’ll see even more sophisticated iterations of Gemini Live in the future. For now, though, it’s clear that Google is on the right track in its quest to create a truly helpful personal assistant. Whether you’re a tech enthusiast or just someone looking for a more natural way to interact with your phone, Gemini Live is worth exploring.

And as for the competition? They’d better watch out—because Gemini Live is here, and it’s not just an assistant; it’s a game-changer.
