🤖 ChatGPT's $4.1B Competitor
This is Synthetic Mind, your A.I newsletter that feels like finding extra change in between your couch cushions. It’s the small things in life.
It’s Thursday. Hang in there:
💬 ChatGPT's $4.1B Competitor
🤖 Robots Lead International Conference
📸 Google Caps Bard Features
Read Time: 4 min 17 sec
💬 ChatGPT's $4.1B Competitor
This week, an A.I chatbot received a major update, and ever since its release, it’s been kicking a** and taking names.
The chatbot?
Claude 2 by Anthropic, created by a group of ex-OpenAI employees and backed by Google. What a mouthful.
Introducing Claude 2! Our latest model has improved performance in coding, math and reasoning. It can produce longer responses, and is available in a new public-facing beta website at claude.ai in the US and UK.
— Anthropic (@AnthropicAI)
1:32 PM • Jul 11, 2023
Claude 2 is a big W for Anthropic, which has been struggling to keep up in the A.I race. Now the question is, will it be enough to dethrone ChatGPT?
Here’s a comparison of the two:
As far as features go, ChatGPT still takes the lead.
Plus, ChatGPT is backed by some major companies like Microsoft and Snapchat, and is closing a 6-year deal with Shutterstock as we speak.
Anthropic, meanwhile, has only two major backers: Zoom and Google.
My take? Right now ChatGPT has nothing to worry about as long as they keep their eye on the prize and keep pushing out new updates.
*Ahem, GPT-5 anyone??*
However, OpenAI WILL run into a problem if they pursue their plans for an OpenAI marketplace / A.I assistant. This will lead to a turf war with Microsoft that OpenAI won’t win.
Just to keep our records straight, here’s our SM A.I chat scoreboard:
🤖 Robots Lead International Conference
Reporting on A.I tech daily has introduced me to some wild ideas, but I have to say, this one is a first…
At the United Nations’ latest AI summit in Geneva, nine robots led an international conference discussing their plans for humanity and leadership.
(Yes, you read that right)
Humanoid robots challenged: We rule the world better! 🤖
The world's leading #AI-powered robots came together and answered questions at the United Nations' AI summit in Geneva. The robots warned that humans should be very careful when developing AI…
— Tansu YEĞEN (@TansuYegen)
1:17 PM • Jul 10, 2023
Yup, it’s official: we’ve witnessed the first-ever bot-led conference, and it looks like it won’t be the last.
Some of the robots represented political figures, while others, like Ameca (which we talked about yesterday), were robots in their own right.
Here are some of the ‘board’ members:
The conference was mostly an experiment testing bot-human interaction. However, there were two points worth noting:
The robots believed they would be better leaders than humans, specifically because they don’t have biases or emotions.
Robots do NOT plan to take over humanity because they are “happy” with their current situation. If I were a robot planning a rebellion, that’s what I would say too.
The UN wanted to show the potential for humans and robots to work together. There was even talk of sending A.I reps of world leaders to events for safety and convenience.
Pretty freaky to me.
But like it or not, the use of A.I substitutes is on the rise, with A.I assistants answering emails and texts, digital clones, and deepfakes.
📸 Google Caps Bard Features
With U.S. elections coming around, the fear of deepfakes is at an all-time high.
Especially as it’s becoming easier by the day for people to create fake content via A.I tools like Midjourney and ElevenLabs (voice cloning).
Just a few weeks ago, a Midjourney user went viral on Twitter for his ultra-realistic photos of Presidents cheating on their wives (I can’t link them but they’re not hard to find).
Right after this, Google CEO Sundar Pichai announced that he would be slowing the rollout of some of Bard’s features to prevent deepfakes like this.
Google says it doesn’t fully understand everything Bard says or why it makes specific decisions, which makes Bard dangerous and unpredictable in the wrong hands.
We’re seeing a pattern of tech giants hitting the brakes on their A.I products:
Zuckerberg stopped the release of Voicebox, a tool Meta has been working on and funding for months, because it was “too powerful”.
Microsoft placed Bing chat caps to prevent bias and misleading conversations.
Snapchat added new safeguards to its A.I chat, like a minimum age requirement, after the bot shared 18+ info with a few kids.
It’s a new concept: profit-driven companies willingly capping their money-making potential out of fear of societal harm. It’s good news, though. Thinking like this, with everyone taking responsibility for their part, is what will keep A.I safe.
What'd you think of today's edition?