Yeah, and they are all written in a very similar sort of way. AI?
It’s quickly becoming extremely obvious that AI is not the future that tech bros claimed it was. And I couldn’t be happier about it failing.
How is it failing? I use it extensively to make grind tasks and research way more efficient. It’s honestly sped up my workflow massively and I’m excited for GPT-4.5 next month. Also, it’s already hit its volume limit for paid accounts; that’s hardly failing. Do you actually have real in-depth experience with it, or is this just another comment from someone with an opinion backed by nothing?
AI is honestly the least impressive tech buzzword I’ve seen (aside from cryptocurrency, which, looking at the code, is basically a needlessly overcomplicated linked list. AKA something people learn in Programming 101).
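For anyone who wants that “overcomplicated linked list” jab in concrete terms, here’s a minimal sketch of a hash-linked chain of blocks in Python. It’s purely illustrative: the class and function names are invented for this example, and real cryptocurrencies layer consensus, proof-of-work, and networking on top. But it shows both what the linked-list comparison gets at and what the hashing part adds.

```python
# Toy hash-linked chain of blocks; purely illustrative, all names are made up.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class Block:
    index: int
    data: str
    prev_hash: str

    def digest(self) -> str:
        # Hash the block's own contents; changing anything changes this digest.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    # Each new block stores the digest of the previous one (the "linked list" part).
    prev = chain[-1].digest() if chain else "0" * 64
    chain.append(Block(index=len(chain), data=data, prev_hash=prev))

def is_valid(chain: list) -> bool:
    # Tampering with any earlier block breaks every later prev_hash link.
    return all(chain[i].prev_hash == chain[i - 1].digest() for i in range(1, len(chain)))

chain = []
append_block(chain, "genesis")
append_block(chain, "alice pays bob 5")
append_block(chain, "bob pays carol 2")
print(is_valid(chain))   # True
chain[1].data = "alice pays bob 500"
print(is_valid(chain))   # False: the hash links no longer line up
```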
AI has been around since the 90s and is nothing new. At best, it’s the equivalent of mashing together the top 10 Google results and hoping for the best. Most of the time that creates a jumbled mess of misinformation.
Now, that’s all opinion-based. The factual reasons I believe it’s failing are:
It’s still not profitable due to the insane amount of computing power required to run their bulky software.
OpenAI CEO just got canned and reinstated for reasons we’re not being told. Big sign that something is off behind the scenes and the people in charge know it’s not going to work.
OpenAI has openly stated that they want people to stop using their product because they make more money when less people use it.
There’s no economy around it that benefits anyone but themselves in the long run. This will keep the government motivated to shut them down.
In order for OpenAI to be successful, I’d say they need to start paying people for the content they blatantly steal off the internet. I can see Bard being successful in the long run because they credit their sources and have a better opportunity to create an economy around their AI that benefits the sources they’re getting their information from. (Even then, Bard is just Google with extra steps.) OpenAI, on the other hand, is openly committing plagiarism and believes they can get away with it by constant denial. It might work for them in the short term, but long term they’re fucked, and it’s their own fault for trying to get away with consistently stealing other people’s work for profit. That’s what makes me incredibly happy to see them fail. They deserve to fail and will get what’s coming to them.
TLDR: use Bard.
AI has been around since the 90s
- The term AI has been around since the 90s, but back then it referred to complex algorithms that performed a specific task, not anything considered true AI in any shape or form. It had very little ability to do things within context.
It’s still not profitable due to the insane amount of computing power required to run their bulky software.
- While this is currently true, it’s the same path most tech companies (or major companies of any sort) follow; their launch-to-profitability timeline is usually YEARS. OpenAI went from $28 million in revenue their first year to over a billion in revenue this year. It’s in its infancy.
OpenAI CEO just got canned and reinstated for reasons we’re not being told. Big sign that something is off behind the scenes and the people in charge know it’s not going to work.
- Part of this has to do with OpenAI’s CEO wanting to keep it a non-profit org and the board of directors wanting to turn a profit as soon as possible: a difference in beliefs about the product’s use case, etc., from what I’ve read, which is publicly available. Again, look at Tesla: same thing. They chopped and changed people and went through massive turmoil with a product that wasn’t making any form of profit; most of their cars were sold at an astronomical loss early on while production technologies and systems were refined to bring costs down to within tolerance.
“A year after Musk left, OpenAI created a for-profit arm. Technically, it is what’s known as a “capped profit” entity, which means investors’ possible profits are capped at a certain amount. Any remaining money is re-invested in the company.
Yet the nonprofit’s board and mission still governed the company, creating two competing tribes within OpenAI: adherents to the serve-humanity-and-not-shareholders credo and those who subscribed to the more traditional Silicon Valley modus operandi of using investor money to release consumer products into the world as rapidly as possible in hopes of cornering a market and becoming an industry pacesetter.”
OpenAI has openly stated that they want people to stop using their product because they make more money when less people use it.
- Interesting; can you share a source where OpenAI states it wants people to stop using its product?
There’s no economy around it that benefits anyone but themselves in the long run. This will keep the government motivated to shut them down.
- Known customers of OpenAI include Jane Street (a Wall Street trading firm), Morgan Stanley, Zoom, Wix, Notion, Stripe, Duolingo, Databricks, and others. Other large enterprises such as Ikea, Volvo, and Coca-Cola get access to OpenAI technology through Microsoft Azure.
There is clearly a market for it, and if you follow anything happening in the tech industry you will see there’s been a massive push into AI from tons of companies. Not just for optimizing their offerings, but also for improving their productivity, training, and internal systems. Every company using generic chatbots has already started the shift, and now that voice recognition that understands context is being implemented, phone support systems will most likely be next.
To add to this, reaching the volume limit for paid customers is not a good thing when the product is that generic. It just means there’s a supply and demand issue, and it’s easier for competitors to offer competing products.
- There is a supply-and-demand issue, but in the right direction: far more demand than supply right now. The fact that companies are relying on OpenAI’s backbone to run their systems is what’s causing huge growing pains, which, like all things tech, will be figured out. Companies are already working on custom-trained, locally run systems for this. We are at the very beginning of a new way of doing things across hundreds of industries.
Statements like “Bard is just Google with extra steps” make me think you have very little understanding of the complexity difference and little appreciation for the nuances of context-based data vs. raw data. Either way, succeed or fail, I am excited for the new opportunities and growth options it provides.
Lol, you think it’s failing? It’s only getting started, clearly.