They’ll happily burn mountains of profits on that stuff, but not on decent wages or health insurance.
Some of them won’t even pay to replace broken office chairs for the employees they forced to RTO.
Wages and health insurance are a known cost with a known return. At some point the curve flattens and each extra dollar you put in returns less and less. That means there's a sweet spot, but most companies don't even want to invest enough to reach it.
AI however, is the next new thing. It’s gonna be big, huge! There’s no telling how much profit there is to be made!
Because nobody has calculated any profits yet. Services seem to run at a loss so far.
However, everybody and their grandmother is into it, so lots of companies feel the pressure to do something with it. They fear they will no longer be relevant if they don’t.
And since nobody knows how much money there is to be made, every company is betting that it will be a lot. Wages and insurance are a known cost/investment with a known return; AI is not, but companies are betting its return will be much bigger.
I'm curious how it will go. Either the bubble bursts, or companies slowly start to realise what is happening and shift their focus to the next thing. In the latter case, we may eventually see some genuinely useful AI develop.
It’s a game to them that doesn’t take into consideration any human element.
It’s like the sociopathic villains in Trading Places betting a dollar on whether or not Valentine would succeed. They don’t really give a shit. It’s all for the game that might result in throwing more money on their pile.
Once again we see the Parasite Class playing unethically with the labour/wealth they have stolen from their employees.
Thank god they have their metaverse investments to fall back on. And their NFTs. And their crypto. What do you mean the tech industry has been nothing but scams for a decade?
Suppose many of these CEOs are just milking venture capital. They know it's a bubble and that it'll burst, but they have a good enough way of predicting when, so they can leave with a profit. Then again, CEO pay is usually not tied to the company's performance, so they don't even need to know.
Also suppose that some very good source of free/cheap computation is being used for the initial hype. As a conspiracy theory: a backdoor in the most popular TCP/IP implementations that makes all of the Internet's major routers work as a VM running some limited bytecode, for anyone who knows about the backdoor and controls two machines talking to each other over the Internet and directly.
Then the blockchain bubble and the AI bubble would be similar in relying on such computation (convenient for anything slow in latency but endlessly parallel), and those inflating the bubbles, knowing of the backdoor, wouldn't risk anything and would clear the field of plenty of competition with each iteration, making fortunes via hedge funds. They would spend very little on mining the initial batch of bitcoins (what if Satoshi were actually Bill Joy or someone like that, who could in theory have planted such a backdoor) and on training the initial, superficially impressive LLMs.
And then this perpetual process of bubble after bubble makes some group of people (narrow enough, if they can keep the secret my conspiracy theory rests on) richer and richer, quickly enough on a planetary scale to gradually own a bigger and bigger share of the world economy (indirectly, of course), while regularly clearing the field of clueless normies.
Just a conspiracy theory, don't treat it too seriously. But if it were true, it would be both cartoonishly evil and cinematically epic.
Honestly I think another part is that AI is actually pretty fascinating (or at least easy to make seem fascinating to investors lol) so when company A makes a flashy statement to investors involving AI, company B’s investors ask why company B isn’t utilizing this amazing new technology. This plays into that aspect of not wanting to get left behind.
Yes, people grew up with the subconscious feeling that the cautionary tales of old science fiction are the way to real power. A bit like ex-Soviet people being subconsciously attracted to German Nazi symbolism.
Evil is usually shown as strong, and strength is what we need IRL, to make a successful business, to fix a decaying nation, to give a depressed society something to be enthusiastic about.
They think there should be some future that looks, eh, futuristic.
The most futuristic things are those that look and function in a practical way and change people's lives for the better. We had the brilliance and fun of 90s and early 00s computing; then it got worse. So they have to promise something.
BTW, in architecture, brutalism is coming back into fashion (in discussion, not in actual construction); perhaps we'll see a similar movement for computing at some point, towards simplification and egalitarianism.
Tech CEOs really should be replaced with AI, since they all behave like the seagulls from Finding Nemo and just follow the trends set out by whatever bs Elon starts
If only there was some group of people with detailed knowledge of the company, who would be informed enough to steer its direction wisely. /s
If I pinged my CEO over Slack and got back “You’re absolutely right! Let me try that again” I might actually die from crying with joy.
I've started using AI at my CTO's request. ChatGPT business licence. My experience so far: it gives me working results really quickly, but the devil lies in the details. It takes so much time fine-tuning, debugging and refactoring that I'm not really any faster. The code works, but I would never have implemented it that way if I'd done it myself.
Looking forward to the hype dying, so I can pick up real software engineering again.
There are still employers bitching about how no one wants to work anymore. I doubt any lessons will be learned here.
It makes sense for someone like me who is not a dev but works with code at times; I don't get enough practice to be quick with it.
- Your code will be significantly more insecure. Expect anything exposed to world+dog to be hacked far quicker than your own work.
- You will code even slower than if you just did the work yourself.
- You will fail to grow as a coder, and will even see your existing skills erode.
Yea
Vibe coding is for us amateurs, who want the occasional hello world.
I use it for programming Home Assistant, since I just can't get my head around the YAML.
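For a sense of scale, a minimal automation of the kind I mean looks something like this (light.hallway is just a placeholder entity ID):

```yaml
# A minimal Home Assistant automation sketch.
# light.hallway is a placeholder entity ID.
automation:
  - alias: "Hallway light on at sunset"
    trigger:
      - platform: sun
        event: sunset
    action:
      - service: light.turn_on
        target:
          entity_id: light.hallway
```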
My experience with AI so far is that I waste more time fine-tuning my prompt to get what I want, and I still end up with obvious issues I have to fix manually. The only way I even know about those issues is prior experience, which I'll stop gaining if I start depending on AI too much. On top of that, it creates unrealistic expectations from employers on execution time. It's the worst thing that has happened to the tech industry. I hate my career now and just want to switch to some boring but stable low-paying job, if only I didn't have to worry about months of job hunting.
Similar experience here. I recently took the official Google “prompting essentials” course. I kept an open mind and modest expectations; this is a tool that’s here to stay. Best to just approach it as the next Microsoft Word and see how it can add practical value.
The biggest thing I learned is that getting quality outputs will require at least a paragraph-long, thoughtful prompt and 15 minutes of iteration. If I can DIY in less than 30 minutes, the LLM is probably not worth the trouble.
I'm still trying to find use cases (I don't code), but it often just feels like a solution in search of a problem…
Sounds like we all just want to retire as goat farmers. Just like before. The more things change… as they say.
I’ll take no shit for $500, Alex.
With how much got wasted on AI, that $500 might not be there anymore. Would you take $5?
As expected. Wait until they have to pay copyright royalties for the content they stole to train on.
I hardly post on any social media besides here and I still feel violated.
5% is Nvidia.
There are not enough 💯 emoji in the world for this post.
💯
Does this mean they'll invest the money in paying workers? No… they'll just have to double down.
I would argue we have seen returns. Documentation is easier. Tools for PDF and Markdown have increased in efficacy. Coding alone has lowered the barrier, bringing building blocks and some understanding to the masses. If we could hitch this to trusted and solid LLM data, it would make a lot of things easier for many people. Translation is another example.
I find it very hard to believe 95% got ZERO benefit. We're still benefiting, and it's forcing a lot of change in the real world. For example: more power use? That's pushing more renewable energy, and even (yes, safe) nuclear is expanding. Energy storage is next.
These ‘AI’ (broadly used) tools will also get better and improve the interface between physical and digital. This will become ubiquitous, and we’ll forget we couldn’t just ‘talk’ to computers so easily.
I'll end with this: I can't say whether 'AI' is an overblown, overused, overutilized buzzword everywhere these days, and I can't say about bubbles and shit either. But what I see is a lot of smart people making LLMs and related technologies more efficient and more powerful, and that is trickling into many areas of software. It's easier to review code, to participate, etc. Literal papers are published constantly about new, better, more efficient ways of doing things.
Documentation is easier. Tools for PDF and Markdown have increased in efficacy. Coding alone has lowered the barrier, bringing building blocks and some understanding to the masses.
I have seen none of these, in practice.
The documentation generated is no better than what a level 1 support rep creates, and it needs heavy fixing before it can be relied on.
Pandoc still produces PDFs, Markdown, etc just as quickly as it always has.
The code produced has the same issues as the documentation: it's shite, and not easily bug-fixed, because nobody understands what it's actually doing. And if you need someone who already understands the code to bug-fix it, guess what? You didn't save anyone anything.
And all of this while using terawatt-hours more electricity than before, for equivalent or worse outcomes.
OCR was more what I was thinking of, not Pandoc. LLMs enable OCR to achieve greater accuracy through context enhancement, for example.
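As a rough sketch of the kind of pipeline I mean: run the raw OCR pass first, then have an LLM fix recognition errors from context. pytesseract is a real OCR wrapper; correct_with_llm is a hypothetical stand-in for whatever model you'd actually call.

```python
# Sketch: raw OCR pass, then LLM cleanup using surrounding context.
import pytesseract
from PIL import Image

def correct_with_llm(prompt: str) -> str:
    # Hypothetical stand-in: wire up your LLM client of choice here.
    raise NotImplementedError("plug in an LLM client")

def ocr_with_cleanup(image_path: str) -> str:
    raw = pytesseract.image_to_string(Image.open(image_path))
    prompt = (
        "This text came from OCR and may contain recognition errors "
        "(e.g. 'rn' misread as 'm', '0' as 'O'). Fix obvious mistakes "
        "using the surrounding context, and change nothing else:\n\n"
        + raw
    )
    return correct_with_llm(prompt)
```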
That sounds like one of those rare appropriate use cases.
Documentation is easier.
For the love of all things good and pure, do not use LLMs to make your documentation.
Well written response. There is an undeniable huge improvement to LLMs over the last few years, and that already has many applications in day to day life, workplace and whatnot.
From writing complicated Excel formulas, to proofreading, to providing me with quick, straightforward recipes based on what I have at hand, I'm already sold on AI assistants.
That being said, take a good look at the type of responses here (an open source space with barely any shills or astroturfers, or so I'd like to believe) and compare them to the myriad of Reddit posts that questioned the same thing on subs like r/singularity and whatnot. It's anecdotal evidence of course, but the amount of BS answers saying "AI IS GONNA DOMINATE SOON", "NEXT YEAR NOBODY WILL HAVE A JOB", "THIS IS THE FUTURE" etc. is staggering. From doomsayers to people who are paid to disseminate this type of shit, this is one of the main things that leads me to think we are in a bubble. The same thing happened, and is still happening, to crypto over the last 10 years: too much money inserted by billionaire whales into a specific subject, and within years they are able to convince the general population that EVERYBODY and their mother is missing out if they don't start using "X".
providing me with quick, straightforward recipes based on what I have at hand,
Ah yes, the wonderful recipes AI generates. Like Pizza made with glue!
https://www.businessinsider.com/google-ai-glue-pizza-i-tried-it-2024-5
You know what else generates quick, straightforward recipes based on what I have on hand?
My brain. I open the fridge and the freezer, then decide what to make. Usually takes less than a minute to figure something out.
Not sure I'm following the sarcasm. I made it very clear I think AI's purpose is hyperinflated and that it's a bubble; I was just saying it's not completely useless.
It does give a LOT of false information, but for simple stuff it saves time; that I will not deny.
Oh, it's not completely useless. If you need something that generates gibberish that sounds real for ad copy, I'm sure it's fine for that.
And while it may save me, a higher-level person, some time producing a document, the cost of production (the electricity and other compute resources required, which need their own people to maintain) outstrips the time saved, when I could have handed the job to a level 1 support person, since I still need to review it and correct it for accuracy either way.
Excel still struggles with correct formula suggestions: basic #REF! errors when the cells above and below in the table work just fine, and the ever-present "this data is a formula error" warning when there is no longer a formula anywhere in the column.
And search, just like its predecessor the Google algorithm, gives you useless suggestions if anything remotely fashionable shares the scientific name too.
“Ruh-roh, Raggy!”
It’s okay. All the people that you laid off to replace with AI are only going to charge 3x their previous rate to fix your arrogant fuck up so it shouldn’t be too bad!
Computer science currently being the degree with the highest unemployment leads me to believe this will actually suppress wages for some time.
That was always one of the main goals. They’d rather light a mountain of cash on fire than give anyone a thriving wage
I charge them more than I would if I were just developing for them from scratch. I USED to actually build things, but now I make more money doing code reviews, telling them where they fucked up with the AI, and then fixing it with my now-small team.
AI and Vibe coders have made me great money to the point where I’ve now hired 2 other developers who were unemployed for a long time due to being laid off from companies leveraging AI slop.
Don’t get me wrong, I’d love for the bubble to burst (and it will VERY soon, if it hasn’t already) and I know that after it does I can retire and hope that the two people I’ve brought on will quickly find better employment.
But it’s okay, because MY company is AHEAD OF THE CURVE on those 95% losses
How bad do you think this collapse is gonna be? Are we gonna see a big name collapse into dust, or something akin to the Great Depression?
The AI bubble is going to be like the dot com bubble I think, but with the world being so heavily financialized it might spiral into something like 2008 or worse…
It won’t be like the dot com bubble. The AI bubble is far more corporate investment with far fewer entities having money thrown at them.
That’s true yeah, there is a lot less retail investment in those companies.
What is similar to the dot com bubble, though, is that many "smaller" companies (i.e. not Google or Meta) are buying into AI as an infrastructure investment for their company, just like what was happening with useless websites during the dot com bubble.
Yeah, most individuals don’t have money to invest in techbros’ latest boondoggle.
Which means we’ll be paying for their bailouts instead.
We’ll see the beginning of a crash in about a year and the crash probably won’t end for 7-10 years.
We're looking at a full-scale shift in the way large orgs run their businesses, and it's a shift a lot of them will need to pivot away from once they realize it's not working.