> To me, that seems to be a requirement for the computing industry for a long time.
Sure, but they have a market cap of $5 trillion. That's about 10x AMD's, which also sells similar silicon (and isn't in any distress). It's more than Apple, Google, or Microsoft - and those companies have historically found ways to make more money than the vendors they buy chips from.
The problem isn't that Nvidia lacks good fundamentals or good products; it's that the market is expecting miracles.
In the case of Nvidia, the funny thing is that their high valuations started not with AI, but with cryptocurrencies. They just never really came down - the company coasted from a silly hype cycle to a more substantive one. Ten years ago, NVDA wasn't an interesting stock at all.
Why would you hold a stock if you think it should go down? If you think the stock is valuable but in the near term should go down, why not sell and then buy in increments as it goes down?
Prices are heavily guided by sentiment. Nobody said sentiment HAS to be tied to the entity's fundamentals. GameStop stock moved due to sentiment external to the entity itself.
For whom - for Taylor Swift? The average artist's experience is pretty miserable: it's harder than ever to break through because there's more competition - two or three generations who looked up to rock and pop stars and imagined that this could be a viable career.
One in a thousand talented artists will get lucky, but I suspect the ratio is historically low. Everyone else more or less needs to find another job.
There are other things that probably push artists toward the cultural mean. You're no longer trying to cater to the tastes of a wealthy patron or even a record label executive. Now, you gotta get enough clicks on YouTube first. The surest way to do that is to look nice and do some inoffensive covers of well-known pop songs.
The tension the parent referred to is the concept of "selling out" as a bad thing.
Your comment supports this: while you talk about how it's harder to "break through" or "get lucky" than it used to be, it still presents both of those as good things.
There used to be measures of success for musicians other than financial ones.
I have an issue with the claim that the culture is stagnating. One of the arguments is this:
> fewer and fewer of the artists and franchises own more and more of the market. Before 2000, for instance, only about 25% of top-grossing movies were prequels, sequels, spinoffs, etc. Now it’s 75%.
I think the explanation isn't a decrease in creativity as much as the fact that in the 1980s, there just weren't that many films you could make a sequel of. It's a relatively young industry. There are more films made today because the technology has gotten more accessible. The average film is probably fairly bland, but there are more weird outliers too.
The same goes for "the internet isn't as interesting as it used to be": there's more interesting content than before, but the volume of uninteresting stuff has grown much faster. It's now a commerce platform, not a research thing. But that doesn't mean people aren't using the medium in creative ways.
> If you described all the current capabilities of AI to 100 experts 10 years ago, they’d likely agree that the capabilities constitute AGI.
I think that we're moving the goalposts, but we're moving them for a good reason: we're getting better at understanding the strengths and the weaknesses of the technology, and they're nothing like what we'd have guessed a decade ago.
All of our AI fiction envisioned inventing intelligence from first principles and ending up with systems that are infallible, infinitely resourceful, and capable of self-improvement - but fundamentally inhuman in how they think. Not subject to the same emotions and drives, struggling to see things our way.
Instead, we ended up with tools that basically mimic human reasoning, biases, and feelings with near-perfect fidelity. And they have read and approximately memorized every piece of knowledge we've ever created, but have no clear "knowledge takeoff path" past that point. So we have basement-dwelling turbo-nerds instead of Terminators.
This makes AGI a somewhat meaningless term. AGI in the sense that it can best most humans on knowledge tests? We already have that. AGI in the sense that you can let it loose and have it come up with meaningful things to do in its "life"? That you can give it arms and legs and watch it thrive? That's probably not coming any time soon.
To be honest, I think the main reason why films get predictable as we get older is that we've seen enough of them and it's just hard to be surprised.
I catch myself thinking that even about films / books / games that try real hard to be original. You can't surprise me with a nonlinear time loop. Oh, the protagonist is also a villain but doesn't know it? Pfft, been there, done that.
The thing about winter driving is that it's just inherently a crapshoot. Sometimes, on a nice morning commute, you hit black ice going downhill and that's that. It doesn't matter that you were going slow, you're still gonna slide and hit something.
I doubt the tech will be immune to that. So it comes down to how they manage the fallout from the crashes they end up getting into.
I was driving in January on a warm (at least in the sun) sunny day, and as I went over the top of a large valley and down the other side, I got hit with heavy snow; same with fog. You can't really rely on weather reports either.
Crashing after hitting black ice on a hill is a skill issue. It's like skiing or ice skating: you still have control even though the handling is very different.
Only if you have studded winter tires that are in good condition. Throw in a sprinkling of powder and there's nothing even a professional WRC driver could do.
Another personal favorite is driving on ice with a thin layer of sun-melted water on top, so you can also hydroplane.
It's not really, though, unless you're willing to just lie and redefine "unwilling to move at an absurdly slow speed for the conditions so the pavement can be meticulously inspected" as a skill issue - and even then you won't always be able to spot it.
I don't think the cultural difference you're describing here really exists. Maybe if you mean people from the SF Bay Area who visit Tahoe. If you go to places with real winters, people know about winter / studded tires, will often carry chains, and so on.
100% this. It's laughable how many times Europeans make sweeping generalizations about the US. There are various places in the US where it snows rarely, and yeah, people (including me) are clueless when it happens. And then there are people in Buffalo who are more than capable of handling the snow.
Where I live, we get snow for a few weeks a year, and the discipline around tire choice is still pretty poor. Even here, people rely too much on 4WD/AWD and neglect proper tires.
Some truth, yes. Even where I live, with plenty of winter conditions (though less than the Midwest), there are still lots of poor car and tire choices - 6,000 lb SUVs. Even in the Midwest there are lots of huge vehicles, perhaps with better tires, but still impractical.
I don't like binary takes on this. I think the best question to ask is whether you own the output of your editing process. Why does this article exist? Does it represent your unique perspective? Is this you at your best, trying to share your insights with the world?
If yes, there's probably value in putting it out. I don't care if you used paper and ink, a text editor, a spell checker, or asked an LLM for help.
On the flip side, if anyone could've asked an LLM for the exact same text, and if you're outsourcing the critical thinking to the reader - then yeah, I think you deserve scorn. It's no different from content-farmed SEO spam.
Mind you, I'm what you'd call an old-school content creator. It would be an understatement to say I'm conflicted about gen AI. But I also feel that this is the most principled way to make demands of others: I have no problem getting angry at people for wasting my time or polluting the internet, but I don't think I can get angry at them for producing useful content the wrong way.
Exactly. If it's substantially the writer's own thoughts and/or words, who cares if they collaborated with an LLM, or autocomplete, or a spelling/grammar-checker, or a friend, or a coworker, or someone from Fiverr? This is just looking for arbitrary reasons to be upset.
If it's not substantially their own writing or ideas, then sure, they shouldn't pass it off as such and claim individual authorship. That's a different issue entirely. However, if someone just wanted to share, "I'm 50 prompts deep exploring this niche topic with GPT-5 and learned something interesting; quoted below is a response with sources that I've fact-checked against" or "I posted on /r/AskHistorians and received this fascinating response from /u/jerryseinfeld", I could respect that.
In any case, if someone is posting low-quality content, blame the author, not the tools they happened to use. OOP may as well say they only want to read blog posts written with vim, and that emacs users should stay off the internet.
I just don't see the point in gatekeeping. If someone has something valuable to share, they should feel free to use whatever resources they have available to maximize the value provided. If using AI makes the difference between a rambling draft riddled with grammatical and factual errors, and a more readable and information-dense post at half the length with fewer inaccuracies, use AI.
In my experience, if the AI voice is immediately noticeable, the writing provides nothing new - and most of the time it's actively wrong, or it's trying to make itself seem important and sell me on something the owner has a stake in.
Not sure if this is true for other people but it's basically always a sign of something I end up wishing I hadn't wasted my time reading.
It isn't inherently bad by any means but it turns out it's a useful quality metric in my personal experience.
That was essentially my takeaway. The problem isn't when AI was used. It's when readers can accurately deduce that AI was used. When someone uses AI skillfully, you'll never know unless they tell you.
"but I don't think I can get angry at them for producing useful content the wrong way"
What about plagiarism? If a person hacks together a blog post that is arguably useful but they plagiarized half of it from another person, is that acceptable to you? Is it only acceptable if it's mechanized?
One of the arguments against GenAI is that the output is basically plagiarized from other sources -- that is, of course, oversimplified in the case of GenAI, but hoovering up other people's content and then producing other content based on what was "learned" from that (at scale) is what it does.
The ecological impact of GenAI tools and the practices of GenAI companies (as well as the motives behind those companies) remain the same whether one uses them a lot or a little. If a person has an objection to the ethics of GenAI then they're going to wind up with a "binary take" on it. A deal with the devil is a deal with the devil: "I just dabbled with Satan a little bit" isn't really a consolation for those who are dead-set against GenAI in its current forms.
My take on GenAI is a bit more nuanced than "deal with the devil", but not a lot more. But I also respect that there are folks even more against it than I am, and I'd agree from their perspective that any use is too much.
My personal thoughts on gen AI are complicated. A lot of my public work was vacuumed up for gen AI, and I'm not benefitting from it in any real way. But for text, I think we already lost that argument. To the average person, LLMs are too useful to reject them on some ultimately muddied arguments along the lines of "it's OK for humans to train on books, but it's not OK for robots". Mind you, it pains me to write this. I just think that ship has sailed.
I think we have a better shot at making that argument for music, visual art, etc. Most of it is utilitarian and most people don't care where it comes from, but we have a cultural heritage of recognizing handmade items as more valuable than the mass-produced stuff.
I don't think that ship has sailed as far as you suggest: there are strong proponents of LLMs/GenAI, but IMO not many more than there were for NFTs, cryptocurrencies, and other technologies that ultimately did not hit mainstream adoption.
I don't think GenAI or LLMs are going away entirely - but I'm not convinced that they are inevitable and must be adopted, either. Then again, I'm mostly a hold-out when it comes to things like self checkout, too. I'd rather wait a bit longer in line to help ensure a human has a job than rush through self-checkout if it means some poor soul is going to be out of work.
Sadly, I agree. That's why I removed my works from the open web entirely: there is no effective way for people to protect their works from this abuse on the internet.
> To the average person, LLMs are too useful to reject them
The way LLMs are now, the average person outside of the tech bubble has no use for them.
> on some ultimately muddied arguments along the lines of "it's OK for humans to train on books, but it's not OK for robots"
This is a bizarre argument. Humans don't "train" on books, they read them. This could be for many reasons, like to learn something new or to feel an emotion. The LLM trains on the book to be able to imitate it without attribution. These activities are not comparable.
I feel like plagiarism is an appropriate analogy. A student can always argue that they still learned something from it and yada yada, and there's probably some truth in that. However, we still reject it on principle in a pretty binary manner. I believe the same reasoning applies to LLM artifacts too, at least in spirit.
Depletion of local water resources is a good thing to measure. But by and large, this is not what we're measuring. Instead, we're coming up with absurd statistics that imply any water put to beneficial use just disappears forever.
If your tap water comes from a river and flows back to a river, leaving it running mostly just wastes energy.