The title of this article is total b.s. (although so was flagging this on HN).
However, the tool it mentions is pretty cool, and frankly I'm surprised it even works considering Twitter shut down its APIs a while back. I just tried it out and it helped me connect on Bluesky with over 200 folks I'd been following on Twitter who I didn't know had migrated. Pretty handy... while it lasts.
I have an MBP M1 Max and the only time I really feel like I need more oomph is when I'm doing live previews and/or rendering in After Effects. I find myself having to clear the cache constantly.
Other than that it cruises across all other applications. Hard to justify an upgrade purely for that one issue when everything else is so solid. But it does make the eyes wander...
There was a bunch of reporting on how AI companies and researchers were using tools that ignored robots.txt. It's a "polite request" that these companies had a strong incentive to ignore, so they did. That incentive is still there, so it is likely that some of them will continue to do so.
CommonCrawl[0] and the companies training models I'm aware of[1][2][3] all respect robots.txt for their crawling.
If we're thinking of the same reporting, it was based on a claim by TollBit (a content licensing startup), which was in turn based on the fact that "Perplexity had a feature where a user could prompt a specific URL within the answer engine to summarize it". Actions performed by tools acting as a user agent (like archive.today, a webpage-to-PDF site, or a translation site) aren't crawlers and aren't what robots.txt is designed for, but either way the feature is disabled now.
These policies are much clearer than they were when I last looked, which is good. On the other hand, Perplexity appeared to ignore robots.txt as part of a search-enhanced retrieval scheme, at least as recently as June of this year. The article title is pretty unkind, but the test they used pretty clearly shows what was going on.
> The article title is pretty unkind, but the test they used pretty clearly shows what was going on.
I believe this article rests on the same misunderstanding - it doesn't appear to show any evidence of their crawler, or of web scraping used for training, accessing pages prohibited by robots.txt.
The EU's AI Act points to the DSM Directive's text and data mining exemption, allowing for commercial data mining so long as machine-readable opt-outs are respected - robots.txt is typically taken as the established standard for this.
In the US it is a suggestion (so long as Fair Use holds up) but all I've seen suggests that the major players are respecting it, and minor players tend to just use CommonCrawl which also does. Definitely possible that some slip through the cracks, but I don't think it's as useless as is being suggested.
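For reference, the opt-out itself is just plain text served at /robots.txt. A minimal example - the user-agent tokens below are the ones the respective companies document (GPTBot for OpenAI, CCBot for CommonCrawl, Google-Extended for Google's AI training; check their docs for the current list) - that blocks those crawlers while leaving the site open to everything else:

    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: *
    Allow: /

A compliant crawler fetches /robots.txt before anything else and skips the disallowed paths; nothing here is enforced, which is exactly the "trust" point below.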
Funny. If I can browse to it, it is public, right? That is how some people's logic goes. And how OpenAI argued 2 years ago when GPT-3.5/ChatGPT first started getting traction.
> Technically, robots.txt isn't enforcing anything, so it is just trust.
There's legal backing to it in the EU, as mentioned. With CommonCrawl you can just download it yourself to check. In other cases it wouldn't necessarily be as immediately obvious, but through monitoring IPs/behavior in access logs (or even prompting the LLM to see what information it has) it would be possible to catch them out if they were lying - like Perplexity were "caught out" in the mentioned case.
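As a sketch of the access-log approach (assuming a combined-log-format file at access.log and a disallowed prefix of /private/, both placeholders): grep for hits where a self-identified crawler user agent fetched a path your robots.txt disallows.

    import re

    # Hypothetical disallowed paths and the crawler tokens to audit.
    DISALLOWED_PREFIXES = ("/private/",)
    CRAWLER_TOKENS = ("GPTBot", "CCBot", "PerplexityBot", "ClaudeBot")

    # Combined log format: request line first, user agent in the last
    # quoted field.
    LINE_RE = re.compile(r'"(?:GET|POST|HEAD) (\S+) [^"]*".*"([^"]*)"$')

    with open("access.log") as f:
        for line in f:
            m = LINE_RE.search(line)
            if not m:
                continue
            path, agent = m.groups()
            if (any(tok in agent for tok in CRAWLER_TOKENS)
                    and path.startswith(DISALLOWED_PREFIXES)):
                # A self-identified crawler hit a disallowed path.
                print(line.rstrip())

Of course this only catches crawlers that identify themselves; a dishonest one spoofing a browser user agent needs the IP-range/behavioral analysis mentioned above.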
> Funny. If I can browse to it, it is public, right? That is how some people's logic goes. And how OpenAI argued 2 years ago when GPT-3.5/ChatGPT first started getting traction.
If you mean public as in the opposite of private, I think that's pretty much true by definition. Information's no longer private when you're putting it on the public Internet.
If you mean public as in public domain, I don't think that has been argued to be the case. The argument is that it's fair use (that is, the content is still under copyright, but fitting statistical models is substantially transformative/etc.)
It sincerely pleases me to see the Amiga so rightfully discussed in this article. In the 1980s, Amiga was a magical computer years ahead of so many of its peers (including the PC by miles). Sadly, the video capabilities that made it so special eventually became its Achilles heel.
>Sadly, the video capabilities that made it so special eventually became its Achilles heel.
How weird: I was browsing YouTube last night (with the SmartTube app) and somehow stumbled on a video that discussed this exact thing, basically making the case that Wolfenstein 3D killed the Amiga. It discussed how the unique video capabilities that were great for 2D side-scrollers made it so difficult to get an FPS working well, because apparently the Amiga didn't have direct framebuffer access the way PCs did with VGA mode 0x13.
It certainly has direct framebuffer access. But the bitplane representation, where the bits of each pixel's value are spread out across multiple bytes, can make certain kinds of updates very time-consuming.
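To make the difference concrete, here's a minimal sketch in Python (not actual Amiga code; sizes assume 320x200 with 5 bitplanes, i.e. 32 colors). In a chunky framebuffer like VGA mode 0x13, writing a pixel is a single byte store; in the planar layout, one pixel write means a read-modify-write of one bit in every plane:

    WIDTH, HEIGHT, PLANES = 320, 200, 5  # 32-color Amiga lo-res

    # Chunky (VGA mode 0x13 style): one byte per pixel, one store per write.
    chunky = bytearray(WIDTH * HEIGHT)

    def put_pixel_chunky(x, y, color):
        chunky[y * WIDTH + x] = color

    # Planar (Amiga style): bit n of the color index lives in bitplane n,
    # so a single pixel write touches one bit in each of the five planes.
    row_bytes = WIDTH // 8
    planes = [bytearray(row_bytes * HEIGHT) for _ in range(PLANES)]

    def put_pixel_planar(x, y, color):
        byte_index = y * row_bytes + x // 8
        mask = 0x80 >> (x % 8)  # leftmost pixel is the high bit
        for n in range(PLANES):
            if color & (1 << n):
                planes[n][byte_index] |= mask
            else:
                planes[n][byte_index] &= ~mask

That's fine for blitter-friendly 2D work, where whole planes move at once, but a texture-mapped renderer like Wolfenstein's wants cheap per-pixel writes, which is exactly where planar hurts.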
It didn't exactly kill it. Wolfenstein being feasible on the PC and not the Amiga was just a symptom of stagnation. The Amiga (as a promising commercial venture!) had doom (pun intended) written all over it even before Wolfenstein. Commodore ignored the Amiga for years and years.
Edit: I just recalled something - the Amiga required either a TV or increasingly rare monitors with PAL/NTSC frequencies. You couldn't just walk into a computer shop and buy an Amiga and a VGA-compatible monitor. It was either a flickery, low-resolution monitor or a TV. Not exactly endearing to professionals. I mean, I loved the Amiga, maybe too much; it was always the underdog, but it was increasingly also the losing underdog.
A1200 and A4000 could be hooked into a VGA monitor for the flickerless experience. The caveat was that the flicker-free display modes were added on top of old ones, which meant that, while you could run Workbench and most applications on the VGA monitor, all games ran in the obsolete PAL modes your VGA display couldn’t handle. This created a market for niche dual-mode displays, which solved the problem, but were a bit pricey.
I posit, though, that by the time the Amiga 1200 was out, the Amiga as a commercial venture was already dead in the water. The 1200 was a last-ditch effort. Still loved it, of course.
I remember some shareware with a 40-day trial that my brother ran, but the counter stayed at 40 days: they had removed the clock as a cost-saving measure on the A1200.