@faoluin My main way of avoiding anger is using word filters and unfollowing people if necessary (it hasn't been yet). My timeline is already well-curated, so I mostly needed to filter AI stuff. I would filter politics, but it's already rare enough in my timeline that I'm fine with it. It's similar on other platforms.
Apart from that, if I see a post or article making claims that seem off, I take a step back and think about whether it makes sense, look at how people responded, and don't address or boost it until I know more. This served me well with the recent AI snail post and today.
The current climate on the internet is making me rethink how I want to keep using it. I'm trying to be more mindful and avoid getting sucked into mob dynamics and the overall negativity permeating the web.
@catraxx Oh yeah, that's fine. All I'm saying is that we're approaching a future where the goal of strictly sticking with projects that don't contain AI code will come at the cost of not having any (good) options at all.
I don't think most projects will remain "pure" in terms of LLM involvement, and it isn't feasible to chase this purity without cutting yourself off from a lot of useful things. Better to look at how it's involved instead and base your decisions on that.