I dunno anything about China, but whoever made this meme certainly doesn’t know anything about the USA. The idea that “liberals” or anyone else (??) are high-fiving each other over a credit score. lol
Mniot
- 0 Posts
- 11 Comments
Mniot@programming.dev to No Stupid Questions@lemmy.world • ELI5. Limit of current gen AI/LLMs (English)
3 · 28 days ago
The “agents” and “agentic” stuff works by wrapping the core innovation (the LLM) in layers of simple code and other LLMs. Let’s try to imagine building a system that can handle a request like “find where I can buy a video card today. Make a table of the sites, the available cards, their prices, and how they compare on a benchmark.” We could solve this if we had some code like
```python
def handle_request(user_prompt):
    # Ask the LLM to plan the searches, as a JSON list of query strings
    search_prompt = llm(f"make a list of google web search terms that will help answer this user's question. present the result in a json list with one item per search. <request>{user_prompt}</request>")
    results_index = []
    for s in json.loads(search_prompt):
        results_index.extend(google_search(s))
    results = [fetch_url(url) for url in results_index]
    # One sub-LLM call per page, each with its own small context
    summarized_results = [llm(f"summarize this webpage, fetching info on card prices and benchmark comparisons <page>{r}</page>") for r in results]
    return llm(f"answer the user's original prompt using the following context: <context>{summarized_results}</context> <request>{user_prompt}</request>")
```
It’s pretty simple code, and LLMs can write that, so we can even have our LLM write the code that will tell the system what to do! (I’ve omitted all the work to try to make things sane in terms of sandboxing and dealing with output from the various internal LLMs).
The important thing we’ve done here is instead of one LLM that gets too much context and stops working well, we’re making a bunch of discrete LLM calls where each one has a limited context. That’s the innovation of all the “agent” stuff. There’s an old Computer Science truism that any problem can be solved by adding another layer of indirection and this is yet another instance of that.
Trying to define a “limit” for this is not something I have a good grasp on. I guess I’d say that the limit here is the same: max tokens in the context. It’s just that we can use sub-tasks to help manage context, because everything that happens inside a sub-task doesn’t impact the calling context. To trivialize things: imagine that the max context is 1 paragraph. We could try to summarize my post by summarizing each paragraph into one sentence and then summarizing the paragraph made out of those sentences. It won’t be as good as if we could stick everything into the context, but it will be much better than if we tried to stick the whole post into a window that was too small and truncated it.
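The paragraph-summarization idea above can be sketched in a few lines. This is a toy, runnable illustration, not real agent code: `summarize` is a stand-in for an LLM call (it just keeps the first few words), and the names and word budgets are made up for the example.

```python
# Toy sketch of hierarchical summarization under a hard context limit.
# summarize() stands in for an LLM call; it truncates to a word budget
# so the control flow is runnable without a model.

MAX_WORDS = 8  # pretend the model's context window is 8 words

def summarize(text: str, budget: int) -> str:
    """Stand-in for llm("summarize: ..."): keep the first `budget` words."""
    return " ".join(text.split()[:budget])

def summarize_post(paragraphs: list[str]) -> str:
    # Step 1: each paragraph is summarized in its own isolated sub-task,
    # so no single call ever sees more than a paragraph of input.
    sentences = [summarize(p, budget=2) for p in paragraphs]
    # Step 2: the calling context only ever sees the short summaries,
    # which are joined and summarized once more.
    return summarize(" ".join(sentences), budget=MAX_WORDS)

post = [
    "LLM agents wrap a core model in layers of orchestration code",
    "Each sub task gets its own small context window",
    "Indirection trades fidelity for the ability to scale",
]
print(summarize_post(post))  # → "LLM agents Each sub Indirection trades"
```

The point of the structure: the intermediate paragraphs never enter the top-level context, only their one-line summaries do.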
Some tasks will work impressively well with this framework: web pages tend to be a TON of tokens but maybe we’re looking for very limited info in that stack, so spawning a sub-LLM to find the needle and bring it back is extremely effective. OTOH tasks that actually need a ton of context (maybe writing a book/movie/play) will perform poorly because the sub-agent for chapter 1 may describe a loaded gun but not include it in its output summary for the next agent. (But maybe there are more ways of slicing up the task that would allow this to work.)
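The “needle in a haystack” case is worth a tiny sketch too. Again this is a hypothetical stand-in: `extract` plays the role of a sub-LLM that reads the whole page but returns only the requested fact, so the caller’s context grows by a few tokens instead of thousands (here it’s just a regex so the example runs).

```python
# Toy illustration of the needle-extraction sub-task pattern.
import re

def extract(page: str, pattern: str) -> str:
    """Stand-in for llm(f"find {pattern} in <page>{page}</page>")."""
    match = re.search(pattern, page)
    return match.group(0) if match else "not found"

# A "web page" that is mostly noise around one useful fact.
page = ("nav nav nav " * 200) + "RTX 4070: $549" + (" footer " * 200)

needle = extract(page, r"RTX \d+: \$\d+")
print(needle)  # only this short string returns to the caller
```

Only `needle` flows back into the calling context; the 4,000-character page stays inside the sub-task.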
With these examples it’s basically impossible to tell whether it’s totally fabricated; true but only happens a small percentage of the time; true and happens most of the time, but you got lucky; or true and reliable, but the company has since patched this specific case because it blew up online.
Mniot@programming.dev to Games@lemmy.world • The developers of PEAK, explaining how they decided on pricing for their game (English)
7 · 2 months ago
It works against the general population. If this particular one doesn’t, don’t get too busy strutting; there is almost certainly something else that does work on you.
That is very well put! I feel like I’ve talked to so many people who see one ad that doesn’t land and say, “ads don’t work on me.”
Mniot@programming.dev to Selfhosted@lemmy.world • Yunohost, the *arr stack and VPN (English)
41 · 2 months ago
Set your expectations: networking is complex, and the configuration you’re hoping for is particularly complex. It sounds to me like you’re looking for a split-horizon configuration where local traffic stays local but internet traffic is routed over the VPN. But you also want that configuration only for specific apps.
It’s not the *arr programs that are tricky, it’s that any service you try to configure this way will be some of the hardest sysadmin work.
This is a confusing meme, because in the skit the parrot is dead when bought. It had been nailed to its perch to make it look alive and the seller told a series of funny lies. The bulk of the joke is how unreliable the pet shop clerk is.
In contrast, Twitter was obviously in bad shape (that’s why Musk was making a buy offer in the first place), the board was not interested in selling until he made a wildly overvalued offer, and Musk immediately tried to back out of the sale and had to be forced to comply with the deal he’d made.
Come on. Obama’s continuation of presidential power-creep is not what enables Trump. Trump getting elected, taking over the Supreme Court, getting elected again, having all the billionaires lick his ass, etc is what enables him.
If you wanna lay it on Obama, blame him for not taking the right wing seriously enough and going after them at the local level where they had been building strength for the past 50+ years. Or for not betraying all his moderate-conservative supporters to implement some seriously progressive policy.
Mniot@programming.dev to No Stupid Questions@lemmy.world • Is it realistic to hope that lemmy grows to the size of the bigger social media platforms? (English)
0 · 7 months ago
This, but in a hopeful voice instead of sarcastic 🙂
(Being surrounded by people who think more progressively will tend to shift people’s views)
Mniot@programming.dev to memes@lemmy.world • Then they will ask why nobody wants to use their payment cards (English)
0 · 8 months ago
Because (like genAI) crypto-coin people as a general population will not shut up and it gets annoying to keep hearing the same spiel. And it’s an insulting one, about how everyone not on the Bitcoin train is a stupid loser and we’ll be kissing their ass and wishing we were them when the whole thing really rockets off. Sometimes that part isn’t entirely explicit, but I hear it in almost every pro-Bitcoin rant.
If you were part of the billionaire class, you’d be paying less. Sucker.

OK but I would high-five those people. It’s harder to fight capitalism if you’re also fighting health problems!