After just spending 15 minutes trying to get anything useful at all accomplished with the latest Apple Intelligence beta on an M1 iPad Pro (16 GB RAM), this article appealed to me!
I have been running the 32B-parameter qwen2.5-coder model on my 32 GB M2 Mac and it is a huge help with coding.
The llama3.2-vision model does a great job processing screenshots. Small models like smollm2:latest can process a lot of text locally, very fast.
Open source front ends like Open WebUI are improving rapidly.
All the tools are lining up for do-it-yourself local AI.
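To make that concrete: a minimal sketch of driving locally pulled models through the ollama Python client (pip install ollama). The model tags are just examples; substitute whatever you have pulled with ollama pull.

```python
# Minimal sketch: querying local models via the ollama Python client.
# Assumes the Ollama server is running and the tagged models were pulled.
import ollama

# Coding help from a larger local model
resp = ollama.chat(
    model="qwen2.5-coder:32b",
    messages=[{"role": "user", "content": "Write a Python function that merges two sorted lists."}],
)
print(resp["message"]["content"])

# Fast local text processing with a small model
resp = ollama.chat(
    model="smollm2:latest",
    messages=[{"role": "user", "content": "Summarize: " + open("notes.txt").read()}],
)
print(resp["message"]["content"])
```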
The only commercial vendor right now that I think is doing a fairly good job at an integrated AI workflow is Google. Last month I had all my email directed to my Gmail account, and the Gemini Advanced web app did a really good job integrating email, calendar, and Google Docs. Job well done. That said, I am back to using ProtonMail and trying to build local AIs for my workflows.
I am writing a book on the topic of local, personal, and private AIs.
I wrote a script to queue and manage running llama vision on all my images, writing the results to an SQLite db used by my Media Viewer, and now I can do text or vector search on it. It's cool to not have to rely on Apple or Google to index my images and obfuscate how they're doing it from me. Next I'm going to work on a pipeline for doing more complex things, like multiple frames in a video, or doing multiple passes with llama vision or other models to separate out the OCR, description, and object/people recognition. Eventually I want to feed all of this into https://lowkeyviewer.com/ and have the ability to manually curate the automated classifications and text.
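For anyone curious, the core of such a pipeline can be quite small. A rough sketch, assuming the ollama Python client (the model tag, schema, and prompt are illustrative guesses, not the actual script):

```python
# Sketch: caption every image with a local vision model and store results
# in SQLite FTS5 for text search. Paths, model tag, and schema are made up.
import sqlite3
from pathlib import Path

import ollama

db = sqlite3.connect("media.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS captions USING fts5(path, description)")

for img in Path("~/Pictures").expanduser().rglob("*.jpg"):
    resp = ollama.chat(
        model="llama3.2-vision",
        messages=[{
            "role": "user",
            "content": "Give a direct, literal, objective description of this image.",
            "images": [str(img)],
        }],
    )
    db.execute("INSERT INTO captions VALUES (?, ?)", (str(img), resp["message"]["content"]))
    db.commit()

# Full-text search over the generated descriptions
for (path,) in db.execute("SELECT path FROM captions WHERE captions MATCH 'beach sunset'"):
    print(path)
```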
I'm curious why you find descriptions of images useful for searching. I developed a similar flow and ended up embedding keywords into the image metadata instead. It makes them easily searchable and not tied to any databases, and it is faster (dealing with tens of thousands of images personally).
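For illustration, the metadata route can be as simple as shelling out to exiftool (assuming it is installed); the keywords travel with the files, and any EXIF/IPTC-aware tool can search them:

```python
# Sketch: write searchable keywords into image metadata with the exiftool
# CLI. Keyword values and paths are placeholders.
import subprocess

def add_keywords(path: str, keywords: list[str]) -> None:
    args = [f"-Keywords+={kw}" for kw in keywords]
    subprocess.run(["exiftool", "-overwrite_original", *args, path], check=True)

add_keywords("IMG_0001.jpg", ["beach", "sunset", "family"])

# List files under a tree whose Keywords mention "beach"
subprocess.run(["exiftool", "-r", "-q", "-if", "$Keywords =~ /beach/i", "-filename", "/photos"])
```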
It's not as good as tags, but it does pretty OK for now, especially since searching for specific text in an image is something I want to do a lot. I'm trying to work on getting llama to output according to a user-defined tagging vocabulary/taxonomy and ideally learn from manual classifications. Kind of a work in progress there.
This is the prompt I've been using.
"Create a structured list of all of the people and things in the image and their main properties. Include a section transcribing any text. Include a section describing if the image is a photo, comic, art, or screenshot. Do not try to interpret, infer, or give subjective opinions. Only give direct, literal, objective descriptions of what you see."
> I'm trying to work on getting llama to output according to a user defined tagging vocabulary/taxonomy and ideally learn from manual classifications. Kind of a work in progress there.
Good luck with that. The only thing that I found that works is using GBNF to force it, which slows inference down considerably.
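For reference, this is roughly what forcing a tag vocabulary with GBNF looks like in llama-cpp-python; the grammar, model path, and taxonomy are placeholders:

```python
# Sketch: constrain output to a fixed tag vocabulary with a GBNF grammar.
# Constrained decoding like this is what slows inference down.
from llama_cpp import Llama, LlamaGrammar

TAXONOMY = ["photo", "screenshot", "comic", "art", "document"]
gbnf = 'root ::= tag ("," tag)*\ntag ::= ' + " | ".join(f'"{t}"' for t in TAXONOMY)

llm = Llama(model_path="model.gguf")
out = llm(
    "Classify this image caption using only the allowed tags: a phone screen capture of a chat.",
    grammar=LlamaGrammar.from_string(gbnf),
    max_tokens=32,
)
print(out["choices"][0]["text"])  # e.g. "screenshot"
```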
I can't speak to the OP's decision, but I also have a similar script set up that runs a combination of YOLO, bakllava, tesseract, etc., and puts the output, along with a URI reference to the image file, into a database.
I actually store the data in the EXIF as well, but the nice thing about having a database is that it's significantly faster than attempting to search hundreds of thousands of images across a nested file structure, particularly since I store a great deal of media on a NAS.
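A stripped-down sketch of that kind of indexer, assuming the ultralytics and pytesseract packages (the schema, weights file, and NAS path are illustrative):

```python
# Sketch: object detection (YOLO) + OCR (tesseract) indexed into SQLite,
# keyed by a file URI so the media can live anywhere, e.g. on a NAS.
import sqlite3

import pytesseract
from PIL import Image
from ultralytics import YOLO

detector = YOLO("yolov8n.pt")
db = sqlite3.connect("index.db")
db.execute("CREATE TABLE IF NOT EXISTS media (uri TEXT, objects TEXT, ocr_text TEXT)")

def index_image(path: str) -> None:
    result = detector(path)[0]
    objects = ",".join(result.names[int(box.cls)] for box in result.boxes)
    text = pytesseract.image_to_string(Image.open(path))
    db.execute("INSERT INTO media VALUES (?, ?, ?)", (f"file://{path}", objects, text))
    db.commit()

index_image("/mnt/nas/photos/IMG_0001.jpg")
```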
Another thought: OpenAI has done a good enough job productizing ChatGPT with advanced voice mode and now also integrated web search. I don’t know if I would trust OpenAI with access to my Apple iCloud data, Google data, my private GitHub repositories, etc., but given their history of effective productization, they could be a multi-OS/platform contender.
Still, I would really prefer everything running under my own control.
I would interpret that as somebody working on ML algorithms and architectures, not somebody developing a product that uses some form of AI at runtime...
correct - but "developing a product that uses some form of AI at runtime" is a job that people do and the community has consolidated around "AI engineer" as shorthand https://www.latent.space/p/ai-engineer
I argue that it's useful to have these shorthands to quickly understand what people do. It's not so nice to be default-suspicious of a perhaps lower-technical-difficulty job that nevertheless a lot of companies want and a lot of people do.
But we think it is, due to our learned industriousness, which is the opposite of learned helplessness. We associate difficulty with reward. And we feel discomfort when really valuable things become very easy. Strange times.
I haven't seen this sort of work called AI Developer yet, but I may have missed the trend shift. Isn't the convention for this still to use the title of Machine Learning Engineer (MLE)?
There is another thread here that seems to confirm it's me who has missed a trend shift. I didn't know this term had a special meaning. I just followed the thought that an x-developer is somebody developing x.
Are you saying that true Data Engineers typically do more than just use Tableau or run OLAP queries, or do you see the title 'Data Engineer' itself as a bit of a red flag these days? I’m pretty early in my career and was leaning toward Data Engineering, but hearing stuff like this makes me wonder if going for SWE might be smarter.
Yep, that's where I'm at - the number of times people have talked to me about the most recent podcast where they've heard how a new tool will solve all their problems, when really it's some samey mix of CDC and bad Python+SQL, is... a lot.
I think there's not a ton of political palatability for realizing most of their projects are like one API and a sane use of some SQL away.
For starters, Engineer only makes sense if the person actually holds an engineering degree, taken at an institution validated by the Engineering Order.
That's a legalism that isn't universal. Personally, I think that anyone who engages in engineering is logically an engineer. Maybe not a certified engineer, but an engineer nonetheless.
"The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property"[1]
I feel the same way about “UX Designer/Engineer”. Seems to mean someone who can put a wireframe together but has no design chops. Any good designer that has experience crafting user interfaces should be skilled at thinking through in-depth about what the user is doing when using the product and how to successfully guide them through that process.
You would be surprised how many people have no idea what "machine learning" means (not even technical definition but just as in the field). I'm working on a PhD in an adjacent field and basically have to tell people I work in AI for them to have any semblance of what I do.
I don't think telling people that have no idea what machine learning is that you work with 'AI' is giving them any semblance of understanding what you do.
No, it's not crazy at all. Back in the early noughties everyone was associating themselves with the Internet and the WorldWideWeb (yes, they used to spell that out). The same thing is happening today with AI. It is irritating ...
Well, most are called "project manager" now. But it would still be a giant red flag, just like the project manager job title or even worse, using PM so you don't know exactly what it means.
Of the great developers I have worked with in real life, across a large number of projects and workplaces, very few have any Github presence. Most don't even have LinkedIn. They usually don't have any online presence at all: No blog with regular updates. No Twitter presence full of hot takes.
Sometimes this industry is a lot like the "finance" industry: People struggling for credibility talk about it constantly, everywhere. They flex and bloviate and look for surrogates for accomplishments wherever they can be found. Peacocking on github, writing yet another tutorial on what tokens are and how embeddings work, etc.
That obviously doesn't mean in all cases, and there are loads of stellar talents that have a strong online presence. But by itself it is close to meaningless, and my experience is that it is usually a negative indicator.
I think the actual truth is somewhere halfway. Hard to not have any GitHub presence if you use enough open source projects, since you can't even report bugs to many of them without an account.
But if you mostly mean in the sense that they don't have a fancy GitHub profile with a ton of followers... I agree, that does seem to be the case.
LinkedIn on the other hand... I sincerely can't imagine networking on LinkedIn being a fraction as useful as networking... Well, at work. For anyone that has a decent enough resume LinkedIn is basically only useful if you want a gutter trash news feed and to be spammed by dubious job offers 24/7. Could be wrong, but I'm starting to think you should really only stoop to LinkedIn in moments of desperation...
Coding since the '90s - I do not have a GitHub account at all that I ever used for anything. Of the top, say, 10 people I have ever worked with, not a single one has a GitHub account.
There is an entire industry of devs who work on meaningful projects, independently or for an employer, who solve amazing problems and do amazing sh*t, none of which is public or will ever be public.
I have signed more NDAs in my career than I have commits in public repos :)
I'm younger, but not new to programming (started in the early 2000s). Mentality-wise, I like working on software because I like solving problems, but I only really do a day job to make a living, not to scratch an itch, so I continue to work on software outside of work hours. I basically don't care what happens once my software reaches the Internet. I publish some open source software, but not as a portfolio; just for the sake of anyone who might find it useful. I don't really advertise my projects, and certainly don't go out of my way to do so.
I basically have GitHub for roughly two reasons:
- I use a wide variety of open source software. I like to fix my own problems and report issues. Some projects require GitHub for this.
- May as well take advantage of the free storage and compute, since everyone else is.
Honestly, my GitHub is probably a pretty bad portfolio. It is not a curation of my best works, and I disabled as much of the "social" features as possible. It's just code that I write, which people may take or leave.
Earlier in my career maybe I cared more, but I honestly think it's because I thought you were supposed to care.
I have worked with some stellar programmers from many different places, probably more than I really deserved to. I don't know for sure if any of them didn't have GitHub accounts, but some of them definitely did have GitHub accounts, barren as they may have been.
In the same boat. The problem is you get stuck in a few companies due to the small circle of people who know what you can really do. It really limits your career options. It doesn't matter if you regularly do in a week what a few hundred engineers cannot do in a year if no one knows it. You get stuck in a big corp, as they will pay you just enough to keep you, nothing close to your worth, because with no brand you're not going to find better anywhere else.
> I have signed more NDAs in my career than I have commits in public repos :)
Someone like you is extremely experienced and skilled, and has a reputation in your industry. You started working before it was normal and trivial to build stuff in public. Such activities were even frowned upon if I recall correctly (a friend got fired merely for emailing a dll to a friend to debug a crash; another was reprimanded for working on his own projects after hours at the office, even though he never intended to ever sell them).
That you have a reputation means posting work publicly would be of little to no value to your reputation. All the people who need to know you already do or know someone reputable who does.
As a hiring manager, I find LinkedIn profiles a huge window into someone's professional side, just as GitHub may be a huge window into their technical side.
As a candidate I think LinkedIn is absolute trash - and toxic, with all the ghost jobs and dark patterns. Feels like everyone else has the same opinion, or at the very least it is a common one. But when I have 50 candidates to review for 5 open positions, LinkedIn gives me great insight that I cannot really get anywhere else. I keep my profile current, not for LinkedIn itself, but for the person researching me to see if I am a valid candidate for their role.
Eh, I've found LinkedIn moderately useful as a way to apply for jobs and share my CV. A few application forms have a field for it too.
Still have no idea how anyone can even hope to use the news feed there though. It's just a seemingly random torrent of garbage 24/7, with the odds of you getting any traction being virtually non existent.
> I sincerely can't imagine networking on LinkedIn being a fraction as useful as networking.
I removed my profile some time ago. While I agree with you in general, I have to say that initially LinkedIn was actually not bad and I did get some value out of it. It is a little harder now to not have it, because initial interviewers scoff at its absence (it apparently sends a negative signal to HR people), but an established network of colleagues can overcome that.
I guess it is a little easier for people already established in their career. I am still kinda debating some sort of replacement for it, but I am honestly starting to wonder if GitHub is just enough to stop HR from complaining.
Well said! Have you seen the titles and summaries of those LinkedIn profiles? Not an ounce of humility. I'm afraid it's only going to get worse with "AI".
> It's really not that difficult to show what you've accomplished if you claim to be in a field.
Actually it is incredibly difficult, because you no longer have access to your previous employers' code bases and even if you do, it is illegal for you to show it to anyone.
> Actually it is incredibly difficult, because you no longer have access to your previous employers' code bases.
So the person never does anything outside of their employer's IP? That's unfortunate, but as a heuristic, I'd like to see stuff that the person has done if they claim to be in a field.
Perhaps other people don't care, and will be convinced by expertise without evidence, but I'm busy, and my life is hard enough already: show me your code or gtfo. :-)
It takes a special someone to work 40-50 hrs per week, writing hard creative software, then go home and write hard creative software in a different domain, while also balancing family/life.
Also, unless you are in CA, many companies have extensive IP assignment clauses, which makes moonlighting on other projects potentially questionable (especially if they are assholes).
My previous job made it hard to even submit bugs/fixes to open source projects we used internally. Often we just forked b/c bureaucracy (there's a reason it was my previous job)
Not saying you're wrong; seeing someone's code is nice. As long as you are aware that you are leaving a lot on the table by excluding those that do not have a presence (particularly older folks with kids).
Newsflash: the majority of working, paid developers do not do any programming outside of their employer's IP.
Someone who worked on successful projects that shipped and are still out there can point you to those. You can buy the app, or the device with their embedded code, or use the website, or whatever. Not always an option for everyone, or not all the time.
That's one reason why there are skill tests in interviews. And why people ask for, and contact, references.
Public code can't be trusted. If you make that the yardstick for hiring people, then everyone and his dog will spin up some phony GitHub repo with stuff that can't be confirmed to have been written by them. Goodhart's Law and all that.
You have no idea how much help someone had with the code in their github repo, or to what extent it is cribbed from somewhere else. Enter AI into the picture now, too.
When assessing a candidate who didn't come with a reliable recommendation or similar short-circuiting, I spend a short time chatting to learn a little about their personality, then I ask for code, suggesting they show a public repo where they keep some of their personal stuff.
If they can't, I give them the option to write some code that they like and that they think shows off what they can do, usually suggesting they spend half an hour to a couple of hours on it.
To me it's an obvious red flag if there is nothing. It's as if talking to a graphics designer or photographer and they're like "no, sorry, I can't show a portfolio, I've only done secretive proprietary work and have never done anything for fun or practice".
Those that show me something get to talk about it with me. A casual chat about style, function, possible improvements and so on. Usually when they have nothing to show they also don't put in that half an hour to make something up, or get back with excuses about lack of inspiration or whatever, and that's where I say "thanks, but no thanks, good luck".
If you can't easily initiate a git repo and whip something up and send it to me in half an hour, you won't be a good fit. It means you aren't fluent and lack experience. I might consider an internship, or a similar position where you mainly do something else; perhaps you're good at managing Linux servers or something but want to learn how to design programs and develop software as well.
> The author can either explain it or doesn't understand it.
I've never been challenged to explain any of the code my CV points to. I could have made it all up. If they randomly asked about something I have not looked at in a long while, it could actually look like I don't know it! There is so much of it that I would have to study the code as a full-time activity to be able to fluently spout off about any random sample of it.
I think I'm going to rearrange my resume to move the free software stuff farther down and maybe shorten it. It could come across as a negative.
Some hiring people genuinely don't understand why someone would put in a 40-hour week and then do some more on evenings and weekends. Well, I don't know how many; in my career I've heard something along those lines from around two. Not everyone will tell you.
> You left college or high-school and walked straight into a job then learned to code there, or what?
It doesn't describe me, but does almost everyone I've ever known in the field (other than distant strangers met through online free-software-related channels, who are an obviously biased sample).
If you hire an accountant, do you expect to see the books of their other clients? When you choose a doctor, do you expect to see the charts of their prior patients?
And frankly, when you hire a manager or executive, there's not generally a single artifact that you could use to examine their value-add. You can see perhaps the trajectory of a company or a division or the size of their team over time, but you can't see the pile of ongoing decisions and conversations that produce actual value.
I think the flip side regarding code is, the stuff I do for fun outside of my employer's IP is not at all representative of what I do at work, or how. I pick topics that interest me, work in languages that my company doesn't use, etc, and because my purpose is learning and exploration, often it doesn't end up as a finished, working, valuable piece of tech.

I deliberately don't do anything too close to my actual work, both because that just feels like working longer and because I'm concerned it would make ownership of the code a bit fuzzy, and perhaps it would be inappropriate to consider open sourcing. Because my side projects are eclectic and didactic, I rarely put them in a public repo -- but they have served their purpose of teaching me something.

If I shared all of my code side projects, they would show an unfocused person dabbling in a range of areas and not shipping anything useful, because that's what's fun ... whereas at work, I am focused in a few quite narrow areas, working on customer-facing features, because the point is to build what the company needs rather than what I enjoy building.
If I'm shopping for an accountant I will present them with one or two cases and see how they would reason about them. It's not as easy to do with a physician.
The main difference between those professions and people who build software for a living is that they have organisations that predate modernity that keep tabs on their members and kick them out if they misbehave.
We should absolutely build such organisations, but there will be intense confrontations with industry and academia when we try, because capitalists hate when other people unionise and academics think they're the best at evaluating peers.
It's fine that your personal projects aren't polished products. They show your interests and some of how you adapt to and solve problems. It's something you've chosen to do because you wanted to, and not something you did because you were pressured by profit. The everyday grind at work wouldn't show what you'd do under more exceptional circumstances, which is where your personal character and abilities might actually matter, but what you do for fun or personal development likely does.
Even if an ML/AI/software engineer has a public GH with projects on it, there's no strong reason to expect it will be a useful signal about their expertise.
> Even if an ML/AI/software engineer has a public GH with projects on it, there's no strong reason to expect it will be a useful signal about their expertise.
That's only true if you don't know how to read code. I simply read their code and based on my level of expertise, I can determine if someone is at least at my level or if they are incompetent.
I can't tell if you're deliberately ignoring the point: people's public hobby projects may not be _about_ their area of expertise. They may be side projects specifically because they are outside of their main area. It isn't about ability to read code. It's about the difference between what you know well enough to build a career in (and produce non-public work) and what's interesting enough that you'll build it and give it away for fun. They may have very little to do with one another, even if both are expressed in code.
> I can't tell if you're deliberately ignoring the point: people's public hobby projects may not be _about_ their area of expertise.
I suspect it's you who's ignoring the entire conversation.
I initially replied saying that I don't trust someone who says they are an AI engineer yet have no code to enthusiastically show publicly. I used "show your GitHub" as a euphemism for "show your work".
> It's about the difference between what you know well enough to build a career in (and produce non-public work) and what's interesting enough that you'll build it and give it away for fun. They may have very little to do with one another, even if both are expressed in code.
This statement just tells me you don't know how to read code or haven't read much code. Code is just an expression of how an individual solves technical problems using algorithms, which are just a sequence of steps the CPU runs. There's no magical specificity in a field that prevents programming from being a transferable, knowable skill for the reader or the writer.
Or perhaps you're just being hard-headed and choosing to miss my point because you really want to oppose something someone said. I don't know.
> Saying "GitHub" is just a way of saying: "Show me what you've accomplished."
Do you actually think all development happens in public GitHub repos? Do you even think a majority does? Even a strong minority?
Across a number of enormous, well-known projects I've worked on, covering many thousands of contributors, including several very well known names, 0% of it exists in public GitHub repos. The overwhelming bulk of development is happening in the shadows.
If your "field" is "open source software", then sure. But if you're confused into thinking GitHub -- at least the tiny fraction that you can see -- is "the field" of software development, or even just of generally providing solutions, I can understand your weird belief about this.
Of course it does, your resume is a critical part of selling your skills & services to employers. Want to close faster and for more $$$? Demonstrate your value prop in the terms they know and care about.
Agree that the use of "AI engineers" is confusing. Think this blog should use the term "engineering software with AI-integration" which is different from "AI engineering" (creating/designing AI models) and different from "engineering with AI" (using AI to assist in engineering)
The term AI engineer is now pretty well recognised in the field (https://www.latent.space/p/ai-engineer), and is very much not the same as an AI researcher (who would be involved in training and building new models). I'd expect an AI engineer to be primarily a software developer, but with an excellent understanding of how to implement, use and evaluate LLMs in a production environment, including skills like evaluation and fine-tuning. This is not some skill set you can just bundle into "software developer".
You find issues when they surface during your actual use case (and by "smoke testing" around your real-world use case). You can often "fix" issues in the base model with additional training (supervised fine-tuning, reinforcement learning w/ DPO, etc).
There's a lot of tooling out there making this accessible to someone with a solid full-stack engineering background.
Training an LLM from scratch is a different beast, but that knowledge honestly isn't too practical for everyday engineers: even if you had the knowledge, you wouldn't necessarily have the resources to train a competitive model. Of course, you could command a high salary working for the orgs that do have those resources! One caveat is that there are orgs doing serious post-training, even with unsupervised techniques, to take a base model and reeaaaaaally bake in domain-specific knowledge/context. Honestly, I wonder if even that is all that inaccessible to pull off. You get a lot of wiggle-room and margin for error when post-training a well-built base model because of transfer learning.
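As one example of that accessible tooling, here is a minimal supervised fine-tuning sketch with Hugging Face transformers. The model name and dataset file are placeholders; real post-training needs careful data preparation and evaluation on top of this.

```python
# Minimal SFT sketch: continue training a small causal LM on a domain corpus.
# Model, data file, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "HuggingFaceTB/SmolLM2-135M"  # small enough to try locally
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.pad_token or tok.eos_token  # ensure padding is defined
model = AutoModelForCausalLM.from_pretrained(model_name)

ds = load_dataset("text", data_files="domain_corpus.txt")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
            remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```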
I feel like I see this comment fairly often these days, but nonetheless, perhaps we need to keep making it - the AI generated image there is so poor, and so off-putting. Does anyone like them? I am turned off whenever I see someone has used one on a post, with very few exceptions.
Is it just me? Why are people using them? I feel like objectively they look like fake garbage, but obviously that must be my subjective biases, because people keep using them.
Some people have no taste, and lack the mental tools to recognize the flaws and shortcomings of GAN output. People who enthuse about the astoundingly enthralling literary skills of LLMs tend to be the kind of person who hasn't read many books. These are sad cases: an undeveloped palate confusing green food coloring and xylitol for a bite of an apple.
Some people can recognize these shortcomings and simply don't care. They are fundamentally nihilists for whom quantity itself is the only important quality.
Either way, these hero images are a convenient cue to stop reading: nothing of value will be found below.
In a world where anyone can ask an LLM to gish-gallop a plausible facsimile of whatever argument they want in seconds, it is simply untenable to give any piece of writing you stumble upon online the benefit of the doubt; you will drown in counterfeit prose.
The faintest hint that the author of a piece is a "GenAI" enthusiast (which in this case is already clear from the title) is immediate grounds for dismissing it; "the cover" clearly communicates the quality one might expect in the book. Using slop for hero images tells me that the author doesn't respect my time.
I just don't understand how he didn't take 10 seconds to review the image before attaching it. If the image is emblematic of the power of AI, I wouldn't have a lot of faith in the aforementioned company.
If you're going to use GenAI (stable diffusion, flux) to generate an image, at least take the time to learn some basic photobashing skills, inpainting, etc.
A trend I see in the "frothiest" parts of the AI world is an inattention to details and overwhelming excitement about things just over the horizon. LLMs are clearly a huge deal and will be changing a lot of things and also there are a bunch of folks wielding it like a blunt object without the discernment to notice that they're slightly off. I'm looking forward to the next couple of decades but I'm worried about the next five years.
You aren't exaggerating! There are some creepy arms in that image, along with the other weirdness. I'm surprised Karpathy of all people used such a poor quality image for such a post.
I don't find the image poor, but somehow I see immediately that it is generated, because of the style. And that simply triggers the 'fake' flag in the back of my head, which has this bad subjective connotation. But objectively I believe it is a very nice picture.
I think AI images can be very nice, and I like to use them myself. I don't use images I don't personally like very much. So if you don't like them, it is not because of AI; it is because your taste and my taste don't match. Or maybe you would like them if you didn't have a bias against AI. What I love about AI images is that you can often generate very much the thing you want. The only better alternative would be to hire an actual human to do that work, and the difference in price there is huge, of course.
It is like standing in front of a Zara, and wondering why people are in that shop, and not in the Versace shop across town. Surely, if you cannot afford Versace, you rather walk naked?
Large Language Models (LLMs) don’t fully grasp logic or mathematics, do they? They generate lines of code that appear to fit together well, which is effective for simple scripts. However, when it comes to larger or more complex languages or projects, they (in my experience) often fall short.
But humans aren’t either. We have to install programs in people for even basic mathematical and analytical tasks. This takes about 12 years, and is pretty ineffective.
No we don't. Humans were capable of mathematics and analytical tasks long before the establishment of modern twelve-year education. We don't require "installing" a program to do basic mathematics; we would never have figured out basic agriculture or developed civilization or seafaring if that were the case. I mean, Eratosthenes worked out the circumference of the Earth to a reasonable degree of accuracy in the third century BC. Even primitive hunter-gatherer societies had concepts of counting, grouping, and sequence that are beyond LLMs.
If humans were as bad as LLMs at basic math and logic, we would consider them developmentally challenged. Yet this constant insistence that humans are categorically worse than, or at best no better than, LLMs persists. It's a weird, almost religious belief in the superiority of the machine even in spite of obvious evidence to the contrary.
Normally, I'd just dismiss this as ChatGPT being trained not to disagree, but as a non-native speaker this has me doubting myself: Is "prompt" really pronounced "rompt"?
It feels like it can't possibly be true, but on the other hand, I'm probably due to have my confidence in my English completely shattered again by learning another weird word's real pronunciation, so maybe this is it.
I'm a native English speaker, and prompt is pronounced "promt" (/prɒmt/ in my roughly General American accent). I.e., there is a silent "p", but it's the second one, not the first.
In my Ohio dialect (which is supposedly standard), I pronounce both p's, but the second one is only half-pronounced. If I pronounce it quickly, there isn't the slight puff of air between the two, so the sound has the beginning sound as a first bit of 'p' and the ending sound of the last part of 't'. This sounds very similar to /promt/, but is not actually the same.
I am not a native English speaker either, but I am fairly certain it's pronounced "promt", with the second "p", the one between the "m" and the "t", merging into those sounds to the point of being inaudible itself.
Also, I too asked ChatGPT, and it told me that in the word "prompt," there are no silent P's. All the letters in "prompt" are pronounced, including the P.
You might say this is about Helix being small and trying to break into a crowded market, but OpenAI and Google offered similar contests / offers that asked users to submit ideas for LLM applications. Considering how many LLM sample apps are either totally useless ("Walter the Bavarian, a chatbot who gives trivia about Oktoberfest!") or could be better solved by classical programming ("a GPT that automatically converts currencies to USD!"), it seems AI developers have struggled to find a single marketable use case for LLMs outside of codegen.
At a very minimum, I'd say they'll have a way to "chat" with the apps to ask questions / do stuff, either via APIs that the OS calls or integrated in the app via whatever frameworks rise to handle this. In 5 to 10 years we'll probably see this. At a very minimum, searching docs and guiding the user through the functionality / doing it straight up when asked.
Basically what ChatGPT did for chatbots, but at the app level. There are lots of apps that take a long time to master, but the average joe doesn't need to master them. If I want to lightly edit some photos, I know Photoshop can do it, but I have no clue where that specific thing is in the menus, because I haven't used it in 10 years. But it would be cool to type in a chat box "take all the pictures from my SD card, adjust the colors, straighten the ones that need it, and put them in my Pictures folder under 'trip to the sea'". And then I can go do something else for the 30-60 minutes it would have taken me to google how to do all of that, or script something, etc.
The idea of an assistant that can work like that isn't that far-fetched today, IMO. The apps need to expose some APIs, and the "OS" needs a language -> action model capable enough to handle basic stuff for average joes. I'd bet good money sonnet3.5 + proper APIs + a bit of fine-tuning could do it today for 50%+ of average user cases.
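As a toy illustration of that language -> action layer: the OS exposes app actions as functions, and a local model picks one by emitting JSON. Everything here (the action names, the dispatch scheme, the model tag) is hypothetical.

```python
# Toy sketch: map a natural-language request to a registered app action.
# The action registry and schema are hypothetical; a real system needs
# validation, confirmation prompts, and error handling.
import json

import ollama

ACTIONS = {
    "import_photos": lambda src, dest: print(f"importing {src} -> {dest}"),
    "adjust_colors": lambda folder: print(f"auto-adjusting colors in {folder}"),
}

SYSTEM = (
    "Translate the user's request into an action. Reply with JSON only: "
    '{"action": "<one of: ' + ", ".join(ACTIONS) + '>", "args": {...}}'
)

resp = ollama.chat(
    model="llama3.2",
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": "Copy the photos from my SD card into Pictures/trip to the sea"},
    ],
    format="json",  # ask the model for structured output
)
call = json.loads(resp["message"]["content"])
ACTIONS[call["action"]](**call["args"])
```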
An AI engineer with some experience today can easily pull down 700K-1M TC a year at a bigtech. They must be unaware that the "barriers are coming down fast". In reality it's a full time job to just _keep up with research_. And another full time job to try and do something meaningful with it. So yeah, you can all be AI engineers, but don't expect an easy ride.
I run an ML team in fintech and am currently hiring. If a résumé came across my desk with this "skill set", I'd laugh my ass off. My job and my team's jobs are extremely stressful because we ship models that impact people's finances. If we mess up, our customers lose their goddamn minds.
Most of the ML candidates I see now are all "working with LLMs". Most of the ML engineers I know in the industry who are actually shipping valuable models, are not.
Cool, you made a chatbot that annoys your users.
Let me know when you've shipped a fraud model that requires four 9's, 100ms latency, with 50,000 calls an hour, 80% recall and 50% precision.
What does 50% precision mean in this case? I know 50% accuracy might mean P(fraud_predicted | fraud) = 50%, but I don't understand what you mean by precision?
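For what it's worth, under the standard definitions (with fraud as the positive class): precision = TP / (TP + FP) and recall = TP / (TP + FN). So 50% precision means half of the transactions the model flags as fraud really are fraud, while 80% recall means it catches 80% of all the fraud that occurs. The conditional in the question above, P(fraud_predicted | fraud), is recall; precision is the reverse, P(fraud | fraud_predicted).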
I mean, sure, anyone can cobble together Ollama and a wrapper API and an adjusted system prompt, or go serious with Bumblebee on the BEAM.
But that's akin to the web devs of old who stitched up some cruft in Perl or PHP and got their databases wiped by someone entering SQL in a username field. Yes, it kind of works under ideal conditions, but can you fix it when it breaks? Can you hedge against all or most relevant risks?
Probably not. Don't put your toys into production, and don't tell other people you're a professional at it until you know how to fix and hedge, and can be transparent about it with the people giving you money.
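For the record, this is the sort of quick "Ollama + wrapper API + system prompt" stitch-up being described: a few lines of FastAPI in front of a local model. The endpoint shape and model tag are made up, and it is exactly the kind of thing that works under ideal conditions but needs auth, rate limiting, and input validation before it deserves the word "production".

```python
# Sketch: the minimal "Ollama behind a wrapper API" setup. Deliberately
# unhardened: no auth, no rate limits, no input validation.
import ollama
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
SYSTEM_PROMPT = "You are a helpful support assistant for ACME Corp."  # hypothetical

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    resp = ollama.chat(
        model="llama3.2",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": req.message},
        ],
    )
    return {"reply": resp["message"]["content"]}
```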