
Anyone can learn the technology people call "AI"

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,977
I want to strongly encourage everyone to learn data science and machine learning skills. (While you still can)

What "AI" means morphs from generation to generation. Right now, the main areas are data science and machine learning, in particular deep learning and generative AI.

There are communities that allow people to learn and grow their skills. Kaggle is one of the biggest ones. It is owned by Google, FYI. Make of that fact what you will.

Deeplearning.ai and Coursera have a lot of great courses on the subject. Of particular note are Generative AI for Everyone, and AI for Everyone.

OpenAI has released the capability for people to make their own "GPTs". It is in a really close partnership with Microsoft. Make of that fact what you will. Stability.ai has done a lot of work trying to keep AI community driven. They have a Stable Diffusion Discord (the invite link may expire after a week).

To get a more in-depth understanding, but starting from a basic level:
https://m.youtube.com/watch?v=lYWt-aCnE2U

^The above is the first video in a series by Cassie Kozyrkov. She was the Chief Decision Scientist at Google until she left to do something on her own.

An absolutely amazing (possibly even career launching) course set is the Deep Learning Specialization on Coursera. This is put on by Andrew Ng, Younes Bensouda Mourri, and Kian Katanforoosh.

There are all sorts of other platforms where people try to collaborate and learn. I myself launched a local meetup to discuss these ideas; it started pre-pandemic and restarted after things normalized.

You may want to jump on learning these things before the paywalls get much steeper and regulatory capture sets in (like it did with the cloud/mainframe 2.0). The big companies and institutions already have control, but they still retain the notion of wanting everyone involved, part of the collegiate research culture that got the field this far. With all the money now at stake, and the sci-fi doom story the corporations sold to governments, it is only a matter of time before access requires much steeper paywalls or pedigreed gate-keeping.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,977
There is an "AI for the Rest of Us" set of publications.

I haven't read them, but I found it interesting that such a line of publications exists.
 

ptgatsby

Well-known member
Joined
Apr 24, 2007
Messages
4,476
MBTI Type
ISTP
You may want to jump on learning these things before the paywalls get much steeper and regulatory capture sets in (like it did with the cloud/mainframe 2.0). The big companies and institutions already have control, but they still retain the notion of wanting everyone involved, part of the collegiate research culture that got the field this far. With all the money now at stake, and the sci-fi doom story the corporations sold to governments, it is only a matter of time before access requires much steeper paywalls or pedigreed gate-keeping.

Not that I'd ever disagree with learning something new, and I do view this as a new kind of 'industrial revolution' taking place, but it's hard for the average person to do more than very basic learning on it. I'd encourage people to just pay the 20-30 USD for GPT-4 (or, if you want to hurt yourself, Claude, Mistral, or Gemini)... and almost anyone can run Fooocus these days as a stand-in for Stable Diffusion.

The real value in gaining familiarity is wrapping your head around the conceptual nature of most 'AI' systems: an LLM being a good auto-fill generator, and Stable Diffusion being a hallucinating categorizer.
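That "auto-fill generator" framing is easy to see in miniature. The toy below greedily predicts the most frequent next word from bigram counts over a made-up mini-corpus; real LLMs are vastly more sophisticated, but the autoregressive generate-one-token-then-repeat loop is the same shape:

```python
from collections import Counter, defaultdict

# Toy "language model": bigram counts from a tiny hypothetical corpus.
corpus = "the cat sat on the mat because the cat was tired".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autocomplete(prompt_word, n_tokens=4):
    """Greedily append the most frequent next word, over and over;
    the same autoregressive loop an LLM runs at enormous scale."""
    out = [prompt_word]
    for _ in range(n_tokens):
        nxt = bigrams.get(out[-1])
        if not nxt:
            break  # never seen this word followed by anything
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))
```

Swap the bigram table for a neural network over a huge corpus and sample instead of always taking the top word, and you have the gist of an LLM's decoding loop.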

Also, LLMs and GPTs are damn good for a lot. I think it's a lot like learning to google... which, by the standards of what Google returns these days, might actually be replaced with LLMs. At least, I'm starting to use Bing Chat, heaven forbid, because it gives me better results.
 

ygolo

My termites win
Joined
Aug 6, 2007
Messages
5,977
Not that I'd ever disagree with learning something new, and I do view this as a new kind of 'industrial revolution' taking place, but it's hard for the average person to do more than very basic learning on it. I'd encourage people to just pay the 20-30 USD for GPT-4 (or, if you want to hurt yourself, Claude, Mistral, or Gemini)... and almost anyone can run Fooocus these days as a stand-in for Stable Diffusion.

The real value in gaining familiarity is wrapping your head around the conceptual nature of most 'AI' systems: an LLM being a good auto-fill generator, and Stable Diffusion being a hallucinating categorizer.

Also, LLMs and GPTs are damn good for a lot. I think it's a lot like learning to google... which, by the standards of what Google returns these days, might actually be replaced with LLMs. At least, I'm starting to use Bing Chat, heaven forbid, because it gives me better results.
Getting into building new Generative AI is not for everyone. I agree. But it always starts with just using it.

It's good you bring up running open-source projects like Mistral and Fooocus. I have been a bit neglectful of open source in this thread. It is generally easier for people to use bard.google.com, claude.ai, or chat.openai.com, or to go to stability.ai's Discord for image generation.

As far as computer vision, I have been more interested in segmentation than image generation.

As far as running open-source LLMs, oobabooga, ollama, and Hugging Face are tools I have used to run them. Running locally on my 6 GB VRAM GPU has also been limiting. Though maybe AMD will increase what I can run locally without paying too much.

There are many places to try out heavier models in the cloud. The ones I have used are Google Colab and Kaggle. However, Hugging Face Spaces may be an option if I don't want to stick to completely free resources.

There are differing levels of difficulty and costs for generative AI (as opposed to the more traditional classification and clustering models):
Level 1: Prompt Engineering
This is the level accessible to everyone. It is very similar to search-engine input. However, the variation in skill for Gen AI prompting seems to be significantly greater than for search. To get up to speed (on LLMs), this article is the best I have seen for getting good quickly. Perhaps someone else can post about good image-generation prompt engineering.
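One habit that separates Gen AI prompting from search-engine input is structure: naming a role, supplying context, stating the task, and pinning down the output format. A minimal sketch (the template wording and field names are my own, not from any particular guide):

```python
def build_prompt(role, context, task, output_format):
    """Assemble a structured prompt. Spelling out role, context, task,
    and output format tends to beat a bare one-line question."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Respond as: {output_format}"
    )

prompt = build_prompt(
    role="a patient statistics tutor",
    context="the student knows basic algebra but no calculus",
    task="explain what a p-value is",
    output_format="three short bullet points",
)
print(prompt)
```

The resulting string is what you would paste into (or send via API to) any of the chat interfaces mentioned above.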

Level 2: Adapting the foundation models for your use
At this level I am including fine-tuning, low-rank adaptation (LoRA), retrieval augmentation, pre-grounding, and post-grounding. These all have varying levels of difficulty as well. Techniques and companies are popping up all the time.
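Of these, low-rank adaptation is the easiest to show in miniature. Instead of fine-tuning a full d_out x d_in weight matrix W, LoRA trains two skinny factors B (d_out x r) and A (r x d_in) with small rank r, and serves W + BA. A toy sketch in pure Python (illustrative shapes and values only, nothing from a real model):

```python
def matmul(X, Y):
    """Plain nested-list matrix multiply, enough for this toy."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, B, A):
    """y = (W + B A) x, computed as W x + B (A x), so the full
    d_out x d_in update matrix B A is never materialized."""
    base = matmul(W, x)
    low_rank = matmul(B, matmul(A, x))
    return [[b[0] + l[0]] for b, l in zip(base, low_rank)]

# d_out=4, d_in=3, rank r=1: the adapter has 4*1 + 1*3 = 7 trainable
# numbers instead of the 4*3 = 12 a full update would need.
W = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]  # frozen base weights
B = [[1], [0], [0], [0]]                          # only row 0 is adapted
A = [[0, 0, 2]]
x = [[1], [2], [3]]

print(lora_forward(x, W, B, A))
```

The savings are modest at these toy sizes, but for billion-parameter weight matrices the r(d_out + d_in) adapter is a tiny fraction of the d_out * d_in full update, which is why LoRA fine-tuning fits on consumer GPUs.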

Level 3: Building Foundation Models
This level is mainly reserved for large companies or specialized niches (though often these specialized models are less "foundation" models and more traditional supervised machine learning models--maybe self-supervised or semi-supervised).

This last part is what is scary. Competitive models have gotten too big for research institutions and open collaborators to make -- Mixtral and Llama 2 (which is only sort-of open) are good models, I guess. I have tried both. In terms of capability, they don't compare at all to GPT-4, IME.
 

ptgatsby

Well-known member
Joined
Apr 24, 2007
Messages
4,476
MBTI Type
ISTP
As far as running open-source LLMs, oobabooga, ollama, and Hugging Face are tools I have used to run them. Running locally on my 6 GB VRAM GPU has also been limiting. Though maybe AMD will increase what I can run locally without paying too much.

You can use LM Studio right out of the box, although it isn't open source, and it's a bit tricky getting the generation settings right. However, its search is the best out there in terms of ease. The others depend on basic technical ability, as oobabooga and ollama need a fair bit more experience. Search for 'TheBloke' for decent quantized models.

Unfortunately, unless you spend a lot on hardware, it's out of reach to run anything decent locally. I'm on a 3060 12 GB, and it's just not feasible to run a 70B Q4 model in a decent amount of time.
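The VRAM wall is easy to estimate on the back of an envelope: weight memory is roughly parameter count times bits per weight divided by 8, before you even count the KV cache and activations. A quick sketch:

```python
def weight_gb(n_params_billion, bits_per_weight):
    """Approximate memory for the weights alone, in decimal GB.
    Ignores KV cache, activations, and runtime overhead, which
    all add more on top."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 70B model at 4-bit quantization needs ~35 GB for weights alone,
# far over a 12 GB card; a 7B model at 4 bits (~3.5 GB) fits easily.
print(weight_gb(70, 4))   # 35.0
print(weight_gb(7, 4))    # 3.5
```

This is why a 70B Q4 model on a 12 GB card ends up mostly offloaded to system RAM, and token generation crawls.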

Just going to throw in some other interesting things -
Suno.ai - for music. YMMV; sometimes it's shockingly good, a lot of the time it's meh. But it's 2nd-generation tech, so still pretty new.
youai.ai - a chat interface that isn't special, but it's a free way of comparing models for yourself - currently, only GPT-4 (1106 and Turbo) costs money.
 