The Artificial Intelligence Problem (or not)

Board and Seize

Marine Recon
Verified SOF
Joined
Jan 29, 2013
Messages
441
Do we want a specific thread for this topic @Board and Seize ? I'm very lost, so I'll leave it to you.
Haha, well, the previous exchange between @Florida173 and myself was a bit of a sidetrack to the intended topic. I'm happy to keep AI-ish stuff going here, and also to decipher some of the jargony stuff.

Back to the point of me resurrecting this thread: you've likely seen various "AI"*
text generators popping up in your news feed over the past two years or so.

The first big one that hit the news was GPT-2, which, given a text prompt, generates new text to follow it. This is what you can interact with at Talk to Transformer. Under the hood it's a specialized kind of ML (Machine Learning)** structure called a Neural Net (NN) - specifically a Transformer-based language model (hence the site's name). A different flavor of NN worth knowing is the Generative Adversarial Network (GAN). GANs show up across almost the full spectrum of current and cutting-edge AI, from computer-vision object detection and defeat (all those captchas you do are your contribution to creating/munging the datasets used to train these networks) to twitter bots, deepfakes, and more.

Though perhaps unintuitive, prediction and detection/recognition are fundamentally the same task - just with opposite valence (by analogy, a speaker can be run in reverse as a microphone: sound production vs. sound detection). So you take two NNs and train them on the same dataset (you show them a bunch of examples of 'right' and 'wrong' at whatever task - say recognizing objects in photos, as with the recent captchas). Then you set them against each other: one is tasked to predict/produce, the other to filter/detect. Then you recursively train each on the results of that, and they get successively better. Self-play training works the same way - think of DeepMind's Alpha___ game AIs. That's more approachable, because you just set them to play against each other, record the results, then fast-forward and have them do this thousands, millions, or billions of times. The result of the 'learning' is basically an insane number of variables (in the algebraic/CS sense) and a weight for each of them.
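The produce-vs-detect loop described above can be sketched as a toy GAN. Everything here is invented for illustration (1-D "data", a one-weight linear generator, a logistic-regression discriminator, hand-derived gradients) - real GANs use deep networks and an autodiff framework - but the adversarial structure is the same: the discriminator learns to tell real samples from fakes, and the generator learns to fool it.

```python
import math
import random

random.seed(0)

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# "Real" data: samples from a normal distribution centered at 4.
def real_sample():
    return random.gauss(4.0, 1.0)

# Generator: turns noise z into a sample, x = g_w * z + g_b.
g_w, g_b = 1.0, 0.0
# Discriminator: logistic model, D(x) = sigmoid(d_w * x + d_b).
d_w, d_b = 0.1, 0.0

lr = 0.02
for step in range(8000):
    # --- Train the discriminator: push D(real) toward 1, D(fake) toward 0.
    x_real = real_sample()
    z = random.gauss(0.0, 1.0)
    x_fake = g_w * z + g_b
    p_real = sigmoid(d_w * x_real + d_b)
    p_fake = sigmoid(d_w * x_fake + d_b)
    # Cross-entropy gradient for a logistic unit is (prediction - label) * input.
    d_w -= lr * ((p_real - 1.0) * x_real + (p_fake - 0.0) * x_fake)
    d_b -= lr * ((p_real - 1.0) + (p_fake - 0.0))

    # --- Train the generator: push D(fake) toward 1 (fool the critic).
    z = random.gauss(0.0, 1.0)
    x_fake = g_w * z + g_b
    p_fake = sigmoid(d_w * x_fake + d_b)
    grad_x = (p_fake - 1.0) * d_w  # backprop through the discriminator
    g_w -= lr * grad_x * z
    g_b -= lr * grad_x

fake_mean = sum(g_w * random.gauss(0, 1) + g_b for _ in range(1000)) / 1000
print(f"generated mean ~ {fake_mean:.2f} (real data mean is 4.0)")
```

After training, the generator's output distribution has drifted toward the real data - neither network was ever told "the answer is 4"; it emerges from the adversarial game.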

Okay, back to GPT-n. This is a NN trained on a huge text corpus - for GPT-2, the text of millions of web pages that reddit users had linked to and upvoted. That is a staggering amount of data. The model then has some large number of 'parameters' - basically the variables and their weights - and these are pretty blackboxed, in that a human can't really see what any individual parameter 'means' or correlates to.

GPT-2 dropped in early 2019 with ~1.5 billion parameters. This was the big splash in the news, and OpenAI didn't actually release the full model at first, for fear of what shenanigans would occur. They released successively 'larger' models (more params) over the year, and then we started seeing stuff like TtT and AI Dungeon.

Then in May of 2020, GPT-3 came out - basically a much larger, more refined version, at something like 175 billion params. Since the model is effectively just those params and their relationships, it's way too large to run on typical home equipment. Most of the ways to interact with these models (AI Dungeon and TtT again, as well as others) use cloud computing resources and give you a terminal via webpage.
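The "too large for home equipment" claim checks out with napkin math. Assuming each parameter is stored as a 16-bit (2-byte) float - a common choice, though the actual storage format isn't stated above:

```python
# Back-of-the-envelope memory footprint for a 175-billion-parameter model,
# assuming 2 bytes (fp16) per parameter.
params = 175_000_000_000
bytes_per_param = 2
gib = params * bytes_per_param / 1024**3
print(f"~{gib:,.0f} GiB just to hold the weights")
```

That's on the order of 326 GiB of memory just to load the weights, before doing any computation - hence the cloud-hosted terminals.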

Phew!

So the new thing those OpenAI maniacs just dropped today is an evolution of these text generation/'understanding' AIs that creates new images - as in draws them. You can write a random description like "sketch of a russian dancing bear with a scorpion tattoo" and it will generate an image that matches. Even from this first glimpse, it is shockingly good.

This took me entirely too long to respond, sorry.

*I get really pedantic over the term Artificial Intelligence, and hold to a strict definition. Elsewhere this is called Strong AI, General AI, or True AI. These terms are a reaction to marketing bs that calls every instance of ML "AI". To be clear, there is no known extant instance of AI. It is a future possibility.

**Machine Learning is a more accurate umbrella term for all of the crap that gets called AI today: any program that uses one of a wide range of statistical analyses (typically regressions) to separate signal from noise, or to pull useful info out of a heap of uninteresting or confusing info.
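That "separate signal from noise with a regression" idea fits in a few lines. This sketch hides a known relationship (y = 3x + 2, made up for the example) under random noise and recovers it with ordinary least squares - the humblest member of the ML family:

```python
import random

random.seed(1)

# Hidden "signal": y = 3x + 2, buried under Gaussian noise.
xs = [i / 10 for i in range(100)]
ys = [3 * x + 2 + random.gauss(0, 1) for x in xs]

# Ordinary least squares: the closed-form fit of a line to noisy points.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(f"recovered y ~ {slope:.2f}x + {intercept:.2f}")
```

The fit lands close to the hidden 3 and 2 despite every individual data point being corrupted - which is the whole trick, scaled up a billionfold in the big models.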


edit: Oh yeah AI Dungeon! This uses the GPT-2 (or GPT-3 if you pay for it) model to act as a reactive text-based story-teller or game master. It gives you a prompt, and then you start writing, narrating your actions/reactions/thoughts/etc. And then AI Dungeon tells you what happens next. It is freaking awesome, but it isn't perfect, and garbage-in-garbage-out. It can get stuck in a loop, and if you nudge it towards teenage fantasy, it will run with it (all that reddit training yo!).
 

Florida173

SOF Support
Joined
Oct 14, 2008
Messages
1,948
Location
NCR
Well boys and girls, worry not. We can melt the AI brains by dancing!


(slow build/burn on this one, if you're into this stuff, it's totally worth a couple minutes of your time)


Have you played much with openCV? I have my kit coming from a kickstarter campaign soon.

OpenCV AI Kit






 

Board and Seize

Marine Recon
Verified SOF
Joined
Jan 29, 2013
Messages
441
Have you played much with openCV? I have my kit coming from a kickstarter campaign soon.

OpenCV AI Kit

Not really, I've poked at it a bit, and I messed a bit with some open Kinect packages for 3D scanning. I need to stop watching other folks get their hack on, and do some myself!

Any plans for your kit? I've only just cracked the lid on the hardware scene, with Arduino and Raspberry, but it's amazing how deep you can go these days with 'custom' off the shelf boards and components. I'd love to see some results from your experimentation!
 

Florida173

SOF Support
Joined
Oct 14, 2008
Messages
1,948
Location
NCR
Not really, I've poked at it a bit, and I messed a bit with some open Kinect packages for 3D scanning. I need to stop watching other folks get their hack on, and do some myself!

Any plans for your kit? I've only just cracked the lid on the hardware scene, with Arduino and Raspberry, but it's amazing how deep you can go these days with 'custom' off the shelf boards and components. I'd love to see some results from your experimentation!

I'll be prototyping a few ideas with commercial off-the-shelf solutions for some clients: typical SDR/LPR sensing approaches integrated with other sensors.
 

Xenophon

Marine
Verified Military
Joined
Apr 22, 2020
Messages
17
Resurrection because I've probably read too much fiction and not enough academic research papers on this topic.

Any recommendations for reading on the connection between ML and communications? Here I define "communication" as: disparate and discrete pieces of information which must be formulated into a coherent thought, relayed by any medium, and received with enough of its core ideas intact that it can either (1) be deconstructed into those original disparate and discrete pieces or (2) be forwarded to another target without compromising the integrity of the original thought. Technology in this area blurs the border between "helping you communicate your actual thoughts" and "telling you what to think, then communicating that half-original thought 'for' you" - which could be deeply troubling if someone puts enough energy behind it. At what point does this migrate from a somewhat useful, but mostly irritating, tool into a middleman that compromises the integrity of what we say to each other?

TLDR: the scariest "AI/ML" thing in my mind is autocorrect.
 

Andoni

Verified Military
Joined
Jun 3, 2017
Messages
438
Location
CONUS
Any recommendations for the connection between ML and communications,
Not sure if you're looking for a specific application or a COTS technology rec (which I don't have the foggiest about), but here's a white paper from 2019, and a query of "machine learning artificial intelligence darpa" kicked up some interesting results.

Google Scholar

*Okay, so I'm going to add something: behavioral prompting of humans using "AI" (if the end user does x, the technology does y, prompting or conditioning the end user toward behavior AB) can simultaneously drive learning in whatever program is running that sequence, using the same theory/formula/code on the backend.
An example of that backend learning is the Google keyboard.

I was an alpha tester for a mobile keyboard beginning in the mid-aughts, because... anyone on the bleeding edge of technology is a masochist.

The keyboard has come a long way since '07-'08. My copy now has a huge vocabulary that is very specific to me - a running example of training ML. At first, for years, it was very, very slow to learn; now it's very fast. That isn't unique to my keyboard, and the speedup didn't come from manually adding new dictionary words. It comes from backend learning the end user doesn't see.
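The core of that "it learns my vocabulary" behavior can be mimicked with nothing but frequency counting. This toy sketch (the `ToyKeyboard` class and its sentences are invented for illustration; real keyboards use far richer models) records which word the user typed after each previous word and suggests the most frequent follower:

```python
from collections import Counter, defaultdict

# A toy predictive keyboard: it "learns" by counting which word the user
# typed after each previous word, then suggests the most frequent follower.
class ToyKeyboard:
    def __init__(self):
        self.followers = defaultdict(Counter)

    def learn(self, sentence):
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.followers[prev][nxt] += 1

    def suggest(self, word):
        counts = self.followers.get(word.lower())
        if not counts:
            return None  # never seen this word: no suggestion yet
        return counts.most_common(1)[0][0]

kb = ToyKeyboard()
kb.learn("meet me at the range")
kb.learn("meet me at the armory")
kb.learn("see you at the range")
print(kb.suggest("the"))  # "range" was seen twice after "the", "armory" once
```

Every sentence typed nudges the counts, so the suggestions drift toward one specific user's habits - which is exactly the personalized-vocabulary effect described above, minus the cleverness.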

Not sure if this response is woefully simplistic for what you're asking, given your specific definition of communication - or misses the mark entirely and isn't what you're asking at all. In that case, just disregard.

The topic is super interesting.
 

Andoni

Verified Military
Joined
Jun 3, 2017
Messages
438
Location
CONUS
The name of the article at the link I posted above is "DARPA's Explainable Artificial Intelligence (XAI) Program."

The paper is a bit dry, but the graphics are easy to understand. I didn't read the entire thing.

(Saying this only because demystifying is important for understanding, and because it's the Internet, it can come across otherwise.)

Besides that, the paper goes on to explain the evaluation process...

"XAI program’s independent government evaluator is the Naval Research Laboratory. For Phase 1, the laboratory (with IHMC’s help) prepared an evaluation framework for the TA1 teams to use as a template for..."

Looks interesting.
 