The Artificial Intelligence Problem (or not)

Well boys and girls, worry not. We can melt the AI brains by dancing!


(slow build/burn on this one, if you're into this stuff, it's totally worth a couple minutes of your time)


Have you played much with OpenCV? I have my kit coming from a Kickstarter campaign soon.

OpenCV AI Kit


Not really, I've poked at it a bit, and I messed a bit with some OpenKinect packages for 3D scanning. I need to stop watching other folks get their hack on, and do some myself!

Any plans for your kit? I've only just cracked the lid on the hardware scene, with Arduino and Raspberry, but it's amazing how deep you can go these days with 'custom' off the shelf boards and components. I'd love to see some results from your experimentation!
 

I'll be prototyping a few ideas with commercial off-the-shelf solutions for some clients: typical SDR/LPR sensing approaches integrated with other sensors.
 
Resurrection because I've probably read too much fiction and not enough academic research papers on this topic.

Any recommendations for the connection between ML and communications, where the definition of "communication" is "disparate and discrete pieces of information which must be formulated into a coherent thought, relayed by any medium, and received with at least enough of its core ideas intact that it can be either (1) deconstructed into those original disparate and discrete pieces or (2) forwarded to another target without compromising the integrity of the original thought"? Technology in this area blurs the border between "helping you communicate your actual thoughts" and "telling you what to think, and then communicating that half-original thought 'for' you," in a way that could be deeply troubling if someone puts enough energy behind it. At what point does this migrate from a somewhat useful, but mostly irritating, tool to a middleman that compromises the integrity of what we say to each other?

TLDR: the scariest "AI/ML" thing in my mind is autocorrect.
 
Any recommendations for the connection between ML and communications,
Not sure if you're looking for a specific application or COTS technology rec, which I don't have the foggiest about-- but here's a white paper from 2019, and a query of "machine learning artificial intelligence darpa" kicked up some interesting results.

Google Scholar

*Okay, so I'm going to add something: the intersection of human behavioral prompting with artificial intelligence (if this occurs (end user does X), then this happens (technology does Y), prompting or conditioning the end user to do AB) can simultaneously drive AI learning in whatever program is running that sequence, by using the same theory/formula/code on the backend...
An example (of that last point, using the same theory/formula/code on the backend) is the Google keyboard.

I was an alpha tester for a mobile keyboard beginning in the mid-aughts, because anyone on the bleeding edge of technology is a masochist.

The keyboard has come a long way since '07-08, but one thing that hasn't changed is that my keyboard has a huge, random vocabulary that is very specific to me, and it's an example of training AI. At first, for years, it was very, very slow to learn. Now it's very fast. This isn't unique to my keyboard. And the increase in speed wasn't from manually adding new dictionary words; it's from backend learning the end user doesn't see.
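A toy sketch of the kind of backend learning described above (all names here are hypothetical; real keyboards use far more sophisticated on-device language models): a per-user word frequency table that biases suggestions toward words this particular user actually types.

```python
from collections import Counter


class PersonalDictionary:
    """Toy model of a keyboard's per-user vocabulary learning.

    Every word the user commits is counted, and suggestions are ranked
    by how often *this* user has typed them, so rare-but-personal words
    eventually outrank generic ones -- without any manual dictionary edits.
    """

    def __init__(self):
        self.counts = Counter()

    def observe(self, text: str) -> None:
        # Learn silently from everything the user actually types.
        self.counts.update(text.lower().split())

    def suggest(self, prefix: str, k: int = 3) -> list[str]:
        # Rank completions of `prefix` by personal usage frequency.
        prefix = prefix.lower()
        matches = [w for w in self.counts if w.startswith(prefix)]
        return sorted(matches, key=lambda w: -self.counts[w])[:k]


kb = PersonalDictionary()
kb.observe("opencv kit arrived")
kb.observe("opencv rocks")
kb.observe("open source rocks")
print(kb.suggest("op"))  # ['opencv', 'open'] -- the personal favorite wins
```

The point of the sketch is the feedback loop: the same keystrokes that the prompting logic shapes are also the training data, which is why the learning feels invisible and why it speeds up as the personal corpus grows.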

Not sure if this response is woefully simplistic for what you're asking, given your specific definition of communication, or misses the mark entirely and isn't what you're asking at all; if so, just disregard.

The topic is super interesting.
 
The name of that article on the link I posted above is "DARPA’s Explainable Artificial Intelligence Program," (XAI)

The paper is a bit dry, but the graphics are easy to understand. I didn't read the entire thing.

(Saying this only because demystifying is important for understanding, and because it's the Internet, it could come across otherwise.)

Besides that, the paper goes on to explain the evaluation process...

"XAI program’s independent government evaluator is the Naval Research Laboratory. For Phase 1, the laboratory (with IHMC’s help) prepared an evaluation framework for the TA1 teams to use as a template for..."

Looks interesting.
 
"Nobody should think auto-complete, even on steroids, is conscious," Gary Marcus, founder and CEO of Geometric Intelligence, said to CNN Business.

That's wild. That guy worked at Alphabet. The name conjures up a visual of a child's nursery with a real expensive coffee bar. What do they expect? And why did they use the word "sentient"? Had to look that up.
 
People seem to be pretty good here about not arguing, but maybe I just don't notice. I've heard that Star Trek has influenced society in specific ways. It's neat to have an example.
 