Freethought Forum > The Marketplace > The Sciences
Neural Networks (aka Inceptionism)
#51 - 06-15-2016, 05:28 PM - lpetrich

The failure to create human-level AI has been a great disappointment for me. :(

But it looks like we have at least come part of the way there.

Artificial intelligence has had two main approaches: top-down, by explicit specification, and bottom-up, by learning. The top-down approach seemed to be the way to go at first, but the bottom-up approach has increased in popularity over the last few decades because of ever-increasing computing power.

What Google is doing with those images is essentially bottom-up learning. Those surreal dog-face ones were likely made with a module trained to recognize dog faces, so that module tends to see a dog face wherever it looks.
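Roughly, the trick is gradient ascent on the image itself: nudge the pixels so that some layer of a trained network fires harder. A minimal sketch in PyTorch; the pretrained GoogLeNet, the layer choice, and the step size are all my own illustrative assumptions, not Google's actual code or settings.

Code:
# Sketch of the DeepDream idea: maximize one layer's activations w.r.t. the input image.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out))

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise (or a photo)
opt = torch.optim.Adam([img], lr=0.05)

for step in range(100):
    opt.zero_grad()
    model(img)                            # the hook fills activations["feat"]
    loss = -activations["feat"].norm()    # ascend: minimize the negative
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)                  # keep pixels in a displayable range

A module "trained on dog faces" would just be a layer (or class output) that responds to dogs; maximizing it makes dogs appear everywhere in the image.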
#52 - 06-15-2016, 06:30 PM - lisarea

I've done some of the top-down stuff, and I don't think anyone has ever seriously believed that the prescriptive approach in itself was going to result in strong AI. It was, and still is, used in narrow AIs, though.

I get the impression sometimes that a lot of people think there's going to be some big invention or discovery and all of a sudden, we'll have a single, fully functioning strong AI announced in all the headlines or something, like TA-DA singularity!, but it's probably not going to happen like that. People have been working on this for a long, long time, building on previous work and advancements. I am not all that up on how learning systems work and what progress has been made, but they are cumulative, and they do seem, at least to me, to be accelerating very quickly right now. I mean, yeah, recognizing and mimicking dog faces isn't a super-important thing or anything, but it's really mindblowing that they can do that. (Fuck Google, still. Doesn't make up for everything else, just so we're all clear on that.)

I suspect we're a lot further off and also a lot closer than most people realize. And don't discount some of the robust narrow AIs that are already out there. Everyone's distracted fretting about philosophimical questions about consciousness and stuff, but it's probably narrow AIs that are going to destroy everything you know and love.
#53 - 06-15-2016, 09:58 PM - Ensign Steve

FYI, I'm taking a two-day "crash course" on TensorFlow at work tomorrow and Friday. I don't really have anything going on this weekend, so I'll probably make the singularity happen Saturday or Sunday. Or I'll get caught up on Game of Thrones. We'll see what I'm in the mood for.
#54 - 06-16-2016, 05:28 AM - lpetrich

lisarea, that's likely correct. But a problem with the bottom-up approach is that it can be difficult to interpret what the learned parameter values mean.

The history of AI is rather interesting.
History of artificial intelligence - Wikipedia
Timeline of artificial intelligence - Wikipedia
Progress in artificial intelligence - Wikipedia
Applications of artificial intelligence - Wikipedia

AI was speculated about for centuries, and I think the culmination of those speculations was Alan Turing's classic 1950 paper, Computing Machinery and Intelligence.

He proposed the Turing Test in it: in present-day terms, it asks whether it is possible to write a chatbot whose responses are indistinguishable from a human interlocutor's.

Something like this:

Quote:
Q: Please write me a sonnet on the subject of the Forth Bridge.
A: Count me out on this one. I never could write poetry.
Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.
Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.
The addition result is incorrect: it's really 105721. Alan Turing was imagining a machine simulating human thought processes, erroneous arithmetic and all.

The chess notation is old descriptive notation, something that's gone out of style. Nowadays, we use algebraic notation: I have my king at e1 and no other pieces. You have your king at e3 and your rook at h8, and also no other pieces. What move do you make? The answer: Rh1 checkmate.

Alan Turing also considered several counterarguments to the possibility of human-level AI, like a theological one about souls, "heads in the sand", Gödel's theorem, informality of behavior, etc.

-

Not long after he wrote his article, actual AI programming started. In 1951, some programmers wrote programs for early computers that could play checkers and chess.

In 1956, a famous conference on AI was held at Dartmouth College in New Hampshire. The proposal for it included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it".

AI had some successes over the next 18 years, like solving high-school-algebra word problems and understanding what goes on in "microworlds", like sets of blocks that can be moved around and stacked.

Some AI advocates back then were optimistic enough to expect human-level AI within a few decades.

-

But in 1974 came the first "AI winter": funding agencies became reluctant to finance much work in AI because the research was not delivering anything close to its more optimistic promises. Part of the problem was that those optimists had grossly underestimated how difficult AI would be. Another part was the enormous computing power needed for some applications, like artificial vision, and yet another was the huge quantity of data that makes up what is "common sense" for us. Both demands were far beyond the computers of the 1950's and 1960's, though they are much less far beyond today's.

-

Then in 1980, AI funding came back, with such successes as expert systems and artificial neural networks.

But then, in 1987, another AI winter, another funding drought.

Then in 1993, AI came back again, and it has continued to the present without another AI winter.

This time around, there are at least two differences:
  • AI has proven to be much more useful than previously.
  • AI researchers often don't call it AI.
#55 - 06-16-2016, 05:56 AM - lpetrich

Quote:
Originally Posted by JoeP
Interesting.

I've done something similar by creating statistical models. Essentially Markov-chain models. Let's say that I want to generate random words that resemble the words in some corpus, like a dictionary.

A simple approach would be to make a table of the frequency of each letter, and include an end-of-word symbol with the letters. One then generates a word by choosing a letter at random according to this frequency table, and continuing until one gets an end-of-word symbol.

That tends to produce nonsense, so a better approach is to look back to the previous letter, and construct a current-letter frequency table for each previous letter found. One also needs a beginning-of-word symbol here. Then, when generating a word, one looks back to the previous letter and uses its frequency table. It generates more reasonable-looking words, because it makes vowels tend to follow consonants and vice versa, and also gets vowel and consonant clusters right.

One can extend this approach to look back more than one letter, though then one ought to build current-letter frequency tables for each partial sequence of previous letters as well as each complete sequence. That takes care of positions near the start of a word, where one cannot look back the full distance.
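In code, the whole scheme might look like this minimal sketch; the toy corpus here is a stand-in for the dictionary, and the empty context plays the role of the beginning-of-word symbol:

Code:
import random
from collections import Counter, defaultdict

def build_tables(words, k):
    """One letter-frequency table per context of up to k previous letters."""
    tables = defaultdict(Counter)
    for w in words:
        w = w + "$"                        # "$" is the end-of-word symbol
        for i, ch in enumerate(w):
            context = w[max(0, i - k):i]   # shorter contexts near the word start
            tables[context][ch] += 1
    return tables

def generate(tables, k):
    out = ""
    while True:
        context = out[-k:] if len(out) >= k else out
        letters, weights = zip(*tables[context].items())
        ch = random.choices(letters, weights=weights)[0]
        if ch == "$":
            return out
        out += ch

corpus = ["paunch", "quadrille", "unsolvable", "collie", "telescope"]
tables = build_tables(corpus, k=2)
print([generate(tables, 2) for _ in range(10)])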

Here are some results. An asterisk (*) marks a generated word that already occurs in the corpus used to build the frequency tables.

Lookback: 0
oa
olsrfipioiomnotuegonrryeean
oo
e *
rueampruecit
eaetnymicrkoraeoeefalina
igcsaa
aesoce
e *
itoi

Lookback: 1
pa *
ontolionomminntuleriroukean
pm
c *
qulanorulbis
dagun
medrioragndelanera
heduca
betice
c *

Lookback: 2
paver *
pershomless
tensly
undal
pote *
quadiroural
unagon
meess
palteleallocyped
abucetoff

Lookback: 3
paunirreligilling
telliopterapy
pse
quadrinshakroman
mulic
intaghainamend
hebracylistely
unsol
collingly *
lusia

Lookback: 4
cacoperilability
gant *
poodlessness
gilt *
nutcraft
archite
moraceously
antproof *
overdish
disturbed *

Lookback: 5
paunch *
enneahedron *
denotoxicator
unwithdraw
delphian *
multiramolendidad
nazaretta
barnaby *
unsolvability
pretractedly


I've also created a similar Random Sentence Maker.
#56 - 06-16-2016, 12:34 PM - lpetrich

One of the first proposed AI applications was translating natural languages, but that turned out to be much more difficult than expected. One can come up with lots of rules, but there are usually lots of exceptions to them, so an explicit rule-based approach is very difficult.

An alternative is statistical translation, but that requires large amounts of already-translated text. Google uses text from the UN, the EU, and other such sources.
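As I understand it, the classic formulation is the noisy-channel model: given a foreign sentence $f$, pick the English sentence

$$\hat{e} = \arg\max_{e} P(e \mid f) = \arg\max_{e} P(f \mid e)\,P(e),$$

where the language model $P(e)$ is estimated from ordinary English text and the translation model $P(f \mid e)$ from the already-translated parallel text.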

It also involves a *lot* of number crunching, a *lot* more than was feasible in the early decades of computing.

Statistical machine translation - Wikipedia -- used by Google and Bing.

Google Translate - Wikipedia -- it now supports 103 languages, with 14 more in the works.

Bing - Wikipedia -- its translator is no slouch either.
#57 - 06-16-2016, 12:40 PM - lpetrich

Google's image search has an interesting feature: search for similar-looking images. That must require some artificial vision, like preparing a sort of summary of an image for quick initial comparisons.
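One classic way to build such a summary is a perceptual "average hash"; this is an illustrative guess at the general idea, not anything Google has documented. A minimal sketch using the Pillow library:

Code:
from PIL import Image

def average_hash(path, size=8):
    """Shrink to size x size grayscale, threshold each pixel at the mean -> 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

def hamming(h1, h2):
    """Count differing bits; a small distance suggests visually similar images."""
    return bin(h1 ^ h2).count("1")

# e.g. hamming(average_hash("a.jpg"), average_hash("b.jpg")) < 10 -> likely similar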
#58 - 10-26-2016, 02:18 AM - But

Quote:
Originally Posted by But
This dog-face-eyes stuff is getting boring. It's now time to move on to pornographic images.
open_nsfw
#59 - 10-26-2016, 07:00 AM - thedoc

The real images were interesting, the altered images were fucked up, get over it.
#60 - 10-26-2016, 08:43 AM - erimir

I like how the concert series replaces the musicians with giant cocks. Seems appropriate.
#61 - 10-26-2016, 03:37 PM - But

Quote:
Originally Posted by thedoc
The real images were interesting, the altered images were fucked up, get over it.
What real images?
#62 - 03-18-2018, 07:10 AM - erimir

So now there's a website where you can make your own images using neural networks based on Deep Dream.

One of the options is to give it a "style" image, and it will recreate a target image in the style (roughly speaking) of the chosen image, or of one of some preset styles; a rough sketch of how the style matching works follows the attached examples.

For example, here's the :ff: logo in three styles:
Attached images: dream_9vca3trjrpi.jpg, dream_5moispppq3p.jpg, 0a2189520e0b424877f1e11fbe6033b93c13c9be.jpg
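The style matching works (roughly) by comparing correlations between a CNN's feature channels, the so-called Gram matrices, between the style image and the image being generated. A bare-bones sketch, assuming a pretrained VGG-16, arbitrary layer choices, and with the content-loss term that real versions add omitted for brevity:

Code:
# Sketch of Gram-matrix style matching (Gatys et al.); layers and weights are guesses.
import torch
import torchvision.models as models

vgg = models.vgg16(weights="DEFAULT").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    """Channel-by-channel correlation matrix of a feature map."""
    b, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

def features(x, layers=(3, 8, 15)):        # a few conv layers; arbitrary choice
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out.append(x)
    return out

style = torch.rand(1, 3, 224, 224)          # stand-ins for real, preprocessed images
content = torch.rand(1, 3, 224, 224)
img = content.clone().requires_grad_(True)  # start from the target image
opt = torch.optim.Adam([img], lr=0.02)

style_grams = [gram(f).detach() for f in features(style)]
for step in range(200):
    opt.zero_grad()
    loss = sum((gram(f) - g).pow(2).sum()
               for f, g in zip(features(img), style_grams))
    loss.backward()
    opt.step()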
#63 - 03-18-2018, 01:23 PM - JoeP

Dammit, there goes the rest of the day :D
#64 - 04-19-2018, 06:24 PM - JoeP

Imgur

Embedding might work:

#65 - 04-19-2018, 07:33 PM - BrotherMan


:hypnotoad:
#66 - 04-19-2018, 07:44 PM - lisarea

I think it's a selfie.
#67 - 04-19-2018, 08:00 PM - JoeP

Quote:
Originally Posted by BrotherMan


:hypnotoad:
:fixed:
#68 - 06-25-2018, 06:23 PM - JoeP

Trip on LSD without the LSD! Have fun.
#69 - 07-05-2018, 02:03 PM - Dragar

Interesting paper.

Quote:
Despite the success of neural networks (NNs), there is still a concern among many over their "black box" nature. Why do they work? Here we present a simple analytic argument that NNs are in fact essentially polynomial regression models. This view will have various implications for NNs, e.g. providing an explanation for why convergence problems arise in NNs, and it gives rough guidance on avoiding overfitting. In addition, we use this phenomenon to predict and confirm a multicollinearity property of NNs not previously reported in the literature. Most importantly, given this loose correspondence, one may choose to routinely use polynomial models instead of NNs, thus avoiding some major problems of the latter, such as having to set many tuning parameters and dealing with convergence issues. We present a number of empirical results; in each case, the accuracy of the polynomial approach matches or exceeds that of NN approaches. A many-featured, open-source software package, polyreg, is available.
#70 - 07-05-2018, 02:16 PM - JoeP

Do we need to understand
Quote:
polynomial regression models
convergence problems
overfitting
multicollinearity (:whoa:)
to have a chance of understanding this 28-page paper?
#71 - 07-05-2018, 03:20 PM - Limoncello

:prettycolors:
#72 - 07-06-2018, 04:03 AM - But

Quote:
Originally Posted by Dragar
Interesting paper.
Despite the success of neural networks (NNs), there is still a concern among many over their "black box" nature. Why do they work? Here we present a simple analytic argument that NNs are in fact essentially polynomial regression models.
Very, very simple indeed, and everything is "essentially" everything else. Too much woolly prose for my taste. Equations are suspiciously absent.

Quote:
As a toy example, take the activation function to be a(t) = t^2. Then outputs of that first layer will be quadratic functions of u and v.
No shit.

Toy example = I can't rigorously show anything beyond that, but I have to publish this paper, so there you go.

Quote:
Now let’s turn to more realistic activation functions, we note that they themselves can usually be approximated by polynomials.
Holy shit. Seriously? So in physics, everything is a harmonic oscillator because the potential can be approximated by a parabola.
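For what it's worth, that sort of approximation is a one-liner to check numerically; the interval and degree here are arbitrary choices of mine:

Code:
import numpy as np

x = np.linspace(-2, 2, 401)
p = np.polyfit(x, np.tanh(x), deg=5)               # least-squares polynomial fit
err = np.abs(np.polyval(p, x) - np.tanh(x)).max()
print(f"max |tanh - poly5| on [-2,2]: {err:.4f}")  # prints a small error, order 0.01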

Quote:
For general activation functions and implementations, we can at least say that the function is at close to a polynomial, by appealing to the famous Stone-Weierstrass Theorem [Katznelson and Rudin(1961)], which states that any continuous function on a compact set can be approximated uniformly by polynomials.
:gah:

Quote:
The informal arguments above could be made mathematically rigorous.
:orly:

Quote:
In that manner, we can generate polynomials which are dense in the space of regression functions. But let’s take a closer look.
Yeah, we could make this rigorous, but instead let's take a closer look. :yup:

That's what I hate about a lot of computer science papers: a lot of prose with lots of excuses, and then we run a program until we get nice numbers and dump the logs into the paper.
#73 - 07-06-2018, 03:28 PM - Dragar

Quote:
Originally Posted by But
So in physics, everything is a harmonic oscillator because the potential can be approximated by a parabola.
Sidney Coleman once remarked that the career path of a theoretical physicist is solving the simple harmonic oscillator in ever-increasing levels of abstraction. :ffwink:

It reminded me a lot of this much earlier paper, which you may prefer.

https://projecteuclid.org/euclid.ss/1177010638