Can a neural network compute $y = x^2$?


In the spirit of the famous TensorFlow Fizz Buzz joke and the XOR problem, I started to wonder: is it possible to design a neural network that implements the function $y = x^2$?

Given some representation of a number (e.g. as a binary vector, so that the number 5 is represented as [1,0,1,0,0,0,0,...]), the neural network should learn to return its square, 25 in this case.

If I could implement $y=x^2$, I could probably implement $y=x^3$ and, in general, any polynomial of $x$; then, with Taylor series, I could approximate $y=\sin(x)$, which would solve the Fizz Buzz problem: a neural network that can find the remainder of a division.

Clearly, the linear part of an NN alone cannot perform this task, so if multiplication is possible at all, it would have to happen thanks to the activation function.

Can you suggest any ideas or reading on the subject?
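For concreteness, the LSB-first binary encoding mentioned above can be sketched like this (the function name and the 8-bit width are illustrative choices, not part of any particular library):

```python
def to_bits(n: int, width: int = 8) -> list[int]:
    """Encode a non-negative integer as an LSB-first binary vector."""
    return [(n >> i) & 1 for i in range(width)]

print(to_bits(5))   # [1, 0, 1, 0, 0, 0, 0, 0]
print(to_bits(25))  # [1, 0, 0, 1, 1, 0, 0, 0]
```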










      machine-learning neural-network






asked yesterday by Boris Burkov
2 Answers


















Neural networks are also called universal function approximators, which is based on the universal approximation theorem. It states that:

    In the mathematical theory of artificial neural networks, the universal approximation theorem states that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $\mathbb{R}^n$, under mild assumptions on the activation function.

This means that an ANN with a nonlinear activation function can map the function relating the input to the output. The function $y = x^2$ can easily be approximated using a regression ANN.

You can find an excellent lesson here with a notebook example.

Also, because of this ability, an ANN can map complex relationships, for example between an image and its labels.
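As a quick sanity check of this claim, here is a minimal pure-Python sketch (my own illustration, not from the linked lesson; the 8-unit hidden layer, learning rate, and epoch count are arbitrary choices) that fits $y = x^2$ on $[-1, 1]$ with a one-hidden-layer tanh network trained by plain full-batch gradient descent:

```python
import math
import random

random.seed(0)

# Training data: y = x^2 sampled on the compact interval [-1, 1].
xs = [i / 20 - 1 for i in range(41)]          # 41 evenly spaced points
ys = [x * x for x in xs]

H = 8                                          # hidden units
w = [random.uniform(-1, 1) for _ in range(H)]  # input -> hidden weights
b = [random.uniform(-1, 1) for _ in range(H)]  # hidden biases
v = [random.uniform(-1, 1) for _ in range(H)]  # hidden -> output weights
c = 0.0                                        # output bias

def predict(x):
    return sum(v[j] * math.tanh(w[j] * x + b[j]) for j in range(H)) + c

lr = 0.1
for epoch in range(10000):
    # Accumulate full-batch gradients of the mean squared error.
    gw, gb, gv, gc = [0.0] * H, [0.0] * H, [0.0] * H, 0.0
    for x, y in zip(xs, ys):
        h = [math.tanh(w[j] * x + b[j]) for j in range(H)]
        err = sum(v[j] * h[j] for j in range(H)) + c - y
        g = 2 * err / len(xs)
        for j in range(H):
            gv[j] += g * h[j]
            gw[j] += g * v[j] * (1 - h[j] ** 2) * x
            gb[j] += g * v[j] * (1 - h[j] ** 2)
        gc += g
    for j in range(H):
        w[j] -= lr * gw[j]
        b[j] -= lr * gb[j]
        v[j] -= lr * gv[j]
    c -= lr * gc

mse = sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(mse)  # should be small on the training interval
```

Note that this only demonstrates the fit on the compact training interval; extrapolation beyond it is a different story.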






answered yesterday by Shubham Panchal
• Thank you very much, this is exactly what I was asking for! – Boris Burkov, yesterday

• Although true, it's a very bad idea to learn that. I fail to see where any generalization power would arise from. NNs shine when there's something to generalize, like CNNs for vision that capture patterns, or RNNs that can capture trends. – Jeffrey, yesterday



















I think the answer of @ShubhamPanchal is a little bit misleading. Yes, it is true that by Cybenko's universal approximation theorem, a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions, and hence $f(x)=x^2$, on compact subsets of $\mathbb{R}^n$, under mild assumptions on the activation function.

But the main problem is that the theorem has a very important limitation: the function needs to be defined on a compact subset of $\mathbb{R}^n$ (compact subset = closed and bounded). Why is this problematic? When training the function approximator, you will always have a finite data set, so you will approximate the function inside a compact subset of $\mathbb{R}^n$. But we can always find a point $x$ outside that subset for which the approximation will probably fail. That being said, if you only want to approximate $f(x)=x^2$ on a compact subset of $\mathbb{R}$, then we can answer your question with yes. But if you want to approximate $f(x)=x^2$ for all $x \in \mathbb{R}$, then the answer is no (I exclude the trivial case in which you use a quadratic activation function).

Side remark on Taylor approximation: always keep in mind that a Taylor approximation is only a local approximation. If you only want to approximate a function in a predefined region, then you should be able to use a Taylor series. But approximating $\sin(x)$ by its Taylor series evaluated at $x=0$ will give you horrible results for $x \to 10000$ if you don't use enough terms in the expansion.
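The side remark can be made concrete in a few lines of Python (the choice of 5 terms is arbitrary, for illustration): a 5-term Taylor polynomial of $\sin(x)$ around $0$ is excellent near the origin and useless far from it.

```python
import math

def taylor_sin(x: float, terms: int = 5) -> float:
    """Taylor polynomial of sin around 0: sum of (-1)^k x^(2k+1) / (2k+1)!."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

print(abs(taylor_sin(0.5) - math.sin(0.5)))    # tiny error near the expansion point
print(abs(taylor_sin(10.0) - math.sin(10.0)))  # enormous error far from it
```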






answered yesterday by MachineLearner
• Nice catch! "compact set". – Esmailian, yesterday

• Many thanks, mate! Eye-opener! – Boris Burkov, yesterday

• @Esmailian: Thank you :). – MachineLearner, yesterday










          Your Answer





          StackExchange.ifUsing("editor", function ()
          return StackExchange.using("mathjaxEditing", function ()
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          );
          );
          , "mathjax-editing");

          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "557"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );






          Boris Burkov is a new contributor. Be nice, and check out our Code of Conduct.









          draft saved

          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f47787%2fcan-a-neural-network-compute-y-x2%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown

























          2 Answers
          2






          active

          oldest

          votes








          2 Answers
          2






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          7












          $begingroup$

          Neural networks are also called as the universal function approximation which is based in the universal function approximation theorem. It states that :




          In the mathematical theory of artificial neural networks,
          the universal approximation theorem states that a feed-forward network
          with a single hidden layer containing a finite number of neurons can
          approximate continuous functions on compact subsets of Rn, under mild
          assumptions on the activation function




          Meaning a ANN with a non linear activation function could map the function which relates the input with the output. The function y = x^2 could be easily approximated using regression ANN.



          You can find an excellent lesson here with a notebook example.



          Also, because of such ability ANN could map complex relationships for example between an image and its labels.






          share|improve this answer









          $endgroup$








          • 2




            $begingroup$
            Thank you very much, this is exactly what I was asking for!
            $endgroup$
            – Boris Burkov
            yesterday






          • 2




            $begingroup$
            Although true, it a very bad idea to learn that. I fail to see where any generalization power would arise from. NN shine when there's something to generalize. Like CNN for vision that capture patterns, or RNN that can capture trends.
            $endgroup$
            – Jeffrey
            yesterday
















          7












          $begingroup$

          Neural networks are also called as the universal function approximation which is based in the universal function approximation theorem. It states that :




          In the mathematical theory of artificial neural networks,
          the universal approximation theorem states that a feed-forward network
          with a single hidden layer containing a finite number of neurons can
          approximate continuous functions on compact subsets of Rn, under mild
          assumptions on the activation function




          Meaning a ANN with a non linear activation function could map the function which relates the input with the output. The function y = x^2 could be easily approximated using regression ANN.



          You can find an excellent lesson here with a notebook example.



          Also, because of such ability ANN could map complex relationships for example between an image and its labels.






          share|improve this answer









          $endgroup$








          • 2




            $begingroup$
            Thank you very much, this is exactly what I was asking for!
            $endgroup$
            – Boris Burkov
            yesterday






          • 2




            $begingroup$
            Although true, it a very bad idea to learn that. I fail to see where any generalization power would arise from. NN shine when there's something to generalize. Like CNN for vision that capture patterns, or RNN that can capture trends.
            $endgroup$
            – Jeffrey
            yesterday














          7












          7








          7





          $begingroup$

          Neural networks are also called as the universal function approximation which is based in the universal function approximation theorem. It states that :




          In the mathematical theory of artificial neural networks,
          the universal approximation theorem states that a feed-forward network
          with a single hidden layer containing a finite number of neurons can
          approximate continuous functions on compact subsets of Rn, under mild
          assumptions on the activation function




          Meaning a ANN with a non linear activation function could map the function which relates the input with the output. The function y = x^2 could be easily approximated using regression ANN.



          You can find an excellent lesson here with a notebook example.



          Also, because of such ability ANN could map complex relationships for example between an image and its labels.






          share|improve this answer









          $endgroup$



          Neural networks are also called as the universal function approximation which is based in the universal function approximation theorem. It states that :




          In the mathematical theory of artificial neural networks,
          the universal approximation theorem states that a feed-forward network
          with a single hidden layer containing a finite number of neurons can
          approximate continuous functions on compact subsets of Rn, under mild
          assumptions on the activation function




          Meaning a ANN with a non linear activation function could map the function which relates the input with the output. The function y = x^2 could be easily approximated using regression ANN.



          You can find an excellent lesson here with a notebook example.



          Also, because of such ability ANN could map complex relationships for example between an image and its labels.







          share|improve this answer












          share|improve this answer



          share|improve this answer










          answered yesterday









          Shubham PanchalShubham Panchal

          35117




          35117







          • 2




            $begingroup$
            Thank you very much, this is exactly what I was asking for!
            $endgroup$
            – Boris Burkov
            yesterday






          • 2




            $begingroup$
            Although true, it a very bad idea to learn that. I fail to see where any generalization power would arise from. NN shine when there's something to generalize. Like CNN for vision that capture patterns, or RNN that can capture trends.
            $endgroup$
            – Jeffrey
            yesterday













          • 2




            $begingroup$
            Thank you very much, this is exactly what I was asking for!
            $endgroup$
            – Boris Burkov
            yesterday






          • 2




            $begingroup$
            Although true, it a very bad idea to learn that. I fail to see where any generalization power would arise from. NN shine when there's something to generalize. Like CNN for vision that capture patterns, or RNN that can capture trends.
            $endgroup$
            – Jeffrey
            yesterday








          2




          2




          $begingroup$
          Thank you very much, this is exactly what I was asking for!
          $endgroup$
          – Boris Burkov
          yesterday




          $begingroup$
          Thank you very much, this is exactly what I was asking for!
          $endgroup$
          – Boris Burkov
          yesterday




          2




          2




          $begingroup$
          Although true, it a very bad idea to learn that. I fail to see where any generalization power would arise from. NN shine when there's something to generalize. Like CNN for vision that capture patterns, or RNN that can capture trends.
          $endgroup$
          – Jeffrey
          yesterday





          $begingroup$
          Although true, it a very bad idea to learn that. I fail to see where any generalization power would arise from. NN shine when there's something to generalize. Like CNN for vision that capture patterns, or RNN that can capture trends.
          $endgroup$
          – Jeffrey
          yesterday












          3












          $begingroup$

          I think the answer of @ShubhamPanchal is a little bit misleading. Yes, it is true that by Cybenko's universal approximation theorem we can approximate $f(x)=x^2$ with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $mathbbR^n$, under mild assumptions on the activation function.




          But the main problem is that the theorem has a very important
          limitation
          . The function needs to be defined on compact subsets of
          $mathbbR^n$
          (compact subset = bounded + closed subset). But why
          is this problematic?
          . When training the function approximator you
          will always have a finite data set. Hence, you will approximate the
          function inside a compact subset of $mathbbR^n$. But we can always
          find a point $x$ for which the approximation will probably fail. That
          being said. If you only want to approximate $f(x)=x^2$ on a compact
          subset of $mathbbR$ then we can answer your question with yes.
          But if you want to approximate $f(x)=x^2$ for all $xin mathbbR$
          then the answer is no (I exclude the trivial case in which you use
          a quadratic activation function).




          Side remark on Taylor approximation: You always have to keep in mind that a Taylor approximation is only a local approximation. If you only want to approximate a function in a predefined region then you should be able to use Taylor series. But approximating $sin(x)$ by the Taylor series evaluated at $x=0$ will give you horrible results for $xto 10000$ if you don't use enough terms in your Taylor expansion.






          share|improve this answer










          New contributor




          MachineLearner is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
          Check out our Code of Conduct.






          $endgroup$








          • 1




            $begingroup$
            Nice catch! "compact set".
            $endgroup$
            – Esmailian
            yesterday










          • $begingroup$
            Many thanks, mate! Eye-opener!
            $endgroup$
            – Boris Burkov
            yesterday










          • $begingroup$
            @Esmailian: Thank you :).
            $endgroup$
            – MachineLearner
            yesterday















          3












          $begingroup$

          I think the answer of @ShubhamPanchal is a little bit misleading. Yes, it is true that by Cybenko's universal approximation theorem we can approximate $f(x)=x^2$ with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $mathbbR^n$, under mild assumptions on the activation function.




          But the main problem is that the theorem has a very important
          limitation
          . The function needs to be defined on compact subsets of
          $mathbbR^n$
          (compact subset = bounded + closed subset). But why
          is this problematic?
          . When training the function approximator you
          will always have a finite data set. Hence, you will approximate the
          function inside a compact subset of $mathbbR^n$. But we can always
          find a point $x$ for which the approximation will probably fail. That
          being said. If you only want to approximate $f(x)=x^2$ on a compact
          subset of $mathbbR$ then we can answer your question with yes.
          But if you want to approximate $f(x)=x^2$ for all $xin mathbbR$
          then the answer is no (I exclude the trivial case in which you use
          a quadratic activation function).




          Side remark on Taylor approximation: You always have to keep in mind that a Taylor approximation is only a local approximation. If you only want to approximate a function in a predefined region then you should be able to use Taylor series. But approximating $sin(x)$ by the Taylor series evaluated at $x=0$ will give you horrible results for $xto 10000$ if you don't use enough terms in your Taylor expansion.






          share|improve this answer










          New contributor




          MachineLearner is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
          Check out our Code of Conduct.






          $endgroup$








          • 1




            $begingroup$
            Nice catch! "compact set".
            $endgroup$
            – Esmailian
            yesterday










          • $begingroup$
            Many thanks, mate! Eye-opener!
            $endgroup$
            – Boris Burkov
            yesterday










          • $begingroup$
            @Esmailian: Thank you :).
            $endgroup$
            – MachineLearner
            yesterday













          3












          3








          3





          $begingroup$

          I think the answer of @ShubhamPanchal is a little bit misleading. Yes, it is true that by Cybenko's universal approximation theorem we can approximate $f(x)=x^2$ with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $mathbbR^n$, under mild assumptions on the activation function.




          But the main problem is that the theorem has a very important
          limitation
          . The function needs to be defined on compact subsets of
          $mathbbR^n$
          (compact subset = bounded + closed subset). But why
          is this problematic?
          . When training the function approximator you
          will always have a finite data set. Hence, you will approximate the
          function inside a compact subset of $mathbbR^n$. But we can always
          find a point $x$ for which the approximation will probably fail. That
          being said. If you only want to approximate $f(x)=x^2$ on a compact
          subset of $mathbbR$ then we can answer your question with yes.
          But if you want to approximate $f(x)=x^2$ for all $xin mathbbR$
          then the answer is no (I exclude the trivial case in which you use
          a quadratic activation function).




          Side remark on Taylor approximation: You always have to keep in mind that a Taylor approximation is only a local approximation. If you only want to approximate a function in a predefined region then you should be able to use Taylor series. But approximating $sin(x)$ by the Taylor series evaluated at $x=0$ will give you horrible results for $xto 10000$ if you don't use enough terms in your Taylor expansion.






          share|improve this answer










          New contributor




          MachineLearner is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
          Check out our Code of Conduct.






          $endgroup$



          I think the answer of @ShubhamPanchal is a little bit misleading. Yes, it is true that by Cybenko's universal approximation theorem we can approximate $f(x)=x^2$ with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $mathbbR^n$, under mild assumptions on the activation function.




          But the main problem is that the theorem has a very important
          limitation
          . The function needs to be defined on compact subsets of
          $mathbbR^n$
          (compact subset = bounded + closed subset). But why
          is this problematic?
          . When training the function approximator you
          will always have a finite data set. Hence, you will approximate the
          function inside a compact subset of $mathbbR^n$. But we can always
          find a point $x$ for which the approximation will probably fail. That
          being said. If you only want to approximate $f(x)=x^2$ on a compact
          subset of $mathbbR$ then we can answer your question with yes.
          But if you want to approximate $f(x)=x^2$ for all $xin mathbbR$
          then the answer is no (I exclude the trivial case in which you use
          a quadratic activation function).




          Side remark on Taylor approximation: You always have to keep in mind that a Taylor approximation is only a local approximation. If you only want to approximate a function in a predefined region then you should be able to use Taylor series. But approximating $sin(x)$ by the Taylor series evaluated at $x=0$ will give you horrible results for $xto 10000$ if you don't use enough terms in your Taylor expansion.
















          answered yesterday by MachineLearner (new contributor)







          • Nice catch! "compact set". – Esmailian, yesterday
          • Many thanks, mate! Eye-opener! – Boris Burkov, yesterday
          • @Esmailian: Thank you :). – MachineLearner, yesterday





















