==MachineLearning==
* [http://forums.fast.ai/t/non-artistic-style-transfer/1935 Non-artistic style transfer]
* [https://topos-theory.github.io/deep-neural-decision-forests/ Deep Neural Decision Forests Explained]
* [http://www.deeplearningpatterns.com/doku.php/overview Deep Learning Patterns]
* [http://deeplearninggallery.com/ Deep Learning Gallery]
* [https://github.com/terryum/awesome-deep-learning-papers Awesome Deep Learning Papers]
* [https://medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab#.in9xrnw7z NanoNets: How to Use Deep Learning When You Have Limited Data]
* [https://medium.com/search?q=deep%20learning Deep Learning on medium.com]
* [https://nuit-blanche.blogspot.com/2017/01/the-nips2016-videos-are-out.html NIPS 2016] - Videos!
* [https://www.technologyreview.com/s/603381/ai-software-learns-to-make-ai-software/?set=603387 AI software learns to make AI software]
* [https://www.youtube.com/channel/UCPsUUDUlcTJuP-fRa7z85aQ/videos KDD 2016 talks]
* [https://www.youtube.com/channel/UCGoxKRfTs0jQP52cfHCyyRQ Center for Brains, Minds and Machines (CBMM)]
* [https://www.youtube.com/user/ProfNandoDF/ Nando de Freitas]
* [https://openai.com/blog/generative-models/ OpenAI.com - Generative Models (GANs)] - Read this.
* [http://ciml.info/ A Course in Machine Learning] - Free book.
* [https://github.com/hindupuravinash/nips2016 NIPS 2016 talks]
* [http://blog.evjang.com/2017/01/nips2016.html This has lots of good info] - Includes an Andrew Ng talk and recommended papers.
* [https://www.reddit.com/r/MachineLearning/comments/5kxfkb/d_rmachinelearnings_2016_best_paper_award/ Reddit best papers]
* [https://medium.com/@karpathy/yes-you-should-understand-backprop-e2f06eab496b#.l3zl5v7bo Yes you should understand backprop] - [https://www.youtube.com/watch?v=i94OvYb6noo CS231n lecture on backprop (claims to focus on intuition)]
* Wide Residual Networks: [https://github.com/szagoruyko/wide-residual-networks implementation] - [https://github.com/FlorianMuellerklein/Identity-Mapping-ResNet-Lasagne third-party implementation] - [https://gist.github.com/kashif/0ba0270279a0f38280423754cea2ee1e Keras implementation]
* [https://hackernoon.com/learning-ai-if-you-suck-at-math-part-two-practical-projects-47d7a1e4e21f#.2f5fkk5tm Learning AI if You Suck at Math]
* [http://lamda.nju.edu.cn/weixs/project/CNNTricks/CNNTricks.html CNN Tricks]
* [https://www.youtube.com/watch?v=IXuHxkpO5E8 RNN CS188]
* [http://neuralnetworksanddeeplearning.com/ Neural Networks and Deep Learning] - Free online book.
* [http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/ An Introduction to Generative Adversarial Networks (with code in TensorFlow)]

===Ideas===
* Would it be possible to train a network on cut-up sections based on how much they cause the neurons to spike? Don't train on sections that already have a big response.
* Would it be possible to modify the inputs to 'censor' the bits that cause over-fitting?
* Would it be possible to 'move' an object across the view over time and learn that it's still the same object, as a kind of data augmentation? (See the first sketch after this list.)
* Could an auto-encoder be used to synthesise convolution filters for a pre-initialisation?
* Is it possible to learn a fitness function by starting in the goal state and trying to learn how to leave it? (See the second sketch after this list.)
* Then learn how to leave that state for a new one.
* Would be easier in a discrete, reversible, deterministic world.
** Would need to define how different two positions need to be before they count as different states.
** Is it possible to learn what a 'state' is?
** By taking two 'non-goal' states and learning to move between them without triggering the goal, then using a state halfway between as a new state?
** By taking a bunch of 'non-goal' states and finding the maximum difference between them?
** Or using the distance between goal and non-goal states?
** Or using the distance between two non-goal states?
* Need to be able to reverse the 'move out of goal'.
** If the actions are reversible and deterministic then just undo them.
** Could re-learn how to get from the state to the goal.
** How to determine how far away a non-goal state is? By how much time/how many actions it takes to get to the goal state from the non-goal state?
* What about a key/lock/door puzzle?
** By default it wouldn't learn to put the key in the lock, as the puzzle would either start in a solved state or the actor would be stuck behind the door.
** Could start the puzzle solved and make it become unsolved as the actor walks backwards, i.e. you pick up the key when you go through the unlocked door (which then becomes locked), then have to 'lose' the key in the place where it's actually obtained. Is this just turning into as complex a problem as solving the puzzle in the first place?
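A minimal sketch of the 'move an object across the view' augmentation idea above, in plain NumPy: shifted copies of an image keep the original label, so the network repeatedly sees the same object at different positions. The shift range and the wrap-around behaviour of np.roll are assumptions, not anything from the sources above.

<syntaxhighlight lang="python">
import numpy as np

def translate_augment(image, label, max_shift=4):
    """Yield shifted copies of image, all with the same label, so the
    same object appears at several positions in the view."""
    for dy in range(-max_shift, max_shift + 1, 2):
        for dx in range(-max_shift, max_shift + 1, 2):
            # np.roll wraps pixels around the edges; real data would
            # probably want padding/cropping instead.
            shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
            yield shifted, label

# Usage: pairs = list(translate_augment(img, y))  # 25 shifted copies
</syntaxhighlight>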
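And a sketch of the 'start in the goal state' idea for the discrete, reversible, deterministic case: a breadth-first search outward from the goal labels every reachable state with the number of actions needed to get back, which is exactly the 'how many actions to reach the goal' distance suggested above. The neighbours function is a hypothetical stand-in for the world's reversible action model.

<syntaxhighlight lang="python">
from collections import deque

def distance_to_goal(goal_state, neighbours):
    """Walk outward from the goal. Because actions are assumed to be
    reversible and deterministic, the step count outward equals the
    number of actions needed to return, giving a fitness function."""
    dist = {goal_state: 0}
    frontier = deque([goal_state])
    while frontier:
        state = frontier.popleft()
        for nxt in neighbours(state):  # hypothetical action model
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                frontier.append(nxt)
    return dist  # maps state -> actions needed to reach the goal

# In a five-cell 1-D world with moves of +/-1 and the goal at cell 0:
# neighbours = lambda s: [x for x in (s - 1, s + 1) if 0 <= x < 5]
# distance_to_goal(0, neighbours)  # {0: 0, 1: 1, 2: 2, 3: 3, 4: 4}
</syntaxhighlight>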
===Courses===
* [https://www.youtube.com/watch?v=PlhFWT7vAEw Deep Learning at Oxford 2015] - Nando de Freitas (bad audio quality); the same presenter gave this [https://www.youtube.com/watch?v=x1kf4Zojtb0 talk].
* [https://www.youtube.com/playlist?list=PLgKuh-lKre11GbZWneln-VZDLHyejO7YD Foundations of Machine Learning Boot Camp] + deep-learning-specific material.
* [https://www.youtube.com/playlist?list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH Neural networks class - Université de Sherbrooke] - via [https://www.reddit.com/r/deeplearning/comments/5ohscp/a_complete_and_easy_to_follow_course_for/ Reddit]: "A complete and easy to follow course for understanding ANNs". Seems to have some math. Short (<15 min) lectures.
* [https://www.youtube.com/playlist?list=PLE6Wd9FR--Ecf_5nCbnSQMHqORpiChfJf Undergraduate machine learning at UBC 2012]
* [https://www.youtube.com/playlist?list=PLE6Wd9FR--EdyJ5lbFl8UuGjecvVw66F6 Machine Learning 2013]
* [http://videolectures.net/Top/Computer_Science/#p=3 videolectures.net]
* [http://rll.berkeley.edu/deeprlcourse/ Berkeley - CS 294: Deep Reinforcement Learning, Spring 2017]
* [https://www.youtube.com/watch?v=QjuwYbeqrQI Berkeley - CS 294-129, 10/5/16]
* [https://berkeley-deep-learning.github.io/cs294-dl-f16/ Berkeley - CS 294-131: Special Topics in Deep Learning, Fall 2016]
* [https://www.youtube.com/playlist?list=PLkt2uSq6rBVctENoVBg1TpCC7OQi31AlC Stanford CS231n Winter 2016: Convolutional Neural Networks for Visual Recognition] - "ConvNets in practice": 15 min in, talks about retraining just a little bit; at <36 min, talks about convolutions with 3x1 and 1x3 filters.
* [http://networkflow.net/forum/19-stanford-cs231n-convolutional-neural-networks-for-visual-recognition/ CS231n forums]
* [https://www.coursera.org/learn/machine-learning/home/week/1 Coursera - Machine Learning] - Andrew Ng
* [https://www.coursera.org/learn/neural-networks/home/welcome Coursera - Neural Networks] - Geoffrey Hinton
* [http://deeplearning.stanford.edu/tutorial/ Stanford tutorials] - These tutorials seem cool.
* [http://videolectures.net/nips2010_wright_oaml/ Optimization Algorithms in ML] - NIPS 2010 video.
* [https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow-iv Kadenze: Creative Applications of Deep Learning with TensorFlow] - Course seems nice.
* [https://www.kadenze.com/courses/the-nature-of-code-ii Kadenze: The Nature of Code]
* [https://classroom.udacity.com/courses/ud262/lessons/3625438937/concepts/6405791890923 Udacity - Machine Learning]
* [https://classroom.udacity.com/courses/ud730/lessons/6370362152/concepts/63798118390923 Udacity - Deep Learning]
* [https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ TensorFlow] - "Best TensorFlow + deep learning tutorials on YouTube"
* [http://introtodeeplearning.com/index.html MIT - Intro to Deep Learning]
* [https://www.youtube.com/watch?v=W1S-HSakPTM University of California, Berkeley - CS188: Artificial Intelligence]
* [https://www.youtube.com/user/lexfridman/videos MIT 6.S094] - RNN and control topics.
* [https://github.com/oxford-cs-deepnlp-2017/lectures Oxford - Deep Learning for NLP]
* [https://www.youtube.com/watch?v=5MdSE-N0bxs Connections between physics and deep learning] - Center for Brains, Minds and Machines (CBMM)
* [http://www.deeplearningweekly.com/pages/open_source_deep_learning_curriculum deeplearningweekly.com] - List of open-source curricula. Lots of courses.
* [http://www.breloff.com/no-backprop/ Learning without Backpropagation]
* [http://www.visualisingdata.com/2016/12/collection-significant-development-posts/ Data Visualisation Collection]
* [http://deeplearning.net/datasets/ deeplearning.net datasets]
* [http://www.openml.org/ OpenML] - Has heaps of datasets.
* [http://machinelearningmastery.com/dropout-regularization-deep-learning-models-keras/ Dropout info]
* [https://stefanopalmieri.github.io/HyperNEAT-Adjacency-Matrix/ HyperNEAT] - HyperNEAT is a well-known neuro-evolution algorithm.
* [https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/lecture-13-learning-genetic-algorithms/ Learning Genetic Algorithms] - Guy with candy.
* [https://www.youtube.com/watch?v=XI-I9i_GzIw&feature=em-lss How to Install OpenAI's Universe and Make a Game Bot Using Reinforcement Learning]

===[http://course.fast.ai/lessons/lesson1.html fast.ai]===
* Lesson 3: At <1:18:00, shows how to manipulate and fine-tune a model. Says to always use batch normalisation (1:40:00) and mentions the BatchNormalization layer that does batchnorm for you. Shows making a model from scratch and ensembling them (1:57:30).
* Finished Lesson 4 - ([https://en.wikipedia.org/wiki/Geoffrey_Hinton Geoffrey Hinton] says max pooling is bad; talks about the capsule architecture.) Use Adam (optimizer=Adam() in model.compile), and look into Jeremy Howard's modified Adam; previously he talked about always using RMSprop...
* Lesson 6: Says loss="sparse_categorical_crossentropy" (in model.compile) allows you to avoid one-hot encoding (1:26). (A sketch combining these compile settings follows this list.)
* Lesson 7: 30 min in, he talks about a KDD best-paper competition. Finding bounding boxes at 49 min in.
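A minimal Keras sketch tying together the compile settings from the lesson notes above: BatchNormalization from Lesson 3, the Adam optimizer from Lesson 4, and the sparse_categorical_crossentropy loss from Lesson 6, which takes plain integer class labels instead of one-hot vectors. The tiny model and the input/output sizes are placeholders, and the import paths assume standalone Keras.

<syntaxhighlight lang="python">
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization
from keras.optimizers import Adam

# Placeholder model: 784 inputs (e.g. flattened MNIST), 10 classes.
model = Sequential([
    Dense(128, activation='relu', input_shape=(784,)),
    BatchNormalization(),  # Lesson 3: "always use batch normalisation"
    Dense(10, activation='softmax'),
])

# Lesson 4: optimizer=Adam(); Lesson 6: sparse_categorical_crossentropy
# lets y_train be integer labels (0..9) with no one-hot encoding step.
model.compile(optimizer=Adam(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# model.fit(x_train, y_train, epochs=5)  # y_train: integer labels
</syntaxhighlight>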
* [https://www.youtube.com/user/sentdex/videos sentdex]
* [http://www.kdnuggets.com/2015/11/seven-steps-machine-learning-python.html/2 7 Steps to Mastering Machine Learning With Python] - "From classification, we look at continuous numeric prediction"
* [https://www.cs.cmu.edu/~ninamf/courses/601sp15/lectures.shtml Carnegie Mellon University course]
* [https://www.coursera.org/learn/machine-learning/home/week/1 Coursera]
* [http://www.holehouse.org/mlclass/ Stanford Machine Learning unofficial notes]
* [https://classroom.udacity.com/courses/ud730/lessons/6370362152/concepts/71191606550923# Udacity Deep Learning]
* [https://gab41.lab41.org/the-10-algorithms-machine-learning-engineers-need-to-know-f4bb63f5b2fa#.nt3w9t23o The 10 Algorithms Machine Learning Engineers Need to Know]
* [https://www.youtube.com/watch?v=i8FNO8r7PaE Machine learning for algorithmic trading w/ Bert Mouler]
* [https://www.youtube.com/watch?v=--eZESmJBXQ The Neural Aesthetic @ SchoolOfMA :: 10 ConvNet Applications]
* [https://www.youtube.com/watch?v=ZoPt8R6F6LI Art with ML]
* [https://www.youtube.com/playlist?list=PLg7f-TkW11iX3JlGjgbM2s8E1jKSXUTsG Royal Society on Machine Learning]
* [https://www.reddit.com/r/MachineLearning/ /r/machinelearning]
* [https://www.youtube.com/channel/UCZ0aNXTeyZK1D-_ZdQXyPhA Evolving AI Lab]
* [http://www.evolvingai.org/ EvolvingAI.org]
* [https://www.youtube.com/channel/UC9OeZkIwhzfv-_Cb7fCikLQ DeepLearning.TV] - Watched as of 2016 Aug 13.
* <strike>[https://www.youtube.com/watch?v=sEciSlAClL8 TensorFlow and deep learning, without a PhD, Martin Gorner, Google]</strike>

===TensorFlow===
* [https://www.youtube.com/watch?v=U0ACP9J8vOU Deep Learning With Python & TensorFlow - PyConSG 2016]
* [https://www.youtube.com/watch?v=vQtxTZ9OA2M Intro to ML and TensorFlow Tutorial]
* [https://github.com/aymericdamien/TensorFlow-Examples TensorFlow Examples]
* [https://steemit.com/tensorflow/@seojoeschmo/the-ultimate-list-of-tensorflow-resources-books-tutorials-libraries-and-more The Ultimate List of TensorFlow Resources: Books, Tutorials, Libraries and More]
* [https://github.com/terryum/TensorFlow_Exercises TensorFlow_Exercises]
* [http://stackoverflow.com/questions/33759623/tensorflow-how-to-restore-a-previously-saved-model-python TensorFlow: How to restore a previously saved model]

===PyTorch===
* [https://www.reddit.com/r/MachineLearning/comments/5pfku8/p_practical_pytorch_classifying_names_with_a/ Practical PyTorch: Classifying Names with a Character-Level RNN]

===Misc===
* Modulus layer? +1.0 -> -1.0 (modulus is apparently expensive...). A sketch of one reading of this follows this list.
* Prune out neurons that don't fire for many images, as a way of regularization?
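A sketch of one possible reading of the 'modulus layer' bullet above: wrap activations into [-1, 1) so a value at +1.0 comes out at -1.0. The wrapping formula and the use of a Keras Lambda layer are assumptions about what the bullet intends.

<syntaxhighlight lang="python">
import numpy as np
from keras.layers import Lambda

def wrap(x):
    # Floormod maps x into [-1, 1); +1.0 wraps around to -1.0, matching
    # the "+1.0 -> -1.0" behaviour described in the bullet above.
    return ((x + 1.0) % 2.0) - 1.0

mod_layer = Lambda(wrap)  # drop-in layer between other Keras layers

# Quick check with plain numbers:
print(wrap(np.array([0.5, 1.0, 1.1])))  # -> [ 0.5 -1.  -0.9]
</syntaxhighlight>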
====Color Representation Ideas====
* See what effect reducing the bits of colour has on accuracy... greyscale vs R8G8B8 vs R8G4B4, etc.
* Dumb & basic: red*255*255 + green*255 + blue. Should reduce the number of channels but still have precision bits left over. Maybe it needs to be offset to the centre, though, to help future layers... (See the sketch after this list.)
* Would it be possible for a Float32 to be used to encode 8 bits of the next colour?
* [https://allrgb.com/ AllRGB] - Shows images containing one of every colour...
* The way the human eye splits things up? That's like 6 separate images...
* A more natural mapping...
* Maybe try to keep similar colours together distance-wise? Hard to do with a single-dimension float.
* Hilbert Curve
* Periodic algorithms? - [https://allrgb.com/are-they-parallel Sin, cos, etc...], [https://allrgb.com/bling Repeating patterns], [https://allrgb.com/color-corners Partial gradient + periodic]. Isn't RGB basically just periodic anyway?
* Float32 using a 4D colour space (but then some colours would be duplicated...)
* 2 floats: merge red & blue? Green & red? (How does the eye do that?)
* Separate out brightness?
* Try an optimisation function similar to an embedding?
* Maybe try to make similar colours further apart, to exaggerate the subtle differences?
* Negatives could cause the 'near' dimension to change... like having an extra dimension. This would duplicate colours again.
* What about that Kaggle competition with the 16-band satellite images?...
* Maybe this is all useless. The convolution layer should just learn what it needs anyway... It's all probability-based, so it shouldn't need to be too close. This could all hurt it.
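A sketch of the 'dumb & basic' packing idea from the list above, with the result offset to the centre as suggested. One correction baked in: multipliers of 256 (not 255) are needed to keep every 8-bit colour distinct and reversible. Whether a single packed channel actually helps a network is exactly the open question raised in the last bullet.

<syntaxhighlight lang="python">
def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one number. Base 256 (not 255)
    makes the encoding collision-free, so it can be inverted exactly."""
    packed = r * 256 * 256 + g * 256 + b      # 0 .. 16,777,215
    return packed / 8388607.5 - 1.0           # centred into [-1.0, 1.0]

def unpack_rgb(x):
    """Invert pack_rgb back to the original 8-bit channels."""
    packed = int(round((x + 1.0) * 8388607.5))
    return packed // 65536, (packed // 256) % 256, packed % 256

# Round trip: unpack_rgb(pack_rgb(12, 34, 56)) == (12, 34, 56)
</syntaxhighlight>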