Search Results

Search found 18341 results on 734 pages for 'neural network'.


  • C# Neural Networks with Encog

    - by JoshReuben
    Neural Networks

    ·       I recently read the book Introduction to Neural Networks for C#, by Jeff Heaton. http://www.amazon.com/Introduction-Neural-Networks-C-2nd/dp/1604390093/ref=sr_1_2?ie=UTF8&s=books&qid=1296821004&sr=8-2-spell. Not the 1st ANN book I've perused, but a nice revision.
    ·       Artificial Neural Networks (ANNs) are a mechanism of machine learning – see http://en.wikipedia.org/wiki/Artificial_neural_network , http://en.wikipedia.org/wiki/Category:Machine_learning
    ·       Problems not suited to a neural network solution: programs that are easily written out as flowcharts consisting of well-defined steps, program logic that is unlikely to change, and problems in which you must know exactly how the solution was derived.
    ·       Problems suited to a neural network: pattern recognition, classification, series prediction, and data mining. Pattern recognition – the network attempts to determine if the input data matches a pattern that it has been trained to recognize. Classification – take input samples and classify them into fuzzy groups.
    ·       As far as machine learning approaches go, I think SVMs are superior (see http://en.wikipedia.org/wiki/Support_vector_machine ) – a neural network has certain disadvantages in comparison: an ANN can be overtrained, different training sets can produce non-deterministic weights, and it is not possible to discern the underlying decision function of an ANN from its weight matrix – they are black boxes.
    ·       In this post, I'm not going to go into internals (believe me, I know them). An autoassociative network (e.g. a Hopfield network) will echo back a pattern if it is recognized.
    ·       Under the hood, there is very little maths. In a nutshell, some simple matrix operations occur during training: the input array is processed (normalized into bipolar values of 1, -1) and transposed from an input column vector into a row vector; these are subject to matrix multiplication and then subtraction of the identity matrix to get a contribution matrix. The dot product is taken against the weight matrix to yield a boolean match result. For backpropagation training, a derivative function is required. In learning, techniques such as Genetic Algorithms and Simulated Annealing are used to escape the local minima that simple hill climbing gets stuck in. For unsupervised training, such as found in Self Organizing Maps used for OCR, Hebb's rule is applied.
    ·       The purpose of this post is not to mire you in technical and conceptual details, but to show you how to leverage neural networks via an abstraction API – Encog.

    Encog

    ·       Encog is a neural network API.
    ·       Links to Encog: http://www.encog.org , http://www.heatonresearch.com/encog , http://www.heatonresearch.com/forum
    ·       Encog requires .NET 3.5 or higher – there is also a Silverlight version. Third-party libraries: log4net and NUnit.
    ·       Encog supports feedforward, recurrent, self-organizing map, radial basis function and Hopfield neural networks.
    ·       Encog neural networks, and related data, can be stored in .EG XML files.
    ·       Encog Workbench allows you to edit, train and visualize neural networks. The Encog Workbench can generate code.

    Synapses and layers

    ·       These are the primary building blocks. Almost every neural network will have, at a minimum, an input and output layer. In some cases, the same layer will function as both input and output layer.
    ·       To adapt a problem to a neural network, you must determine how to feed the problem into the input layer of the network, and receive the solution through its output layer.
    ·       The input layer – for each input neuron, one double value is stored. An array is passed as input to a layer. Encog uses the interface INeuralData to hold these arrays. The class BasicNeuralData implements the INeuralData interface. Once the neural network processes the input, an INeuralData-based class will be returned from the neural network's output layer.
    ·       To convert a double array into an INeuralData object:

        INeuralData data = new BasicNeuralData(new double[10]);

    ·       The output layer – the neural network outputs an array of doubles, wrapped in a class based on the INeuralData interface.
    ·       The real power of a neural network comes from its pattern recognition capabilities. The neural network should be able to produce the desired output even if the input has been slightly distorted.
    ·       Hidden layers – optional, sitting between the input and output layers; very much a "black box". If the structure of the hidden layer is too simple, it may not learn the problem. If the structure is too complex, it will learn the problem but will be very slow to train and execute. Some neural networks have no hidden layers – the input layer may be directly connected to the output layer. Further, some neural networks have only a single layer; a single-layer neural network has that layer self-connected.
    ·       Connections, called synapses, contain individual weight matrices. These values are changed as the neural network learns.

    Constructing a Neural Network

    ·       The XOR operator is a frequent "first example" – the "Hello World" application for neural networks.
    ·       The XOR operator only returns true when its two inputs differ:

        0 XOR 0 = 0
        1 XOR 0 = 1
        0 XOR 1 = 1
        1 XOR 1 = 0

    ·       Structuring a neural network for XOR – two inputs to the XOR operator and one output.
    ·       Input:

        0.0,0.0
        1.0,0.0
        0.0,1.0
        1.0,1.0

    ·       Expected output:

        0.0
        1.0
        1.0
        0.0

    ·       A perceptron – a simple feedforward neural network – is used to learn the XOR operator.
    ·       Because the XOR operator has two inputs and one output, the neural network follows suit. Additionally, the neural network has a single hidden layer, with two neurons to help process the data. The choice of 2 neurons in the hidden layer is arbitrary, and often comes down to trial and error.
    ·       Neuron diagram for the XOR network [diagram omitted].
    ·       The Encog Workbench displays neural networks on a layer-by-layer basis.
    ·       Encog layer diagram for the XOR network [diagram omitted].
    ·       Create a BasicNetwork. Three layers are added to this network. The FinalizeStructure method must be called to inform the network that no more layers are to be added. The call to Reset randomizes the weights in the connections between these layers.

        var network = new BasicNetwork();
        network.AddLayer(new BasicLayer(2));
        network.AddLayer(new BasicLayer(2));
        network.AddLayer(new BasicLayer(1));
        network.Structure.FinalizeStructure();
        network.Reset();

    ·       Neural networks frequently start with a random weight matrix. This provides a starting point for the training methods. These random values will be tested and refined into an acceptable solution. However, sometimes the initial random values are too far off, and it may be necessary to reset the weights again if training is ineffective. These weights make up the long-term memory of the neural network. Additionally, some layers have threshold values that also contribute to the long-term memory of the neural network. Some neural networks also contain context layers, which give the neural network a short-term memory as well. The neural network learns by modifying these weight and threshold values.
    ·       Now that the neural network has been created, it must be trained.

    Training a Neural Network

    ·       Construct an INeuralDataSet object – it contains the input array and the corresponding expected output array. Even though there is only one output value, we must still use a two-dimensional array to represent the output.

        public static double[][] XOR_INPUT = {
            new double[2] { 0.0, 0.0 },
            new double[2] { 1.0, 0.0 },
            new double[2] { 0.0, 1.0 },
            new double[2] { 1.0, 1.0 } };

        public static double[][] XOR_IDEAL = {
            new double[1] { 0.0 },
            new double[1] { 1.0 },
            new double[1] { 1.0 },
            new double[1] { 0.0 } };

        INeuralDataSet trainingSet = new BasicNeuralDataSet(XOR_INPUT, XOR_IDEAL);

    ·       Training is the process where the neural network's weights are adjusted to better produce the expected output. Training will continue for many iterations, until the error rate of the network is below an acceptable level. Encog supports many different types of training. Resilient Propagation (RPROP) is a general-purpose training algorithm. All training classes implement the ITrain interface. The RPROP algorithm is implemented by the ResilientPropagation class. Training the neural network involves calling the Iteration method on the ITrain class until the error is below a specific value. The code loops through as many iterations, or epochs, as it takes to get the error rate for the neural network below 1% (note the loop exits once train.Error drops below 0.01). Once the neural network has been trained, it is ready for use.

        ITrain train = new ResilientPropagation(network, trainingSet);

        for (int epoch = 0; epoch < 10000; epoch++)
        {
            train.Iteration();
            Debug.Print("Epoch #" + epoch + " Error:" + train.Error);
            if (train.Error < 0.01) break;
        }

    Executing a Neural Network

    ·       Call the Compute method on the BasicNetwork class.

        Console.WriteLine("Neural Network Results:");
        foreach (INeuralDataPair pair in trainingSet)
        {
            INeuralData output = network.Compute(pair.Input);
            Console.WriteLine(pair.Input[0] + "," + pair.Input[1]
                + ", actual=" + output[0] + ",ideal=" + pair.Ideal[0]);
        }

    ·       The Compute method accepts an INeuralData class and also returns an INeuralData object.

        Neural Network Results:
        0.0,0.0, actual=0.002782538818034049,ideal=0.0
        1.0,0.0, actual=0.9903741937121177,ideal=1.0
        0.0,1.0, actual=0.9836807956566187,ideal=1.0
        1.0,1.0, actual=0.0011646072586172778,ideal=0.0

    ·       The network has not been trained to give exact results. This is normal. Because the network was trained to 1% error, each of the results will generally be within 1% of the expected value.
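    Pulling the snippets in this post together, a minimal end-to-end console program looks roughly like the following. This is a sketch against the Encog 2.x names used above; the using directives are my assumption and may differ between Encog releases.

        using System;
        using Encog.Neural.Data;                // INeuralData, INeuralDataPair, INeuralDataSet (assumed namespace)
        using Encog.Neural.Data.Basic;          // BasicNeuralData, BasicNeuralDataSet (assumed namespace)
        using Encog.Neural.Networks;            // BasicNetwork (assumed namespace)
        using Encog.Neural.Networks.Layers;     // BasicLayer (assumed namespace)
        using Encog.Neural.Networks.Training;   // ITrain (assumed namespace)
        using Encog.Neural.Networks.Training.Propagation.Resilient; // ResilientPropagation (assumed namespace)

        public static class XorDemo
        {
            public static double[][] XOR_INPUT = {
                new double[2] { 0.0, 0.0 }, new double[2] { 1.0, 0.0 },
                new double[2] { 0.0, 1.0 }, new double[2] { 1.0, 1.0 } };

            public static double[][] XOR_IDEAL = {
                new double[1] { 0.0 }, new double[1] { 1.0 },
                new double[1] { 1.0 }, new double[1] { 0.0 } };

            public static void Main()
            {
                // Build the 2-2-1 feedforward network described above.
                var network = new BasicNetwork();
                network.AddLayer(new BasicLayer(2));
                network.AddLayer(new BasicLayer(2));
                network.AddLayer(new BasicLayer(1));
                network.Structure.FinalizeStructure();
                network.Reset(); // randomize the weight matrix

                INeuralDataSet trainingSet = new BasicNeuralDataSet(XOR_INPUT, XOR_IDEAL);
                ITrain train = new ResilientPropagation(network, trainingSet);

                // Train until the error drops below 1%, capped at 10,000 epochs.
                int epoch = 0;
                do
                {
                    train.Iteration();
                    Console.WriteLine("Epoch #" + epoch + " Error:" + train.Error);
                    epoch++;
                } while (train.Error > 0.01 && epoch < 10000);

                // Echo the training cases back through the trained network.
                Console.WriteLine("Neural Network Results:");
                foreach (INeuralDataPair pair in trainingSet)
                {
                    INeuralData output = network.Compute(pair.Input);
                    Console.WriteLine(pair.Input[0] + "," + pair.Input[1]
                        + ", actual=" + output[0] + ",ideal=" + pair.Ideal[0]);
                }
            }
        }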

    Read the article

  • Neural Network settings for fast training

    - by danpalmer
    I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this and so far, the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules or even many best-practices when it comes to these settings so if anyone with experience could help me I would greatly appreciate it. The input data is made up of a series of integers that could go up as high as the user wants to go, but most will be under 100,000 I would have thought. Some will be as low as 1. They are details like number of people on a project and the cost of a project, as well as details about database entities and use cases. There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has: 10 input nodes, 1 hidden layer with 5 nodes and 2 output nodes. I am training to get under a 5% error rate. The algorithm must run on a webserver so I have put in a measure to stop training when it looks like it isn't going anywhere. This is set to 10,000 training iterations. Currently, when I try to train it with some data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000 iteration limit over and over again. This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit I would greatly appreciate it. Thank you!
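    For what it's worth, inputs that range from 1 up to 100,000 are a classic cause of slow or stalled training: unscaled magnitudes saturate the activations, so the error barely moves no matter how many iterations run. A common first step is normalizing each feature before it reaches the input layer. A sketch of that idea in plain C#, with no particular library assumed and illustrative range constants:

        using System;

        static class InputScaling
        {
            // Min-max scaling into [0, 1]; apply per input feature,
            // with min/max taken from the training data.
            public static double Normalize(double value, double min, double max)
            {
                return (value - min) / (max - min);
            }

            // Log compression first for counts and costs spanning several orders
            // of magnitude, so 1 and 100,000 don't crowd into the same corner.
            public static double NormalizeLog(double value, double maxExpected)
            {
                return Math.Log(1.0 + value) / Math.Log(1.0 + maxExpected);
            }
        }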

    Read the article

  • Neural network input preprocessing

    - by TND
    It's clear that the effectiveness of a neural network depends strongly on the format of the input you give it to work with. You want to preprocess it into the most convenient form you can algorithmically get to, so that the neural network doesn't have to account for that itself. I'm working on a little project that (surprise!) is going to be using neural networks. My future goal is to eventually use NEAT, which I'm really excited about. Anyway, one of my ideas involves moving entities in continuous 2D space, from a top-down perspective (this would be a really cool game AI). Of course, unless these guys are blind, they're going to be able to see the world around them. There are a lot of different ways this information could be fed into the network. One interesting but expensive way is to simply render a top-down "view" of things, with the entities as dots on the picture, and feed that in. I was hoping for something much simpler to use (at least at first), such as a list of the x (maybe 7 or so) nearest entities and their positions in relative polar coordinates, orientation, health, etc., but I'm trying to think of the best way to do it. My first instinct was to order them by distance, which would inherently also train the neural network to consider the nearer ones more "important". However, I was thinking: what if there are two entities that are nearly the same distance away? They could easily alternate indexes in that list, confusing the network. My question is: is there a better way of representing this? Essentially, the issue is that the network needs a good way of keeping track of who's who, while knowing (via its inputs) relevant information about the list of entities it can see. Thanks!
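    For reference, one way to make the "k nearest entities" encoding concrete is to express each neighbor in the observer's own frame and encode bearings as sin/cos, so the inputs vary smoothly as entities move. A sketch, where the Entity type and all member names are hypothetical (not from NEAT or any library):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Entity { public double X, Y, Heading, Health; }

        static class Sensors
        {
            // Input vector: the k nearest neighbors of 'self', each encoded as
            // (distance, sin/cos of relative bearing, health), nearest first.
            public static double[] Encode(Entity self, IEnumerable<Entity> others, int k)
            {
                var nearest = others
                    .Select(e => new
                    {
                        E = e,
                        Dist = Math.Sqrt((e.X - self.X) * (e.X - self.X)
                                       + (e.Y - self.Y) * (e.Y - self.Y))
                    })
                    .OrderBy(a => a.Dist)
                    .Take(k);

                var inputs = new List<double>();
                foreach (var a in nearest)
                {
                    // Bearing relative to the observer's own heading.
                    double bearing = Math.Atan2(a.E.Y - self.Y, a.E.X - self.X) - self.Heading;
                    inputs.Add(a.Dist);
                    inputs.Add(Math.Sin(bearing)); // sin/cos wrap smoothly at +/- pi
                    inputs.Add(Math.Cos(bearing));
                    inputs.Add(a.E.Health);
                }
                return inputs.ToArray();
            }
        }

    Note this doesn't by itself fix the slot-swapping problem the poster raises; it only ensures that when two nearly equidistant entities do swap slots, the values in those slots are themselves nearly equal.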

    Read the article

  • Continuous output in Neural Networks

    - by devoured elysium
    How can I set up neural networks so they accept and output a continuous range of values instead of discrete ones? From what I recall from taking a neural network class a couple of years ago, the activation function would be a sigmoid, which yields a value between 0 and 1. If I want my neural network to yield a real-valued scalar, what should I do? I thought maybe if I wanted a value between 0 and 10, I could just multiply the value by 10? What if I have negative values? Is this what people usually do, or is there any other way? What about the input? Thanks
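    A sigmoid output can indeed be mapped onto any real interval, negative ends included, by a linear rescale; the same map in reverse normalizes the training targets. A minimal sketch, assuming outputs in (0, 1):

        // Map a sigmoid output in (0, 1) onto [lo, hi]; lo may be negative, e.g. [-5, 10].
        static double Denormalize(double sigmoidOut, double lo, double hi)
        {
            return lo + sigmoidOut * (hi - lo);
        }

        // Inverse map: bring real-valued training targets into (0, 1) for the network.
        static double NormalizeTarget(double value, double lo, double hi)
        {
            return (value - lo) / (hi - lo);
        }

    The alternatives are a tanh activation (range (-1, 1)) or a linear output neuron, which dodges the rescaling at the output entirely; inputs are usually normalized the same way.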

    Read the article

  • Neural Network Always Produces Same/Similar Outputs for Any Input

    - by l33tnerd
    I have a problem where I am trying to create a neural network for Tic-Tac-Toe. However, for some reason, training the neural network causes it to produce nearly the same output for any given input. I did take a look at Artificial neural networks benchmark, but my network implementation is built for neurons with the same activation function for each neuron, i.e. no constant neurons. To make sure the problem wasn't just due to my choice of training set (1218 board states and moves generated by a genetic algorithm), I tried to train the network to reproduce XOR. The logistic activation function was used. Instead of using the derivative, I multiplied the error by output*(1-output), as some sources suggested that this was equivalent to using the derivative. I can put the Haskell source on HPaste, but it's a little embarrassing to look at. The network has 3 layers: the first layer has 2 inputs and 4 outputs, the second has 4 inputs and 1 output, and the third has 1 output. Increasing to 4 neurons in the second layer didn't help, and neither did increasing to 8 outputs in the first layer. I then calculated the errors, network output, bias updates, and the weight updates by hand based on http://hebb.mit.edu/courses/9.641/2002/lectures/lecture04.pdf to make sure there wasn't an error in those parts of the code (there wasn't, but I will probably do it again just to make sure). Because I am using batch training, I did not multiply by x in equation (4) there. I am adding the weight change, though http://www.faqs.org/faqs/ai-faq/neural-nets/part2/section-2.html suggests subtracting it instead. The problem persisted, even in this simplified network. For example, these are the results after 500 epochs of batch training and of incremental training:

        Input     | Target | Output (Batch)       | Output (Incremental)
        [1.0,1.0] | [0.0]  | [0.5003781562785173] | [0.5009731800870864]
        [1.0,0.0] | [1.0]  | [0.5003740346965251] | [0.5006347214672715]
        [0.0,1.0] | [1.0]  | [0.5003734471544522] | [0.500589332376345]
        [0.0,0.0] | [0.0]  | [0.5003674110937019] | [0.500095157458231]

    Subtracting instead of adding produces the same problem, except everything is 0.99-something instead of 0.50-something. 5000 epochs produces the same result, except the batch-trained network returns exactly 0.5 for each case. (Heck, even 10,000 epochs didn't work for batch training.) Is there anything in general that could produce this behavior? Also, I looked at the intermediate errors for incremental training, and although the inputs of the hidden/input layers varied, the error for the output neuron was always +/-0.12. For batch training, the errors were increasing, but extremely slowly, and the errors were all extremely small (x10^-7). Different initial random weights and biases made no difference, either. Note that this is a school project, so hints/guides would be more helpful. Although reinventing the wheel and making my own network (in a language I don't know well!) was a horrible idea, I felt it would be more appropriate for a school project (so I know what's going on... in theory, at least. There doesn't seem to be a computer science teacher at my school). EDIT: Two layers, an input layer of 2 inputs to 8 outputs and an output layer of 8 inputs to 1 output, produces much the same results: 0.5 +/- 0.2 (or so) for each training case. I'm also playing around with pyBrain, seeing if any network structure there will work. Edit 2: I am using a learning rate of 0.1. Sorry for forgetting about that.
Edit 3: Pybrain's "trainUntilConvergence" doesn't get me a fully trained network, either, but 20000 epochs does, with 16 neurons in the hidden layer. 10000 epochs and 4 neurons, not so much, but close. So, in Haskell, with the input layer having 2 inputs & 2 outputs, hidden layer with 2 inputs and 8 outputs, and output layer with 8 inputs and 1 output...I get the same problem with 10000 epochs. And with 20000 epochs. Edit 4: I ran the network by hand again based on the MIT PDF above, and the values match, so the code should be correct unless I am misunderstanding those equations. Some of my source code is at http://hpaste.org/42453/neural_network__not_working; I'm working on cleaning my code somewhat and putting it in a Github (rather than a private Bitbucket) repository. All of the relevant source code is now at https://github.com/l33tnerd/hsann.
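    For reference, the symptom described here (every input collapsing to roughly the same output) is commonly caused by the missing "constant neurons" the poster mentions: without a bias input, every neuron's decision surface is forced through the origin, which is fatal for XOR. A hedged sketch of a logistic neuron with a bias term, shown in C# for consistency with the rest of these posts rather than the poster's Haskell:

        using System;

        static class BiasedNeuron
        {
            static double Logistic(double x)
            {
                return 1.0 / (1.0 + Math.Exp(-x));
            }

            // The bias behaves like an extra weight attached to a constant +1
            // input, and it is trained exactly like the other weights.
            public static double Activate(double[] inputs, double[] weights, double bias)
            {
                double sum = bias;
                for (int i = 0; i < inputs.Length; i++)
                {
                    sum += inputs[i] * weights[i];
                }
                return Logistic(sum);
            }
        }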

    Read the article

  • Plain-English tutorial on artificial neural networks?

    - by Stuart
    I've Googled, StackOverflowed, everything, and I cannot seem to find a tutorial I can understand. I understand the concept of genetic algorithms and how to implement them (though I haven't tried), but I cannot grasp the concept of neural networks. I know vaguely how they work... and that's about it. Could someone direct me to a tutorial that could help someone who has not even graduated middle school yet? Sure, I'm several years ahead of the majority of people in my grade, but I don't understand summation (which I apparently need if I don't want a simple binary output), vectors, and other things that I apparently should know. Is there a simple, bare-bones tutorial for neural networks? After I learn the basics, I'll proceed to more difficult ones. Preferably, they would be in Java. Thanks!
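    In the meantime, the "summation" part is less scary than it sounds: a neuron multiplies each input by a weight, adds the products up, and squashes the total. A bare-bones sketch (C# here for consistency with the rest of these posts, but the Java version is line-for-line the same):

        using System;

        static class OneNeuron
        {
            public static double Fire(double[] inputs, double[] weights)
            {
                double sum = 0.0;
                for (int i = 0; i < inputs.Length; i++)
                {
                    sum += inputs[i] * weights[i]; // this loop IS the summation (sigma) notation
                }
                return 1.0 / (1.0 + Math.Exp(-sum)); // sigmoid turns the sum into a smooth 0..1 output
            }
        }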

    Read the article

  • Neural Networks test cases

    - by Betamoo
    Can increasing the number of test cases for precision neural networks lead to problems (like over-fitting, for example)? Is it always good to increase the number of test cases? Will that always lead to convergence? If not, what are those cases? An example would be better. Thanks,
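    For context: more training samples generally help, but over-fitting is detected by holding some data out, not by adding more. A schematic early-stopping loop; every identifier here (Network, DataSet, TrainOneEpoch, Evaluate) is a placeholder, not a real library call:

        static void TrainWithEarlyStopping(Network network, DataSet trainingData,
                                           DataSet validationData, int maxEpochs)
        {
            double bestValidationError = double.MaxValue;
            for (int epoch = 0; epoch < maxEpochs; epoch++)
            {
                TrainOneEpoch(network, trainingData);                        // placeholder
                double validationError = Evaluate(network, validationData); // placeholder
                if (validationError < bestValidationError)
                    bestValidationError = validationError;
                else
                    break; // validation error rising while training error falls: over-fitting
            }
        }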

    Read the article

  • How useful is Turing completeness? Are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activations are of infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical. But I started to wonder whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete, because none of them is able to simulate the infinite tape. Interestingly, programming language specifications most often leave open whether the languages are Turing complete or not. It all boils down to the question of whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So, what you can say is that all computer systems are exactly as powerful as finite state machines and no more. And that brings me to the question: how useful is the term Turing complete at all? And back to neural nets: for any practical implementation of a neural net (including our own brain), it will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, it is not Turing complete. So does the question of whether neural nets are Turing complete make sense at all? The question of whether they are as powerful as finite state machines was answered much earlier (1954 by Minsky; the answer, of course: yes) and also seems easier to answer. I.e., at least in theory, that was already proof that they are as powerful as any computer.

    Read the article

  • WCF web service with Neural Network

    - by Gary Frank
    I am developing a web service that performs object recognition. It will be available for testing as soon as enough code has been developed, and then officially when it is finished. It is based on a radically new type of artificial neural network that I designed. Its goal is to recognize any type of object within an image. Besides the WCF web service, the project will also create a website to test and demonstrate the web service. Here is a link with more information. http://www.indiegogo.com/VOR

    Read the article

  • Resilient backpropagation neural network - question about gradient

    - by Raf
    Hello guys, first I want to say that I'm really new to neural networks and I don't understand them very well ;) I've made my first C# implementation of the backpropagation neural network. I've tested it using XOR and it looks like it works. Now I would like to change my implementation to use resilient backpropagation (Rprop - http://en.wikipedia.org/wiki/Rprop). The definition says: "Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each weight." Could somebody tell me what the partial derivative over all patterns is? And how should I compute this partial derivative for a neuron in a hidden layer? Thanks a lot. UPDATE: My implementation is based on this Java code: www_.dia.fi.upm.es/~jamartin/downloads/bpnn.java. My backPropagate method looks like this:

        public double backPropagate(double[] targets)
        {
            double error, change;

            // calculate error terms for output
            double[] output_deltas = new double[outputsNumber];
            for (int k = 0; k < outputsNumber; k++)
            {
                error = targets[k] - activationsOutputs[k];
                output_deltas[k] = Dsigmoid(activationsOutputs[k]) * error;
            }

            // calculate error terms for hidden
            double[] hidden_deltas = new double[hiddenNumber];
            for (int j = 0; j < hiddenNumber; j++)
            {
                error = 0.0;
                for (int k = 0; k < outputsNumber; k++)
                {
                    error = error + output_deltas[k] * weightsOutputs[j, k];
                }
                hidden_deltas[j] = Dsigmoid(activationsHidden[j]) * error;
            }

            // update output weights
            for (int j = 0; j < hiddenNumber; j++)
            {
                for (int k = 0; k < outputsNumber; k++)
                {
                    change = output_deltas[k] * activationsHidden[j];
                    weightsOutputs[j, k] = weightsOutputs[j, k] + learningRate * change
                        + momentumFactor * lastChangeWeightsForMomentumOutpus[j, k];
                    lastChangeWeightsForMomentumOutpus[j, k] = change;
                }
            }

            // update input weights
            for (int i = 0; i < inputsNumber; i++)
            {
                for (int j = 0; j < hiddenNumber; j++)
                {
                    change = hidden_deltas[j] * activationsInputs[i];
                    weightsInputs[i, j] = weightsInputs[i, j] + learningRate * change
                        + momentumFactor * lastChangeWeightsForMomentumInputs[i, j];
                    lastChangeWeightsForMomentumInputs[i, j] = change;
                }
            }

            // calculate error
            error = 0.0;
            for (int k = 0; k < outputsNumber; k++)
            {
                error = error + 0.5 * (targets[k] - activationsOutputs[k]) * (targets[k] - activationsOutputs[k]);
            }
            return error;
        }

    So can I use the change = hidden_deltas[j] * activationsInputs[i] variable as the gradient (partial derivative) for checking the sign?
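    For what it's worth: the "partial derivative over all patterns" is exactly that change term accumulated across the whole training set before any weight is moved. Note that because the deltas above are built from (target - output) * Dsigmoid, change is the negative of dE/dw, so either flip its sign or add where the sketch below subtracts. A hedged sketch of the per-weight Rprop update; field names and initial values are illustrative, not from any library:

        using System;

        class RpropTrainer
        {
            const double EtaPlus = 1.2, EtaMinus = 0.5, StepMax = 50.0, StepMin = 1e-6;

            double[,] weights;
            double[,] stepSize;  // typically initialized to ~0.1 per weight
            double[,] prevGrad;  // previous epoch's summed gradient, starts at 0

            // grad = dE/dw for one weight, summed over ALL training patterns.
            void UpdateWeight(int i, int j, double grad)
            {
                double prod = prevGrad[i, j] * grad;
                if (prod > 0)       // same sign as last epoch: grow the step
                {
                    stepSize[i, j] = Math.Min(stepSize[i, j] * EtaPlus, StepMax);
                    weights[i, j] -= Math.Sign(grad) * stepSize[i, j];
                    prevGrad[i, j] = grad;
                }
                else if (prod < 0)  // sign flipped: we overshot, shrink and skip the move
                {
                    stepSize[i, j] = Math.Max(stepSize[i, j] * EtaMinus, StepMin);
                    prevGrad[i, j] = 0.0;
                }
                else                // first epoch, or right after a sign flip
                {
                    weights[i, j] -= Math.Sign(grad) * stepSize[i, j];
                    prevGrad[i, j] = grad;
                }
            }
        }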

    Read the article

  • Neural Network problems

    - by Betamoo
    I am using an external library for Artificial Neural Networks in my project. While testing the ANN, it gave me output of all NaN (not a number in C#). The ANN has 8 input, 5 hidden, 5 hidden, and 2 output neurons, all activation layers are of linear type, and it uses back-propagation with learning rate 0.65. I used one test case for training, input { -2.2, 1.3, 0.4, 0.5, 0.1, 5, 3, -5 } with target { -0.3, 0.2 }, for 1000 epochs. And I tested it on { 0.2, -0.2, 5.3, 0.4, 0.5, 0, 35, 0.0 }, which gave { NaN, NaN }. Note: this is one example of many that produce the same result... I am trying to discover whether it is a bug in the library or an illogical configuration. The reasons I could think of for an illogical configuration:

    1. All layers should not be linear
    2. You cannot have descending-size layers, i.e. 8-5-5-2 is bad
    3. Only one test case?
    4. Values must be in range [0,1] or [-1,1]

    Could any of the above reasons be the cause of the error, or are there some constraints/rules that I do not know in ANN design? Note: I am a newbie in ANN.
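    For context, reason 1 plus a learning rate of 0.65 is enough on its own to explain NaN: with purely linear layers nothing bounds the activations, so weights and outputs can grow past double.MaxValue to Infinity, and Infinity arithmetic then yields NaN. The usual cure is scaling inputs into a small range and using a bounded activation in the hidden layers; a sketch:

        using System;

        static class Stabilize
        {
            // Scale a raw input into [-1, 1] given its expected range.
            public static double ScaleInput(double x, double min, double max)
            {
                return 2.0 * (x - min) / (max - min) - 1.0;
            }

            // A bounded (sigmoid) activation keeps hidden-layer outputs finite
            // no matter how large the weighted sums grow.
            public static double Sigmoid(double x)
            {
                return 1.0 / (1.0 + Math.Exp(-x));
            }
        }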

    Read the article

  • Neural Network with softmax activation

    - by Cambium
    This is more or less a research project for a course, and my understanding of NNs is very/fairly limited, so please be patient :) ============== I am currently in the process of building a neural network that attempts to examine an input dataset and output the probability/likelihood of each classification (there are 5 different classifications). Naturally, the sum of all output nodes should add up to 1. Currently, I have two layers, and I set the hidden layer to contain 10 nodes. I came up with two different types of implementations:

    1) Logistic sigmoid for hidden layer activation, softmax for output activation
    2) Softmax for both hidden layer and output activation

    I am using gradient descent to find local minima in order to adjust the hidden nodes' weights and the output nodes' weights. I am fairly certain I have this correct for sigmoid. I am less certain with softmax (or whether I can use gradient descent at all); after a bit of researching, I couldn't find the answer and decided to compute the derivative myself, and obtained softmax'(x) = softmax(x) - softmax(x)^2 (this returns a column vector of size n). I have also looked into the MATLAB NN toolkit; the derivative of softmax provided by the toolkit returned a square matrix of size n×n, where the diagonal coincides with the softmax'(x) that I calculated by hand, and I am not sure how to interpret the output matrix. I ran each implementation with a learning rate of 0.001 and 1000 iterations of backpropagation. However, my NN returns 0.2 (an even distribution) for all five output nodes, for any subset of the input dataset. My conclusions:

    o I am fairly certain that my gradient descent is incorrectly done, but I have no idea how to fix this.
    o Perhaps I am not using enough hidden nodes
    o Perhaps I should increase the number of layers

    Any help would be greatly appreciated! The dataset I am working with can be found here (processed Cleveland): http://archive.ics.uci.edu/ml/datasets/Heart+Disease
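    On the derivative question: the n×n matrix from the MATLAB toolkit is the full softmax Jacobian, diag(s) - s*s^T; the hand-computed column vector softmax(x) - softmax(x)^2 is just its diagonal, which is why using it alone misbehaves. With a cross-entropy loss the Jacobian product collapses, and the output-layer error signal is simply (output - target). A sketch of both pieces:

        using System;

        static class SoftmaxLayer
        {
            // Numerically stable softmax over the output layer's raw weighted sums.
            public static double[] Compute(double[] z)
            {
                double max = double.MinValue;
                for (int i = 0; i < z.Length; i++)
                    if (z[i] > max) max = z[i];

                var s = new double[z.Length];
                double sum = 0.0;
                for (int i = 0; i < z.Length; i++)
                {
                    s[i] = Math.Exp(z[i] - max); // subtract max to avoid overflow
                    sum += s[i];
                }
                for (int i = 0; i < z.Length; i++)
                    s[i] /= sum;
                return s;
            }

            // With cross-entropy loss, the error signal at the output layer
            // reduces to (prediction - target); no explicit Jacobian needed.
            public static double[] OutputDelta(double[] softmaxOut, double[] target)
            {
                var delta = new double[softmaxOut.Length];
                for (int i = 0; i < softmaxOut.Length; i++)
                    delta[i] = softmaxOut[i] - target[i];
                return delta;
            }
        }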

    Read the article

  • Artificial neural networks height-weight problem

    - by hammid1981
    I plan to use neurodotnet for my PhD thesis, but before that I just want to build some small solutions to get used to the DLL structure. The first problem that I want to model using backward propagation is the height-weight ratio. I have some height and weight data; I want to train my NN so that if I put in some weight, then I should get the correct height as output. I have 1 input, 1 hidden, and 1 output layer. Now here is the first of many things I can't get around :) 1. My height data is in the form 1.422, 1.5422, ... etc., and the corresponding weight data is 90, 95, but the NN takes its input as 0/1 or -1/1 and gives the output in the same range. How do I address this problem?
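    For the record, the standard workaround is to rescale each variable into the network's working range before training and map predictions back afterwards. A sketch with illustrative ranges (the min/max constants below are assumptions; derive them from your own data):

        using System;

        static class HeightWeightScaling
        {
            // Assumed data ranges, not measured ones.
            const double WeightMin = 40.0, WeightMax = 150.0; // kg
            const double HeightMin = 1.2, HeightMax = 2.1;    // m

            public static double WeightToUnit(double kg)
            {
                return (kg - WeightMin) / (WeightMax - WeightMin); // 90 kg -> ~0.45
            }

            public static double HeightToUnit(double m)
            {
                return (m - HeightMin) / (HeightMax - HeightMin);
            }

            public static double UnitToHeight(double unit)
            {
                return HeightMin + unit * (HeightMax - HeightMin); // invert on the way out
            }
        }

    Training pairs then become (WeightToUnit(90), HeightToUnit(1.422)), and a prediction is read back with UnitToHeight; for a network working in [-1, 1], scale and shift the same way.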

    Read the article

  • Creating Java Neural Networks

    - by Tori Wieldt
    A new article on OTN/Java, titled “Neural Networks on the NetBeans Platform,” by Zoran Sevarac, reports on Neuroph Studio, an open source Java neural network development environment built on top of the NetBeans Platform. This article shows how to create Java neural networks for classification. From the article: “Neural networks are artificial intelligence (machine learning technology) suitable for ill-defined problems, such as recognition, prediction, classification, and control. This article shows how to create some Java neural networks for classification. Note that Neuroph Studio also has support for image recognition, text character recognition, and handwritten letter recognition...” “Neuroph Studio is a Java neural network development environment built on top of the NetBeans Platform and Neuroph Framework. It is an IDE-like environment customized for neural network development. Neuroph Studio is a GUI that sits on top of Neuroph Framework. Neuroph Framework is a full-featured Java framework that provides classes for building neural networks…” The author, Zoran Sevarac, is a teaching assistant at Belgrade University, Department for Software Engineering, and a researcher at the Laboratory for Artificial Intelligence at Belgrade University. He is also a member of the GOAI Research Network. Through his research, he has been working on the development of a Java neural network framework, which was released as the open source project Neuroph. Brainy stuff. Read the article here.

    Read the article

  • Network connection delay after installing indicator-network

    - by Adrian
    Ok, so here's the thing: I installed wingpanel in Ubuntu 10.10 and removed the gnome-panels (yes, both). The wingpanel itself has no NETWORK indicator, so I googled it, and in some forums someone wrote that you have to install "indicator-network". I did it, and it gave the wingpanel its network indicator, BUT now every time I turn on my computer, the connection takes like 2 minutes or more to connect, when before installing this thing it connected immediately. How can I solve this? Any help?

    Read the article

  • nm-applet gone?

    - by welp
    nm-applet seems to have disappeared from my system. I am running 12.10. Here's what I get when I check my package manager logs:

        ? ~ grep network-manager /var/log/dpkg.log
        2012-10-06 10:37:08 upgrade network-manager-gnome:amd64 0.9.6.2-0ubuntu5 0.9.6.2-0ubuntu6
        2012-10-06 10:37:08 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:09 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5
        2012-10-06 10:37:09 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:37:09 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 configure network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-06 10:39:50 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 remove network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status config-files network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-28 22:27:23 status config-files network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 install network-manager-gnome:amd64 0.9.6.2-0ubuntu6 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:03 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:06 configure network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:06 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:07 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:07 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        2012-10-31 19:58:07 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6
        ? ~

    Unfortunately, I cannot find network-manager-applet package at all:

        ? ~ apt-cache search network-manager-applet
        ? ~

    Here are the contents of /etc/apt/sources.list:

        ? ~ cat /etc/apt/sources.list
        # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/
        # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/
        # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted
        # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
        # newer versions of the distribution.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal main restricted
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal main restricted
        ## Major bug fix updates produced after the final release of the
        ## distribution.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team. Also, please note that software in universe WILL NOT receive any
        ## review or updates from the Ubuntu security team.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal universe
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal universe
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates universe
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates universe
        ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
        ## team, and may not be under a free licence. Please satisfy yourself as to
        ## your rights to use the software. Also, please note that software in
        ## multiverse WILL NOT receive any review or updates from the Ubuntu
        ## security team.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal multiverse
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal multiverse
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates multiverse
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates multiverse
        ## N.B. software from this repository may not have been tested as
        ## extensively as that contained in the main release, although it includes
        ## newer versions of some applications which may provide useful features.
        ## Also, please note that software in backports WILL NOT receive any review
        ## or updates from the Ubuntu security team.
        deb http://gb.archive.ubuntu.com/ubuntu/ quantal-backports main restricted universe multiverse
        deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-backports main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu quantal-security main restricted
        deb-src http://security.ubuntu.com/ubuntu quantal-security main restricted
        deb http://security.ubuntu.com/ubuntu quantal-security universe
        deb-src http://security.ubuntu.com/ubuntu quantal-security universe
        deb http://security.ubuntu.com/ubuntu quantal-security multiverse
        deb-src http://security.ubuntu.com/ubuntu quantal-security multiverse
        ## Uncomment the following two lines to add software from Canonical's
        ## 'partner' repository.
        ## This software is not part of Ubuntu, but is offered by Canonical and the
        ## respective vendors as a service to Ubuntu users.
        # deb http://archive.canonical.com/ubuntu precise partner
        # deb-src http://archive.canonical.com/ubuntu precise partner
        ## This software is not part of Ubuntu, but is offered by third-party
        ## developers who want to ship their latest software.
        deb http://extras.ubuntu.com/ubuntu quantal main
        deb-src http://extras.ubuntu.com/ubuntu quantal main
        ? ~

    Right now, I can't think of anything else. Happy to provide more info upon request.

    Read the article

  • "Hostile" network in the company - please comment on a security setup

    - by TomTom
    I have a little specific problem here that I want (need) to solve in a satisfactory way. My company has multiple (IPv4) networks that are controlled by our router sitting in the middle. Typical smaller-shop setup. There is now one additional network that has an IP range OUTSIDE of our control, connected to the Internet with another router OUTSIDE of our control. Call it a project network that is part of another company's network, combined via a VPN they set up. This means: they control the router that is used for this network, and they can reconfigure things so that they can access the machines in this network. The network is physically split on our end through some VLAN-capable switches, as it covers three locations. At one end there is the router the other company controls. I need/want to give the machines used in this network access to my company network. In fact, it may be good to make them part of my Active Directory domain. The people working on those machines are part of my company. BUT - I need to do so without compromising the security of my company network from outside influence, so any sort of integration using the externally controlled router is out. So, my idea is this: we accept that the IPv4 address space and network topology in this network are not under our control, and we seek alternatives to integrate those machines into our company network. The 2 concepts I came up with are: (1) Use some sort of VPN - have the machines log into a VPN. Thanks to them using modern Windows, this could be transparent DirectAccess. This essentially treats the other IP space no differently than any restaurant network a company laptop lands in. (2) Alternatively, establish IPv6 routing to this ethernet segment. But - and this is the trick - block all IPv6 packets in the switch before they hit the third-party-controlled router, so that even IF they turn on IPv6 on that thing (not used now, but they could do it), not a single packet would get through. The switch can nicely do that by pulling all IPv6 traffic coming to that port into a separate VLAN (based on ethernet protocol type). Does anyone see a problem with using the switch to isolate the outer router from IPv6? Any security hole? It is sad we have to treat this network as hostile - it would be a lot easier otherwise - but the support personnel there is of "known dubious quality" and the legal side is clear - we cannot fulfill our obligations when we integrate them into our company while they are under a jurisdiction we don't have a say in.

    Read the article

  • How to send raw data over a network?

    - by youllknow
    Hi everyone! I have some data stored in a byte array. The data contains an IPv4 packet (which contains a UDP packet). I want to send this array raw over the network using C# (preferred) or C++. I don't want to use C#'s UdpClient, for example. Does anyone know how to perform this? Sorry for my bad English, and thanks for your help in advance!!!
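    A hedged sketch of sending a prebuilt IPv4+UDP buffer from C# with a raw socket follows. Note this needs administrator rights, and desktop Windows versions since XP SP2 block some categories of raw sends, so behavior varies by platform; the destination address is a placeholder:

        using System;
        using System.Net;
        using System.Net.Sockets;

        class RawSend
        {
            static void Main()
            {
                byte[] packet = LoadPacket(); // your byte array: IPv4 header + UDP datagram

                var socket = new Socket(AddressFamily.InterNetwork,
                                        SocketType.Raw, ProtocolType.Udp);
                // Tell the stack the buffer already includes the IP header.
                socket.SetSocketOption(SocketOptionLevel.IP,
                                       SocketOptionName.HeaderIncluded, true);

                // An endpoint is still required for routing; the header in the
                // buffer is what actually goes on the wire.
                socket.SendTo(packet, new IPEndPoint(IPAddress.Parse("192.0.2.1"), 0));
            }

            static byte[] LoadPacket()
            {
                return new byte[0]; // placeholder: supply your captured/constructed bytes
            }
        }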

    Read the article

  • Boosting my GA with Neural Networks and/or Reinforcement Learning

    - by AlexT
    As I have mentioned in previous questions, I am writing a maze-solving application to help me learn about more theoretical CS subjects. After some trouble, I've got a Genetic Algorithm working that can evolve a set of rules (handled by boolean values) in order to find a good solution through a maze. That being said, the GA alone is okay, but I'd like to beef it up with a Neural Network, even though I have no real working knowledge of Neural Networks (no formal theoretical CS education). After doing a bit of reading on the subject, I found that a Neural Network could be used to train a genome in order to improve results. Let's say I have a genome (group of genes), such as 1 0 0 1 0 1 0 1 0 1 1 1 0 0... How could I use a Neural Network (I'm assuming an MLP?) to train and improve my genome? In addition to this, as I know nothing about Neural Networks, I've been looking into implementing some form of Reinforcement Learning using my maze matrix (2-dimensional array), although I'm a bit stuck on what the following algorithm wants from me (from http://people.revoledu.com/kardi/tutorial/ReinforcementLearning/Q-Learning-Algorithm.htm):

    1. Set parameter gamma (the discount factor), and environment reward matrix R
    2. Initialize matrix Q as zero matrix
    3. For each episode:
       * Select random initial state
       * Do while not reach goal state
         o Select one among all possible actions for the current state
         o Using this possible action, consider going to the next state
         o Get maximum Q value of this next state based on all possible actions
         o Compute Q(state, action) = R(state, action) + gamma * max[Q(next state, all actions)]
         o Set the next state as the current state
       End Do
       End For

    The big problem for me is implementing a reward matrix R, understanding what a Q matrix exactly is, and getting the Q value. I use a multi-dimensional array for my maze and enum states for every move. How would this be used in a Q-learning algorithm? If someone could help out by explaining what I would need to do to implement this (preferably in Java, although C# would be nice too), possibly with some source code examples, it'd be appreciated.
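    On the Q-learning half: R is just a table of immediate rewards for taking action a in state s (for a maze, something like a large reward for reaching the goal, 0 for an ordinary legal move, and a negative sentinel for moves blocked by a wall), and Q is the same-shaped table the algorithm gradually fills in. A C# sketch following that tutorial's convention that action a means "move to state a"; all names and the reward scheme are illustrative:

        using System;

        class QLearner
        {
            // R[s, a]: immediate reward for action a in state s; negative marks
            // an illegal move in this sketch. Q starts as the zero matrix.
            readonly double[,] R;
            readonly double[,] Q;
            readonly double gamma = 0.8; // discount factor
            readonly Random rng = new Random();

            public QLearner(double[,] rewards)
            {
                R = rewards;
                Q = new double[rewards.GetLength(0), rewards.GetLength(1)];
            }

            public void Episode(int goalState)
            {
                int states = R.GetLength(0), actions = R.GetLength(1);
                int s = rng.Next(states); // random initial state
                while (s != goalState)
                {
                    int a;
                    do { a = rng.Next(actions); } while (R[s, a] < 0); // random legal action

                    int next = a; // tutorial convention: action a moves to state a

                    double best = 0.0; // max over Q(next, *)
                    for (int a2 = 0; a2 < actions; a2++)
                        if (Q[next, a2] > best) best = Q[next, a2];

                    Q[s, a] = R[s, a] + gamma * best; // the "Compute" step above
                    s = next;
                }
            }
        }

    After enough episodes, following the largest Q value from each state traces a good path to the goal.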

    Read the article

  • Network / Internet diagnostic tool to locate an error?

    - by Jesper
    Hi, I’m facing some difficulties with the Internet/network at my work, and I have trouble locating the precise error and when and how it occurs. The problem is that the client machines in the house are sporadically disconnected from the Internet. I’m somewhat new here, so I don’t have that much insight into the network, and apparently neither had my predecessor. What I am requesting, and hoping you guys know about, is whether there exists some kind of network monitoring tool I can install and run that will periodically check the network, the Internet connection, etc., and record to logs. Then, if a problem suddenly arises at some time of day in some part of the network or with the Internet connection, I can check it the next day. I’ve just downloaded and installed the Microsoft Network Monitor 3.3 application, and hopefully it can give me some answers on where the instability is located, but I would still like a tool that runs different checks and tests at some interval. Does anyone know about such a program, or another kind of performance/diagnostic tool or method I can use? Sincerely, Jesper

    Read the article

  • Make Network Manager use bridge for PPPoE instead of only working on ethernet?

    - by Azendale
    My ISP uses PPPoE on their DSL connections. I use Network Manager to connect to this using a bridged modem connected to eth0. Often I want to test networking things (I work for my ISP), so I set myself up a KVM machine with a tap interface. I can then connect these interfaces to virtual 'switches' by adding them to bridges. Sometimes I want to test cases where the PPPoE is connected more than once. For this, I would like to be able to add eth0 to my 'switch' (a bridge) so the VMs can have a 'bridged modem' connection to the Internet. But I would like to still be able to run the PPPoE for my computer at the same time, which means that I need to get Network Manager to run PPPoE over the bridge (or eth0). The problem is that it considers eth0 (and the bridge) 'not managed' by Network Manager, so it refuses to use it. So, how can I have Network Manager dial PPPoE over a bridge?

    Read the article

  • Bridging: Losing WLAN network connection with 4addr on option - Why?

    - by WitchCraft
    Question: For use with my Xen VM, I need to create a virtual network interface (vif) that is bridged to wlan0. If in /etc/network/interfaces I add

        auto xenbr0
        iface xenbr0 inet dhcp

    and then later do

        brctl addif xenbr0 wlan0

    I get this error message:

        can't add wlan0 to bridge xenbr0: Operation not supported

    I found out that Linux won't let you bridge a wireless interface in managed mode at all unless you enable the 4addr option (I needed to recompile iw):

        iw dev wlan0 set 4addr on

    Afterwards brctl addif xenbr0 wlan0 works, and brctl show shows xenbr0 as bridged to wlan0. Unfortunately, as soon as I execute iw dev wlan0 set 4addr on, my entire network connection is gone (no connection). As soon as I then execute iw dev wlan0 set 4addr off, I reconnect and it works again. If I re-execute 4addr on, it breaks again; if I execute 4addr off, it works again. Unfortunately, I can't just turn 4addr on, activate the bridge, and then turn it back off (error: device not ready). Does anybody know why I lose my connection?

    Read the article
