Search Results

Search found 19446 results on 778 pages for 'naive machine learner'.

Page 66/778 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • Looking for a virtual network adapter (virtual interface controller)

    - by Dawn
    I need software that simulates a network adapter. The virtual adapters need to be able to communicate with each other. For example, if I have two virtual adapters on the same computer, interface1 (1.1.1.1) and interface2 (1.1.1.2), I want packets sent through interface1 to be received on interface2. Installing VMware Server is an option, but I would prefer something more specific. Does anyone have ideas?

    Read the article

  • Windows Mobile Development on MacBook Pro?

    - by Ted Nichols
    I am a frequent Windows Mobile application developer in need of a new development laptop. I am considering a MacBook or MacBook Pro running either VMware Fusion or Parallels Desktop. This will give me the option to port my applications to the iPhone depending on what MS does with WM 6.5 and 7. Has anybody tried doing Windows Mobile development using Microsoft Windows Mobile Device Center (or ActiveSync) and VS2008 on a MacBook Pro using one of these virtual machines? Does the device emulator work properly? What about debugging a Windows Mobile device over a USB cable? In general, do most USB drivers (non-HID) designed for Windows work under these virtual machines? Thanks.

    Read the article

  • Prolog: Not executing code as expected.

    - by Louis
    Basically I am attempting to have an AI agent navigate a world based on given percepts. My issue is handling how the agent moves. I have created find_action/4 such that we pass in the percepts, the action, the current cell, and the direction the agent is facing. The entire code is at http://wesnoth.pastebin.com/kdNvzZ6Y and my issue is mainly with lines 102 to 106. In its current form the code does not work and find_action is skipped even when the agent is in fact facing right (I have verified this). The broken code is as follows: % If we are headed right, take a left turn find_action([_, _, _, _, _], Action, _, right) :- retractall(facing(_)), assert(facing(up)), Action = turnleft . However, after some experimentation I have concluded that the following works: % If we are headed right, take a left turn find_action([_, _, _, _, _], Action, _, _) :- facing(right), retractall(facing(_)), assert(facing(up)), Action = turnleft . I am not entirely sure why this is. I have attempted to create several near-identical find_action clauses as well, each checking a different direction using the facing(_) format, but swipl does not like this and throws an error. Any help would be greatly appreciated.

    Read the article

  • Cocoa app not launching on build & go but launching manually

    - by Matt S.
    I have quite an interesting problem. Yesterday my program worked perfectly, but today I get EXC_BAD_ACCESS when I hit Build & Go, yet if I launch the app from the build folder it launches perfectly and there seems to be nothing wrong. The last bunch of lines from the debugger are: #0 0xffff07c2 in __memcpy #1 0x969f7961 in CFStringGetBytes #2 0x96a491b9 in CFStringCreateMutableCopy #3 0x991270cc in -[NSCFString mutableCopyWithZone:] #4 0x96a5572a in -[NSObject(NSObject) mutableCopy] #5 0x9913e6c7 in -[NSString stringByReplacingOccurrencesOfString:withString:options:range:] #6 0x9913e62f in -[NSString stringByReplacingOccurrencesOfString:withString:] #7 0x99181ad0 in -[NSScanner(NSDecimalNumberScanning) scanDecimal:] #8 0x991ce038 in -[NSDecimalNumberPlaceholder initWithString:locale:] #9 0x991cde75 in -[NSDecimalNumberPlaceholder initWithString:] #10 0x991ce44a in +[NSDecimalNumber decimalNumberWithString:] Why did my app work perfectly yesterday but not today?

    Read the article

  • Rails: How to test state_machine?

    - by petRUShka
    Please help me, I'm confused. I know how to write the state-driven behavior of a model, but I don't know what I should write in the specs... My model.rb file looks like this: class Ratification < ActiveRecord::Base belongs_to :user attr_protected :status_events state_machine :status, :initial => :boss do state :boss state :owner state :declarant state :done event :approve do transition :boss => :owner, :owner => :done end event :divert do transition [:boss, :owner] => :declarant end event :repeat do transition :declarant => :boss end end end I use the state_machine gem. Please point me in the right direction.

    Read the article

  • Java text classification problem

    - by yox
    Hello, I have a set of Book objects; the Book class is defined as follows: class Book { String title; ArrayList<Tag> taglist; } where title is the title of the book, for example "Javascript for dummies", and taglist is a list of tags, in our example: "javascript", "jquery", "web dev", ... As I said, I have a set of books about different things: IT, BIOLOGY, HISTORY, ... Each book has a title and a set of tags describing it. I have to automatically classify those books into separate sets by topic, for example: IT BOOKS: Java for dummies, Javascript for dummies, Learn flash in 30 days, C++ programming; HISTORY BOOKS: World wars, America in 1960, Martin Luther King's life; BIOLOGY BOOKS: ... Do you guys know a classification algorithm/method to apply to this kind of problem? One solution is to use an external API to determine the category of the text, but the problem there is that the books are in different languages: French, Spanish, English, ...
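
    A simple multinomial naive Bayes classifier over the tag lists is one common way to approach this, assuming a small hand-labeled seed set is available (the books, tags, and topic names below are hypothetical stand-ins; with no labels at all, a clustering step would be needed instead). A minimal pure-Python sketch:

      from collections import Counter, defaultdict
      import math

      # Hypothetical labeled seed set: (tag list, topic) pairs.
      train = [
          (["java", "programming", "jvm"], "IT"),
          (["javascript", "jquery", "web-dev"], "IT"),
          (["world-war", "1940s", "europe"], "HISTORY"),
          (["civil-rights", "biography", "1960s"], "HISTORY"),
          (["genetics", "cells", "evolution"], "BIOLOGY"),
      ]

      tag_counts = defaultdict(Counter)   # topic -> tag frequencies
      topic_counts = Counter()            # topic -> number of seed books
      for tags, topic in train:
          topic_counts[topic] += 1
          tag_counts[topic].update(tags)
      vocab = {t for tags, _ in train for t in tags}

      def classify(tags):
          """Return the most probable topic for a tag list (add-one smoothing)."""
          best_topic, best_score = None, float("-inf")
          for topic, n_books in topic_counts.items():
              score = math.log(n_books / len(train))                 # log prior
              total = sum(tag_counts[topic].values())
              for tag in tags:
                  score += math.log((tag_counts[topic][tag] + 1) / (total + len(vocab)))
              if score > best_score:
                  best_topic, best_score = topic, score
          return best_topic

      print(classify(["javascript", "web-dev"]))   # -> IT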

    Read the article

  • Tag/Keyword based recommendation

    - by Hellnar
    Hello, I am wondering what algorithm would be a clever choice for a tag-driven e-commerce environment: Each item has several tags. E.g. item name: "Metallica - Black Album CD", tags: "metallica", "black-album", "rock", "music". Each user has several tags and friends (other users) bound to them. E.g. username: "testguy", interests: "python", "rock", "metal", "computer-science", friends: "testguy2", "testguy3". I need to generate recommendations for such users by checking their interest tags and generating recommendations in a sophisticated way. Ideas: A hybrid recommendation algorithm can be used, since each user has friends (a mixture of collaborative and content-based recommendations). Maybe, using user tags, similar users (peers) can be found to generate recommendations. Maybe directly matching tags between users and items. Any suggestion is welcome. Any Python-based library is also welcome, as I will be building this experimental engine in Python.
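
    One minimal hybrid to start from is tag overlap (Jaccard similarity) between a user's interest tags and an item's tags, with a small boost for items the user's friends already liked. The sketch below assumes hypothetical data structures and field names, not any particular library:

      def jaccard(a, b):
          """Jaccard similarity between two tag sets."""
          a, b = set(a), set(b)
          return len(a & b) / len(a | b) if a | b else 0.0

      def recommend(user, items, users, friend_weight=0.3, top_n=5):
          """Score items by tag overlap with the user, plus a boost for items
          liked by the user's friends (content-based + collaborative signal)."""
          friends_liked = set()
          for friend in user["friends"]:
              friends_liked.update(users[friend].get("liked", []))
          scored = []
          for item in items:
              score = jaccard(user["interests"], item["tags"])
              if item["name"] in friends_liked:
                  score += friend_weight
              scored.append((score, item["name"]))
          return sorted(scored, reverse=True)[:top_n]

      # Hypothetical data shaped like the question's example.
      users = {
          "testguy":  {"interests": ["python", "rock", "metal", "computer-science"],
                       "friends": ["testguy2"], "liked": []},
          "testguy2": {"interests": ["rock", "metal"], "friends": [],
                       "liked": ["Metallica - Black Album CD"]},
      }
      items = [{"name": "Metallica - Black Album CD",
                "tags": ["metallica", "black-album", "rock", "music"]}]

      print(recommend(users["testguy"], items, users))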

    Read the article

  • Are there programs that iteratively write new programs?

    - by chris
    For about a year I have been thinking about writing a program that writes programs. This would primarily be a playful exercise that might teach me some new concepts. My inspiration came from negentropy and the ability of order to emerge from chaos, and new chaos to arise out of order, in infinite succession. To be more specific, the program would start by writing a short random string. If the string compiles, the program will log it for later comparison. If the string does not compile, the program will try to rewrite it until it does compile. As more strings (mini 'useless' programs) are logged, they can be parsed for similarities and used to generate a grammar. This grammar can then be drawn on to write more strings that have a higher probability of compiling than purely random strings. This is obviously more than a little silly, but I thought it would be fun to try and grow a program like this. And as a byproduct I get a bunch of unique programs that I can visualize and call art. I'll probably write this in Ruby due to its simple syntax and dynamic compilation, and then I will visualize it in Processing using ruby-processing. What I would like to know is: Is there a name for this type of programming? What currently exists in this field? Who are the primary contributors? BONUS! - In what ways can I procedurally assign value to output programs beyond compiles (y/n)? I may want to extend the functionality of this program to generate a program based on parameters, but I want the program to define those parameters by running the programs that compile and assigning meaning to their output. This question is probably more involved than reasonable for a bonus, but if you can think of a simple way to get something like this done in less than 23 lines or one hyperlink, please toss it into your response. I know that this is not quite metaprogramming, and from the little I know of AI and generative algorithms, they are usually more goal-oriented than what I have in mind. What would be optimal is a program that continually rewrites and improves itself so I don't have to ^_^
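
    Although the question targets Ruby, the generate-and-check step it describes is easy to sketch in Python, with the built-in compile() standing in for dynamic compilation (the token set is an arbitrary assumption, and this covers only the random-string stage, before any grammar induction):

      import random

      TOKENS = ["x", "1", "2", "+", "*", "-", "(", ")", " "]

      def random_source(length=8):
          """Produce a random token string that may or may not be valid Python."""
          return "".join(random.choice(TOKENS) for _ in range(length))

      def harvest(n_attempts=10_000):
          """Keep only the strings that compile as Python expressions."""
          kept = []
          for _ in range(n_attempts):
              src = random_source()
              try:
                  compile(src, "<generated>", "eval")
                  kept.append(src)
              except SyntaxError:
                  pass
          return kept

      programs = harvest()
      print(len(programs), "compiled, e.g.", programs[:5])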

    Read the article

  • Rails: Multi-Step New User Signup Form (FSM?)

    - by neezer
    I've read the "Create Multi-Step Wizard" in Advanced Rails Recipes. I've also read and re-read the documentation for the updated FSM I'm using called Workflow, and looked here and here. The Advanced Rails Recipe focuses on records (quizzes) that already exist, and doesn't cover creating new ones. The Workflow docs don't cover any code for controllers or views, so I've no idea what to do with all this model magic, and the last two links barely touch on implementation either. From the aforementioned resources, I have a good understanding of what a FSM in Rails is and how to play with it in the console or IRB, but I've got very little direction or understanding how to implement one into my Rails app. What I would like is this: a simple, multi-step user signup process. Step 1: User enters in their critical details (with validations). Step 2: User enters in their search criteria, for their profile (with validations). Step 3: User agrees to the Terms of Service (with validations). Step 4: User is greeted by a confirmation page, including a link that takes them to their newly created account. I'd also like full navigation between the steps and full capture (saves to the database) with each transition. Can someone please give me a clear implementation of something similar to this? I would LOVE an example app that includes a multi-step signup process where I can look at the code (FULL source code--models AND controllers and views) under the hood, but I've been unable to find anything like that. Any guidance would be appreciated! EDIT: Please help make this a Railscast! Ryan B. (a.k.a. Superman), if you're reading this, we need you! http://feedback.railscasts.com/forums/77-episode-suggestions/suggestions/35553-multi-step-forms-and-wizards

    Read the article

  • WITH_OBJECT_HEADERS enabled GC from Dalvik?

    - by Wonil
    Hello, as far as I know the Dalvik VM does not support generational GC by default. However, I found a "WITH_OBJECT_HEADERS" compilation flag, which could be related to generational GC, in the HeapInternal.h file: typedef struct DvmHeapChunk { #if WITH_OBJECT_HEADERS u4 header; const Object *parent; const Object *parentOld; const Object *markFinger; const Object *markFingerOld; u2 birthGeneration; u2 markCount; u2 scanCount; u2 oldMarkGeneration; u2 markGeneration; u2 oldScanGeneration; u2 scanGeneration; #endif Has anyone tried to build Dalvik with this option enabled? Do you know anything about generational GC support in Dalvik? Regards, Wonil.

    Read the article

  • Ngram IDF smoothing

    - by adi92
    I am trying to use IDF scores to find interesting phrases in my pretty huge corpus of documents. I basically need something like Amazon's Statistically Improbable Phrases, i.e. phrases that distinguish a document from all the others. The problem I am running into is that some (3,4)-grams in my data which have super-high IDF actually consist of component unigrams and bigrams which have really low IDF. For example, "you've never tried" has a very high IDF, while each of the component unigrams has very low IDF. I need to come up with a function that can take in the document frequencies of an n-gram and all its component (n-k)-grams and return a more meaningful measure of how much this phrase distinguishes the parent document from the rest. If I were dealing with probabilities, I would try interpolation or backoff models. I am not sure what assumptions/intuitions those models leverage to perform well, and so how well they would do for IDF scores. Does anybody have any better ideas?
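
    One heuristic, offered only as a starting point rather than a standard model, is to discount an n-gram's IDF by the IDF its component terms already explain, so that a phrase does not score highly merely because its individual words are rare. A sketch with hypothetical document-frequency numbers:

      import math

      def idf(term, doc_freq, n_docs):
          """Standard smoothed IDF."""
          return math.log(n_docs / (1 + doc_freq.get(term, 0)))

      def phrase_score(ngram, doc_freq, n_docs):
          """Heuristic 'interestingness' of an n-gram: its own IDF discounted by
          the mean IDF of its component terms. This is an assumption, not a
          standard model like interpolation or backoff."""
          parts = ngram.split()
          whole = idf(ngram, doc_freq, n_docs)
          component = sum(idf(p, doc_freq, n_docs) for p in parts) / len(parts)
          return whole - component

      # Hypothetical document frequencies over a 10,000-document corpus.
      doc_freq = {"you've never tried": 3, "you've": 4200, "never": 6100, "tried": 5800}
      print(phrase_score("you've never tried", doc_freq, 10_000))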

    Read the article

  • Candidate Elimination Question---Please help!

    - by leon
    Hi, I am working through a question on the candidate elimination algorithm. I am a little confused about the general boundary G. Here is the example; I got G and S up to the fourth training example, but I am not sure about the last one. The training examples are: Sunny,Warm,Normal,Strong,Warm,Same,EnjoySport=yes; Sunny,Warm,High,Strong,Warm,Same,EnjoySport=yes; Rainy,Cold,High,Strong,Warm,Change,EnjoySport=no; Sunny,Warm,High,Strong,Cool,Change,EnjoySport=yes; Sunny,Warm,Normal,Weak,Warm,Same,EnjoySport=no. What I have so far is: S0: {0,0,0,0,0,0}; S1: {Sunny,Warm,Normal,Strong,Warm,Same}; S2, S3: {Sunny,Warm,?,Strong,Warm,Same}; S4: {Sunny,Warm,?,Strong,?,?}; G0, G1, G2: {?,?,?,?,?,?}; G3: {<Sunny,?,?,?,?,?>, <?,Warm,?,?,?,?>, <?,?,?,?,?,Same>}; G4: {<Sunny,?,?,?,?,?>, <?,Warm,?,?,?,?>}. What would be the result of G5? Would G5 be empty ({}), or would it be {?,?,?,Strong,?,?}? Thanks.
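
    For reference, the S-boundary half of candidate elimination is easy to script and reproduces the S4 the question already derived; the G-boundary specialization step, which the question actually asks about, is not shown here. A minimal sketch:

      def generalize_s(s, example):
          """Minimally generalize the specific boundary S to cover a positive example."""
          if s is None:                      # S0: the most specific hypothesis
              return list(example)
          return [si if si == xi else "?" for si, xi in zip(s, example)]

      # Training data from the question (attribute order as given there).
      examples = [
          (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"), True),
          (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"), True),
          (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), False),
          (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), True),
          (("Sunny", "Warm", "Normal", "Weak",   "Warm", "Same"), False),
      ]

      s = None
      for x, positive in examples:
          if positive:
              s = generalize_s(s, x)
      print(s)   # ['Sunny', 'Warm', '?', 'Strong', '?', '?'] matches the question's S4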

    Read the article

  • Which server should I buy?

    - by sri
    Hello, I have recently registered a startup (home office). My intention is to have a virtual (company) server for LAMP website development. Initially I am thinking 2-5 people will be logging in to the server remotely. a. I am not sure where to shop or what to shop for. b. Should I get my own server, or share one? c. If I get my own server, should I go with a rack or tower model? d. Which brand and model should I buy? e. If I go with a shared (cloud) option, I am not sure which is the best provider. It would be a great help if someone could provide some insight. Thanks in advance. Sri

    Read the article

  • Neural Networks test cases

    - by Betamoo
    Can increasing the number of test cases for precision neural networks lead to problems (like over-fitting, for example)? Is it always good to increase the number of test cases? Will that always lead to convergence? If not, in which cases does it not? An example would be helpful. Thanks.

    Read the article

  • Operant conditioning algorithm?

    - by Ken
    What's the best way to implement real-time operant conditioning (supervised reward/punishment-based learning) for an agent? Should I use a neural network (and what type), or something else? I want to be able to train the agent to follow commands like a dog. The commands would be in the form of gestures on a touchscreen. I want the agent to be trainable to follow a path (in continuous 2D space), make behavioral changes on command (modeled by FSM state transitions), and perform sequences of actions. The agent would be in a simulated physical environment.
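
    Reward/punishment-driven training of this kind is usually framed as reinforcement learning; a tabular Q-learning update is the simplest concrete starting point. The sketch below uses a toy state/action space rather than the touchscreen-gesture setup described, and is only one of several possible approaches (an actor-critic method or a neural function approximator would scale better to continuous 2D paths):

      import random
      from collections import defaultdict

      class QLearner:
          """Minimal tabular Q-learning agent: rewarded or punished after each action."""
          def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
              self.q = defaultdict(float)          # (state, action) -> value
              self.actions = actions
              self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

          def choose(self, state):
              if random.random() < self.epsilon:   # occasional exploration
                  return random.choice(self.actions)
              return max(self.actions, key=lambda a: self.q[(state, a)])

          def learn(self, state, action, reward, next_state):
              best_next = max(self.q[(next_state, a)] for a in self.actions)
              target = reward + self.gamma * best_next
              self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

      # Toy use: reward the agent (+1) for choosing "sit" when given the "sit" command.
      agent = QLearner(actions=["sit", "run"])
      for _ in range(200):
          s = "command_sit"
          a = agent.choose(s)
          r = 1.0 if a == "sit" else -1.0
          agent.learn(s, a, r, next_state=s)
      print(agent.q[("command_sit", "sit")] > agent.q[("command_sit", "run")])  # expected: True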

    Read the article

  • Automated Legal Processing

    - by Chris S
    Will it ever be possible to make legal systems quantifiable enough to process with computer algorithms? What technologies would have to be in place before this is possible? Are there any existing technologies that are already trying to accomplish this? Out of curiosity, I downloaded the text for laws in my local municipality, and tried applying some simple NLP tricks to extract rules from sentences. I had mixed results. Some sentences were very explicit (e.g. "Cars may not be left in the park overnight"), but other sentences seemed hopelessly vague (e.g. "The council's purpose is to ensure the well-being of the community"). I apologize if this is too open-ended a topic, but I've often wondered what society would look like if legal systems were based on less ambiguous language. Lawyers, and the legal process in general, are so expensive because they have to manually process a complex set of rules codified in ambiguous legal texts. If this system could be represented in software, this huge expense could potentially be eliminated, making the legal system more accessible for everyone.

    Read the article

  • "Anagram solver" based on statistics rather than a dictionary/table?

    - by James M.
    My problem is conceptually similar to solving anagrams, except I can't just use a dictionary lookup. I am trying to find plausible words rather than real words. I have created an N-gram model (for now, N=2) based on the letters in a bunch of text. Now, given a random sequence of letters, I would like to permute them into the most likely sequence according to the transition probabilities. I thought I would need the Viterbi algorithm when I started this, but as I look deeper, the Viterbi algorithm optimizes a sequence of hidden random variables based on the observed output. I am trying to optimize the output sequence. Is there a well-known algorithm for this that I can read about? Or am I on the right track with Viterbi and I'm just not seeing how to apply it?
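
    One direct approach, rather than Viterbi, is to score every permutation (or a beam of partial permutations for longer inputs) under the letter-bigram model and keep the best. A minimal sketch with an ad-hoc training string:

      import math
      from itertools import permutations

      def train_bigrams(text):
          """Letter-bigram log-probabilities with add-one smoothing."""
          counts, totals = {}, {}
          for a, b in zip(text, text[1:]):
              counts[(a, b)] = counts.get((a, b), 0) + 1
              totals[a] = totals.get(a, 0) + 1
          alphabet = set(text)
          def logp(a, b):
              return math.log((counts.get((a, b), 0) + 1) / (totals.get(a, 0) + len(alphabet)))
          return logp

      def most_plausible(letters, logp):
          """Brute-force the permutation with the highest bigram log-likelihood.
          Exhaustive search is fine for short inputs; beam search is needed beyond ~8 letters."""
          def score(seq):
              return sum(logp(a, b) for a, b in zip(seq, seq[1:]))
          return max(permutations(letters), key=score)

      logp = train_bigrams("the quick brown fox jumps over the lazy dog and then some more english text")
      print("".join(most_plausible("gindo", logp)))   # highest-scoring ordering under the toy model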

    Read the article

  • problem with hierarchical clustering in Python

    - by user248237
    I am doing hierarchical clustering of a 2-dimensional matrix using the correlation distance metric (i.e. 1 - Pearson correlation). My code is the following (the data is in a variable called "data"): from hcluster import * Y = pdist(data, 'correlation') cluster_type = 'average' Z = linkage(Y, cluster_type) dendrogram(Z) The error I get is: ValueError: Linkage 'Z' contains negative distances. What causes this error? The matrix "data" that I use is simply: [[ 156.651968 2345.168618] [ 158.089968 2032.840106] [ 207.996413 2786.779081] [ 151.885804 2286.70533 ] [ 154.33665 1967.74431 ] [ 150.060182 1931.991169] [ 133.800787 1978.539644] [ 112.743217 1478.903191] [ 125.388905 1422.3247 ]] I don't see how pdist could ever produce negative numbers when taking 1 minus the Pearson correlation. Any ideas on this? Thank you.
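
    One common cause is floating-point round-off: for nearly identical rows, 1 - correlation can come out as a tiny negative number (e.g. -2e-16), which the linkage step then rejects. Clipping the condensed distance matrix at zero before linkage is a common workaround; the sketch below uses the scipy equivalents of the hcluster calls and only the first rows of the data:

      import numpy as np
      from scipy.spatial.distance import pdist
      from scipy.cluster.hierarchy import linkage, dendrogram

      data = np.array([[156.651968, 2345.168618],
                       [158.089968, 2032.840106],
                       [207.996413, 2786.779081],
                       [151.885804, 2286.70533]])   # first rows of the question's data

      Y = pdist(data, 'correlation')
      Y = np.clip(Y, 0, None)           # round-off can produce values like -2.2e-16
      Z = linkage(Y, method='average')
      dendrogram(Z, no_plot=True)       # drop no_plot=True to actually draw it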

    Read the article

  • How do polymorphic inline caches work with mutable types?

    - by kingkilr
    A polymorphic inline cache works by caching the actual method by the type of the object, in order to avoid expensive lookup procedures (usually a hashtable lookup). How does one handle the type comparison if the type objects are mutable (i.e. a method might be monkey-patched into something different at run time)? The one idea I've come up with is a "class counter" that gets incremented each time a method is adjusted; however, this seems like it would be exceptionally expensive in a heavily monkey-patched environment, since it would kill all the PICs for that class even if the methods they cache weren't altered. I'm sure there must be a good solution to this, as the issue is directly applicable to JavaScript and AFAIK all 3 of the big JS VMs have PICs (wow, acronym ahoy).
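
    The "class counter" idea mentioned above is close to how several VMs guard such caches: stamp each cache entry with the class's current version and bump the version whenever a method is redefined, so stale entries simply miss. A toy Python illustration of the mechanism (not how any particular JS VM implements it):

      class VersionedClassMeta(type):
          """Bump a per-class version counter whenever a class attribute
          (e.g. a method) is reassigned after class creation."""
          def __setattr__(cls, name, value):
              super().__setattr__(name, value)
              if name != "_version":
                  super().__setattr__("_version", getattr(cls, "_version", 0) + 1)

      class Animal(metaclass=VersionedClassMeta):
          _version = 0
          def speak(self):
              return "..."

      # A tiny inline cache for one call site: keyed by (class, class version).
      _site_cache = {}

      def call_speak(obj):
          cls = type(obj)
          key = (cls, cls._version)
          method = _site_cache.get(key)
          if method is None:                # miss: do the slow lookup once, then cache
              method = getattr(cls, "speak")
              _site_cache.clear()           # keep the cache monomorphic for simplicity
              _site_cache[key] = method
          return method(obj)

      a = Animal()
      print(call_speak(a))                  # "..."   (slow path, then cached)
      Animal.speak = lambda self: "woof"    # monkey patch: bumps Animal._version
      print(call_speak(a))                  # "woof"  (old entry's key no longer matches)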

    Read the article

  • Multiplying Block Matrices in Numpy

    - by Ada Xu
    Hi everyone, I am a Python newbie. I have to implement lasso L1 regression for a class assignment. This involves solving a quadratic program with block matrices: minimize x^T * H * x + f^T * x subject to x >= 0, where H is a 2 x 2 block matrix with each block being a k x k matrix, and x and f are 2 x 1 block vectors with each element being a k-dimensional vector. I was thinking of using nd-arrays, such that np.shape(H) = (2, 2, k, k) and np.shape(x) = (2, k), but I found that np.dot(x, H) doesn't work here. Is there an easy way to solve this problem? Thanks in advance.
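
    For the block matrix-vector product itself, np.einsum (or flattening the blocks into an ordinary 2k x 2k matrix) expresses what np.dot cannot on the (2, 2, k, k) layout. A minimal sketch with random data; the constrained minimization over x >= 0 would still need a QP solver on top of this:

      import numpy as np

      k = 3
      H = np.random.rand(2, 2, k, k)   # 2x2 grid of k-by-k blocks
      x = np.random.rand(2, k)         # 2 blocks of length-k vectors
      f = np.random.rand(2, k)

      # Block matrix-vector product: (H x)[i] = sum_j H[i, j] @ x[j]
      Hx = np.einsum('ijkl,jl->ik', H, x)

      # Equivalent route: flatten the blocks into an ordinary (2k, 2k) matrix and (2k,) vector.
      H_flat = H.transpose(0, 2, 1, 3).reshape(2 * k, 2 * k)
      x_flat = x.reshape(2 * k)
      assert np.allclose(H_flat @ x_flat, Hx.reshape(2 * k))

      # The quadratic objective x^T H x + f^T x under either representation.
      objective = x_flat @ H_flat @ x_flat + f.reshape(2 * k) @ x_flat
      print(objective)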

    Read the article
