Search Results

Search found 4125 results on 165 pages for 'hash cluster'.

  • Should I learn Haskell or F# if I already know OCaml?

    - by Unknown
    I am wondering if I should continue to learn OCaml or switch to F# or Haskell. Here are the criteria I am most interested in:
    Longevity: Which language will last longer? I don't want to learn something that might be abandoned in a couple of years by users and developers. Will Inria, Microsoft, and the University of Glasgow continue to support their respective compilers for the long run?
    Practicality: Articles like this make me afraid to use Haskell: a hash table is the best structure for fast retrieval, yet the Haskell proponents there suggest using Data.Map, which is a binary tree. I don't like being tied to a bulky .NET framework unless the benefits are large. I want to be able to develop more than just parsers and math programs.
    Well designed: I like my languages to be consistent.
    Please support your opinion with logical arguments and citations from articles. Thank you.

    Read the article

  • jQuery code works in Firebug, but not on its own

    - by Ben
    I'm having quite the brain-buster of an issue right now. I am trying to change a link's hash tag at the end from "respond" to "comments" with jQuery. I have a simple script that should do this, but it does not work: the link does not change. However, Firebug shows no errors, and when I run the code in Firebug's console it works just as I intend. Why doesn't this work on its own? Does anybody have a solution? I'm at my wits' end with this one.
        (function ($) {
            $(document).ready(function() {
                $("a[href$='respond']").each(function() {
                    $(this).attr("href", $(this).attr('href').replace("respond", "comments"));
                });
            });
        })(jQuery.noConflict());
    Thanks so much. I know this might be a pain to test, but I'm really thankful.

    Read the article

  • How do I choose a database?

    - by liamzebedee
    I need a comparison table of some sort for database varieties (MySQL, SQLite, etc.), but I can't find one. My use case is that I am implementing storage of objects in a distributed hash table. I need a database solution that is:
    - Fast for sorting
    - Simplistic (no users, and preferably no additional structures like multiple tables)
    - Concurrent (if possible)
    - Multi-platform
    - File based (not stored primarily in memory)
    - Centralized
    I will be programming in Go. As I understand it, I believe I need what is called a document-oriented database, because I am storing objects identified by keys. EDIT: While I am implementing a DHT, I will also be storing metadata about the objects, such as access counts. It would also be preferable to have TTL (time to live) support.
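    The kind of single-file, key-value layout described above (an object blob plus metadata such as an access count and an expiry time) is easy to sketch with SQLite. The sketch below uses Python's standard sqlite3 module purely for illustration; the table and column names are invented, and from Go this would go through an SQLite driver instead.
        import sqlite3, time

        # Hypothetical single-file store: key -> object blob, plus the metadata the
        # question mentions (access count, TTL). All names here are illustrative.
        conn = sqlite3.connect("dht_store.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS objects (
                key        TEXT PRIMARY KEY,
                value      BLOB,
                accesses   INTEGER DEFAULT 0,
                expires_at REAL
            )""")

        def put(key, value, ttl=None):
            expires = time.time() + ttl if ttl else None
            conn.execute("INSERT OR REPLACE INTO objects (key, value, expires_at) VALUES (?, ?, ?)",
                         (key, value, expires))
            conn.commit()

        def get(key):
            row = conn.execute("SELECT value, expires_at FROM objects WHERE key = ?", (key,)).fetchone()
            if row is None or (row[1] is not None and row[1] < time.time()):
                return None                      # missing or expired
            conn.execute("UPDATE objects SET accesses = accesses + 1 WHERE key = ?", (key,))
            conn.commit()
            return row[0]
    Whether a relational file store, an embedded key-value store, or a document database fits best mostly comes down to the sorting and concurrency requirements in the list above.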

    Read the article

  • Horizontal Scaling of Tomcat in Microsoft Azure

    - by Fabe
    Hey everyone, I have been working on this for quite a while, but still have no conclusion. I want to do horizontal scaling of Tomcat instances in Microsoft Azure (1, 2, 3, ... Tomcat instances for one service). I have read lots of articles about session replication, clustering, and so on with Tomcat. Since Azure does not support multicast, there is no easy way to cluster Tomcat. Sticky sessions are not an option either, because Azure does round-robin load balancing. Setting up two services, one with Terracotta or Apache mod_jk and the other with the Tomcat instances, seems like overkill to me (if it is even doable). Is this even possible? Thanks in advance for reading and answering my question. Every comment/idea is highly appreciated.

    Read the article

  • Obtain patterns in one file from another using ack or awk or better way than grep?

    - by Rock
    Is there a way to obtain the patterns listed in one file (a list of patterns) from another file using ack, like the -f option in grep? I see there is an -f option in ack, but it is different from grep's -f. Perhaps an example will give you a better idea. Suppose I have file1:
        a
        c
        e
    And file2:
        a 1
        b 2
        c 3
        d 4
        e 5
    And I want to look up all the patterns in file1 in file2 to give:
        a 1
        c 3
        e 5
    Can ack do this? Otherwise, is there a better way to handle the job (such as awk or using a hash), because I have millions of records in both files and really need an efficient way to complete it? Thanks!
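    With millions of records, the usual approach is exactly the hash idea mentioned above: load the keys from file1 into a hash/set once, then stream file2 and keep the lines whose first field matches. A minimal sketch in Python, assuming the key is the first whitespace-separated field as in the example:
        # Hash-join file2 against the keys listed in file1.
        with open("file1") as f:
            wanted = {line.strip() for line in f if line.strip()}

        with open("file2") as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] in wanted:
                    print(line, end="")
    The classic awk equivalent is along the lines of awk 'NR==FNR {keys[$1]; next} $1 in keys' file1 file2, which performs the same single-pass hash lookup.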

    Read the article

  • What's a good algorithm for searching arrays N and M, in order to find elements in N that also exist in M?

    - by GenTiradentes
    I have two arrays, N and M. They are both arbitrarily sized, though N is usually smaller than M. I want to find out which elements in N also exist in M, in the fastest way possible. To give you an example of one possible instance of the program, N is an array 12 units in size and M is an array 1,000 units in size. I want to find which elements in N also exist in M (there may not be any matches). The more parallel the solution, the better. I used to use a hash map for this, but it's not quite as efficient as I'd like it to be. Typing this out, I just thought of running a binary search of M on sizeof(N) independent threads (using CUDA). I'll see how this works, though other suggestions are welcome.
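    For reference, the two serial baselines being compared are easy to sketch; they are shown in Python here only to make the shapes concrete. Hash lookup is O(|N| + |M|), while sorting M once and binary-searching it for each element of N is O((|N| + |M|) log |M|), and those per-element searches are what the question proposes running on independent threads.
        import bisect

        def intersect_hash(N, M):
            # Build a hash set of M, then test each element of N.
            m_set = set(M)
            return [x for x in N if x in m_set]

        def intersect_binary_search(N, M):
            # Sort M once, then binary-search it for each element of N.
            # Each search is independent, which is what maps onto one thread per element.
            M_sorted = sorted(M)
            found = []
            for x in N:
                i = bisect.bisect_left(M_sorted, x)
                if i < len(M_sorted) and M_sorted[i] == x:
                    found.append(x)
            return found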

    Read the article

  • J2EE/EJB + service locator: is it safe to cache EJB Home lookup result ?

    - by Guillaume
    In a J2EE application, we are using EJB2 on WebLogic. To avoid losing time building the initial context and looking up the EJB home interface, I'm considering the Service Locator pattern. But after a little searching on the web, I found that even if this pattern is often recommended for InitialContext caching, there are some negative opinions about caching the EJB home. Questions: Is it safe to cache an EJB home lookup result? What will happen if one of my cluster nodes is no longer working? What will happen if I install a new version of the EJB without refreshing the service locator's cache?

    Read the article

  • Fastest/One-liner way to list attr_accessors in Ruby?

    - by viatropos
    What's the shortest, one-liner way to list all methods defined with attr_accessor? I would like to make it so that if I have a class MyBaseClass, anything that extends it can report the attr_accessors defined in the subclass. Something like this:
        class MyBaseClass < Hash
          def attributes
            # ??
          end
        end

        class SubClass < MyBaseClass
          attr_accessor :id, :title, :body
        end

        puts SubClass.new.attributes.inspect #=> [id, title, body]
    What about displaying just the attr_reader and attr_writer definitions?

    Read the article

  • FlockDB - What is it? And what are the best cases for its use?

    - by Guru
    Just came across the FlockDB graph database. Details at github/flockDB. Twitter claims it uses FlockDB for the following: "Twitter runs FlockDB on a large cluster of machines. We use it to store social graphs (who follows whom, who blocks whom) and secondary indices at Twitter." At first glance, setting it up and trying it doesn't look straightforward. Has anyone already set this up or used it? If so, please answer the following general queries. What kind of applications is it better suited for? (Twitter claims it is simple and very rough; it remains to be seen what that means.) How is FlockDB better than other graph DBs / NoSQL DBs? Have you set up FlockDB and used it for an application? Any early advice? Note: I am evaluating FlockDB and other graph databases mainly to learn them. Perhaps I will build an application for that.

    Read the article

  • htaccess redirect conditions

    - by user271619
    I figured out how to redirect someone if they happen across one particular filename:
        Redirect /index.php http://www.website.com/#myaccount
    As you can see, I'm pretty much redirecting that visitor to the same page, which doesn't work: it's an endless loop, regardless of the slight/minuscule change. I want to force someone to see a part of the page by adding the hash (it's a little weird, I know). I'm guessing this may be a time to use regex in the htaccess file, but I thought I'd ask if there's a simpler way to do this from the htaccess file.

    Read the article

  • What's the fastest way to compare two objects in PHP?

    - by johnnietheblack
    Let's say I have an object (a User object in this case) and I'd like to be able to track changes to it with a separate class. The User object should not have to change its behavior in any way for this to happen. Therefore, my separate class creates a "clean" copy of it, stores it somewhere locally, and can later compare the User object to the original version to see if anything changed during its lifespan. Is there a function, a pattern, or anything that can quickly compare the two versions of the User object?
    Option 1: Maybe I could serialize each version and compare directly, or hash them and compare?
    Option 2: Maybe I should simply create a ReflectionClass, run through each of the properties of the class, and see if the two versions have the same property values?
    Option 3: Maybe there is a simple native function like objects_are_equal($object1, $object2)?
    What's the fastest way to do this?
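    Option 1 is language-agnostic enough to sketch. The snippet below shows the idea in Python rather than PHP (pickle standing in for PHP's serialize(), with an invented User class): take a serialized snapshot when the tracker first sees the object, then compare serializations, or digests of them, later.
        import hashlib, pickle

        class ChangeTracker:
            """Snapshot an object by serializing it; compare serialized forms later."""
            def __init__(self, obj):
                self._snapshot = pickle.dumps(obj)

            def has_changed(self, obj):
                return pickle.dumps(obj) != self._snapshot

            def snapshot_digest(self):
                # Hashing the snapshot only helps if you want to store or ship
                # something smaller than the full serialized copy.
                return hashlib.sha1(self._snapshot).hexdigest()

        class User:
            def __init__(self, name, email):
                self.name, self.email = name, email

        user = User("alice", "alice@example.com")
        tracker = ChangeTracker(user)
        user.email = "alice@example.org"
        print(tracker.has_changed(user))   # True
    The trade-off is the same in PHP: serializing and comparing is simple and fast for small objects, while a reflection-based field walk avoids serializing data you do not care about.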

    Read the article

  • Java: how to represent graphs?

    - by Rosarch
    I'm implementing some algorithms to teach myself about graphs and how to work with them. What would you recommend as the best way to do that in Java? I was thinking something like this:
        public class Vertex {
            private ArrayList<Vertex> outnodes; // adjacency list; if I wanted to support edge weights, this would be a hash map
            // methods to manipulate outnodes
        }

        public class Graph {
            private ArrayList<Vertex> nodes;
            // algorithms on graphs
        }
    But I basically just made this up. Is there a better way? Also, I want it to be able to support variations on vanilla graphs like digraphs, weighted edges, multigraphs, etc.

    Read the article

  • Plotting Tweets from DB in Ruby, grouping by hour.

    - by plotti
    Hey guys, I've got a couple of issues with my code. I suspect I am producing the plot very inefficiently, since the grouping by hour takes ages. The DB is very simple: it contains the tweets, the created date, and the username, and it is fed by the Twitter gardenhose. Thanks for your help!
        require 'rubygems'
        require 'sequel'
        require 'gnuplot'

        DB = Sequel.sqlite("volcano.sqlite")
        tweets = DB[:tweets]

        def get_values(keyword, tweets)
          my_tweets = tweets.filter(:text.like("%#{keyword}%"))
          r = Hash.new
          start = my_tweets.first[:created_at]
          my_tweets.each do |t|
            hour = ((t[:created_at] - start) / 3600).round
            r[hour] == nil ? r[hour] = 1 : r[hour] += 1
          end
          x = []
          y = []
          r.sort.each do |e|
            x << e[0]
            y << e[1]
          end
          [x, y]
        end

        keywords = ["iceland", "island", "vulkan", "volcano"]
        values = {}
        keywords.each do |k|
          values[k] = get_values(k, tweets)
        end

        Gnuplot.open do |gp|
          Gnuplot::Plot.new(gp) do |plot|
            plot.terminal "png"
            plot.output "volcano.png"
            plot.data = []
            values.each do |k, v|
              plot.data << Gnuplot::DataSet.new([v[0], v[1]]) { |ds|
                ds.with = "linespoints"
                ds.title = k
              }
            end
          end
        end
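    Since the slow part is bucketing rows one at a time in application code, one common alternative is to let the database do the grouping. A sketch of that idea, written with Python's standard sqlite3 module rather than Sequel purely for illustration, and assuming created_at is stored in a format SQLite's strftime() can parse (which may not match the actual schema):
        import sqlite3

        conn = sqlite3.connect("volcano.sqlite")

        def hourly_counts(keyword):
            # Let SQLite bucket rows by hour and count them, instead of
            # iterating over every matching row in the client.
            return conn.execute(
                """SELECT strftime('%Y-%m-%d %H', created_at) AS hour, COUNT(*)
                   FROM tweets
                   WHERE text LIKE ?
                   GROUP BY hour
                   ORDER BY hour""",
                ("%" + keyword + "%",),
            ).fetchall()

        for hour, n in hourly_counts("volcano"):
            print(hour, n)
    The same GROUP BY can be expressed through Sequel, so only the aggregated (hour, count) pairs ever cross into Ruby before being handed to gnuplot.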

    Read the article

  • I want absolute atomicity on a single couchdb instance (insert, fail if already existing)

    - by MatternPatching
    I've come to really love the CouchDB style of organizing and updating data, but there are a few situations where I really need to be able to create an entry and determine whether an equivalent entry already exists before returning to the user. The only situation where this is absolutely necessary for my application is user registration. I'm fine with having all user registration writes go to a particular, designated CouchDB instance known as the "registration instance". I want to hash the user_id into some _id to use, then execute a PUT with this _id, but fail if the _id has already been inserted. I need to return to the user that the user name is already reserved, and I cannot detect the conflict later and resolve it at that point, because the user would be under the impression that they had reserved the user name. I don't see why CouchDB couldn't provide some way to do this, under the assumption that you designate that inserts for a particular "type" of document are always routed to a particular instance.
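    For what it's worth, on a single CouchDB node a PUT to an explicit _id already behaves as an atomic create-or-fail: if a document with that _id exists and no matching _rev is supplied, the server responds with 409 Conflict. A minimal sketch of the registration flow described above, in Python with the requests library (the URL, database name, and hashing scheme are placeholders):
        import hashlib
        import requests

        COUCH = "http://localhost:5984"      # placeholder URL of the "registration instance"
        DB = "registrations"

        def register(user_id):
            # Derive the document _id from the user name, as the question proposes.
            doc_id = "user-" + hashlib.sha1(user_id.lower().encode("utf-8")).hexdigest()
            resp = requests.put(f"{COUCH}/{DB}/{doc_id}", json={"user_id": user_id})
            if resp.status_code in (201, 202):
                return True                   # name reserved
            if resp.status_code == 409:
                return False                  # someone already holds this name
            resp.raise_for_status()

        print(register("alice"))   # True the first time, False afterwards
    This guarantee is per node, which is why routing all registrations to the single designated instance matters.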

    Read the article

  • kmeans based on mapreduce by python

    - by user3616059
    I am going to write a mapper and reducer for the k-means algorithm. I think the best course of action is to put the distance calculation in the mapper and send each row to the reducer with the cluster id as the key and the coordinates of the row as the value; the reducer then updates the centroids. I am writing this in Python. As you know, I have to use Hadoop Streaming to transfer data between STDIN and STDOUT. To my knowledge, when we print (key + "\t" + value), it is sent to the reducer. The reducer receives the data and calculates the new centroids, but when we print the new centroids, I don't think they are sent back to the mapper to compute the new clusters; they just go to STDOUT, and as you know, k-means is an iterative algorithm. So my question is whether Hadoop Streaming has trouble with iterative programs and whether we should use mrjob for iterative programs instead.
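    For reference, the mapper side of the split described above looks roughly like this as a Hadoop Streaming script. It is only a sketch: it assumes the current centroids are shipped to each mapper as a side file named centroids.txt and that input rows are comma-separated coordinates. The matching reducer would average the points collected under each cluster id and emit the new centroids, and the iteration itself still has to be driven by an outer loop that resubmits the job with the reducer's output as the next round's centroids; that outer loop is exactly what frameworks like mrjob automate.
        #!/usr/bin/env python
        # mapper.py: assign each input point to its nearest centroid (sketch).
        import sys

        def load_centroids(path="centroids.txt"):
            # Assumed side file: one centroid per line, comma-separated coordinates.
            with open(path) as f:
                return [tuple(float(x) for x in line.split(",")) for line in f if line.strip()]

        def sq_dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

        centroids = load_centroids()

        for line in sys.stdin:
            if not line.strip():
                continue
            point = tuple(float(x) for x in line.split(","))
            cluster = min(range(len(centroids)), key=lambda i: sq_dist(point, centroids[i]))
            print("%d\t%s" % (cluster, ",".join(map(str, point))))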

    Read the article

  • PHP Variable Encryption

    - by NCoder
    I have the following code that encodes a password in PHP:
        $password = "helloworld";
        $passwordupper = strtoupper($password);
        $passwordencode = mb_convert_encoding($passwordupper, 'UTF-16LE');
        $passwordsha1 = hash("SHA1", $passwordencode);
        $passwordbase64 = base64_encode($passwordsha1);
    The instructions I have from the system I'm trying to connect to state: "The encoding process for passwords is: first convert to uppercase, then Unicode it in little-endian UTF-16, then SHA1 it, then base64 encode it." I think I'm doing something wrong in my code. Any ideas? Thanks! NCoder
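    One detail worth checking: PHP's hash() returns a lowercase hex string unless its third argument ($raw_output) is true, so the snippet base64-encodes 40 hex characters rather than the 20 raw digest bytes the quoted spec appears to want. As a cross-check, here is the spec's pipeline (uppercase, UTF-16LE, SHA-1, Base64) written out in Python; comparing its output against the PHP version with and without raw output should show where they diverge.
        import base64, hashlib

        def encode_password(password):
            # Follow the quoted spec step by step.
            upper = password.upper()                          # 1. convert to uppercase
            utf16le = upper.encode("utf-16-le")               # 2. little-endian UTF-16 (no BOM)
            digest = hashlib.sha1(utf16le).digest()           # 3. raw SHA-1 digest (20 bytes)
            return base64.b64encode(digest).decode("ascii")   # 4. Base64-encode the raw digest

        print(encode_password("helloworld"))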

    Read the article

  • How to optimize a SQL Server table for faster response?

    - by Thomas
    I found that a table has 50 thousand records, and it takes one minute to fetch data from that SQL Server table just by issuing a simple SQL statement. There is a primary key, which means a clustered index is already there. I just do not understand why it takes one minute. Besides an index, what other ways are there to optimize a table so the data comes back faster? In this situation, what do I need to do for a faster response? Also, please tell me how to always write optimized SQL, and the steps for optimization in detail. Thanks.

    Read the article

  • Prevent Users from Performing an Action Twice

    - by TheOnly92
    We have a problem with users performing a specific action twice. We have a mechanism to ensure that users can't do it, but somehow it still happens. Here is how our current mechanism works.
    Client side: the button is disabled after one click.
    Server side: we have a key hash in the URL which is checked against the key stored in the session; once it matches, the key is deleted.
    Database side: once the action is performed, a field is flagged indicating the user has completed the action.
    However, with all these measures, there are still users able to perform the action twice. Are there any safer methods?
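    A common reason such checks still let duplicates through is that the check and the flag update happen as two separate steps, so two concurrent requests can both pass the check before either one writes the flag. A sketch of an atomic compare-and-set on the database flag, shown with Python and SQLite purely to illustrate the pattern (table and column names are invented; the same single-UPDATE idea applies to MySQL and others):
        import sqlite3

        conn = sqlite3.connect("app.db", isolation_level=None)  # autocommit
        conn.execute("CREATE TABLE IF NOT EXISTS actions (user_id INTEGER PRIMARY KEY, done INTEGER DEFAULT 0)")
        conn.execute("INSERT OR IGNORE INTO actions (user_id, done) VALUES (?, 0)", (42,))

        def try_perform(user_id):
            # Check and flag in ONE statement: only one request can flip 0 -> 1.
            cur = conn.execute(
                "UPDATE actions SET done = 1 WHERE user_id = ? AND done = 0", (user_id,))
            return cur.rowcount == 1   # True: we won the race and may perform the action

        print(try_perform(42))   # True
        print(try_perform(42))   # False: already done
    The session key deletion needs the same treatment: the compare-and-delete must be atomic, otherwise two requests carrying the same key can both find it present.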

    Read the article

  • Mongoose Not Creating Indexes

    - by wintzer
    I have been trying all afternoon to get my Node.js application to create MongoDB indexes properly. I am using the Mongoose ODM, and in my schema definition below I have the username field set to a unique index. The collection and documents all get created properly; it's just the indexes that aren't working. All the documentation says that the ensureIndex command should be run at startup to create any indexes, but none are being made. I'm using MongoLab for hosting, if that matters. I have also repeatedly dropped the collection. Please tell me what I'm doing wrong.
        var schemaUser = new mongoose.Schema({
            username: { type: String, index: { unique: true }, required: true },
            hash: String,
            created: { type: Date, default: Date.now }
        }, { collection: 'Users' });

        var User = mongoose.model('Users', schemaUser);

        var newUser = new Users({ username: 'wintzer' })
        newUser.save(function(err) {
            if (err) console.log(err);
        });

    Read the article

  • Dynamic Custom Fields for Data Model

    - by Jerry Deng
    I am in the process of creating a dynamic database where users will be able to create resource types to which they can add custom fields (multiple texts, strings, and files). Each resource type will have the ability to display, import, and export its data. I've been thinking about it, and here are my approaches; I would love to hear what you guys think. Ideas:
    - just hashing all the custom data into one data field (pro: writing is easier; con: reading back out may be harder);
    - child fields (the model will have multiple fields of strings, fields of text, and fields for file paths);
    - a fixed number of custom fields in the same table, with a key-mapping data hash stored in the same row;
    - a non-SQL approach, but then the problem would be generating/changing models on the fly to work with different custom fields.

    Read the article

  • Constants in Model and View with select option and show view

    - by caplod
    I have some values that I use in my model as constants:
        class Animal < ActiveRecord::Base
          LEGS = { :vierbeiner => 4, :zweibeiner => 2 }
        end
    In the form (Formtastic), I use this for the collection:
        <%= f.input :legs, :as => :select, :collection => Animal::LEGS %>
    But how do I format the show view so that instead of showing me the number, it shows the key of the hash? In the show view I have:
        <p>
          <strong>Legs:</strong>
          <%=h @animal.legs %>
        </p>

    Read the article

  • Use C function in C++ program; "multiply-defined" error

    - by eom
    I am trying to use this code for the Porter stemming algorithm in a C++ program I've already written. I followed the instructions near the end of the file for using the code as a separate module. I created a file, stem.c, that ends after the definition and has
        extern int stem(char * p, int i, int j) ...
    It worked fine in Xcode, but it does not work for me on Unix with gcc 4.1.1 (strange, because usually I have no problem moving between the two). I get the error:
        ld: fatal: symbol `stem(char*, int, int)' is multiply-defined:
                (file /var/tmp//ccrWWlnb.o type=FUNC; file /var/tmp//cc6rUXka.o type=FUNC);
        ld: fatal: File processing errors. No output written to cluster
    I've looked online and it seems like there are many things I could have wrong, but I'm not sure what combination of a header file, extern "C", etc. would work.

    Read the article

  • Where can I find an array of the unassigned Unicode code points for a particular block?

    - by gitparade
    At the moment, I'm writing these arrays by hand. For example, the Miscellaneous Mathematical Symbols-A block has an entry in a hash like this:
        my %symbols = (
            ...
            miscellaneous_mathematical_symbols_a => [(0x27C0..0x27CA), 0x27CC, (0x27D0..0x27EF)],
            ...
        );
    The simpler, 'continuous' array
        miscellaneous_mathematical_symbols_a => [0x27C0..0x27EF]
    doesn't work because Unicode blocks have holes in them. For example, there's nothing at 0x27CB. Take a look at the code chart [PDF]. Writing these arrays by hand is tedious, error-prone and a bit fun. And I get the feeling that someone has already tackled this in Perl!
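    In Perl, Unicode::UCD is the usual starting point for querying the Unicode Character Database instead of transcribing code charts. The same idea is easy to illustrate in Python, where unicodedata reports general category Cn for code points that are unassigned in the Unicode version the interpreter ships with, so the assigned members of a block can be generated rather than typed (which code points count as holes, such as 0x27CB, therefore depends on that Unicode version):
        import unicodedata

        def assigned_in_block(start, end):
            """Code points in [start, end] that are assigned in this Python's Unicode data.

            General category 'Cn' means unassigned; everything else counts as assigned.
            """
            return [cp for cp in range(start, end + 1)
                    if unicodedata.category(chr(cp)) != "Cn"]

        # Miscellaneous Mathematical Symbols-A block
        print([hex(cp) for cp in assigned_in_block(0x27C0, 0x27EF)])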

    Read the article

  • What is a simple C library for a set of integer sets?

    - by conradlee
    I've got to modify a C program and I need to include a set of unsigned-integer sets. That is, I have millions of sets of integers (each of these integer sets contains between 3 and 100 integers), and I need to store them in some structure, let's call it the directory, that can tell me in logarithmic time whether a given integer set already exists in the directory. The only operations that need to be defined on the directory are lookup and insert. This would be easy in languages with built-in support for useful data structures, but I'm a foreigner to C, and looking around on Google did (surprisingly) not answer my question satisfactorily. This project looks about right: http://uthash.sourceforge.net/ but I would need to come up with my own hash key generator. This is a standard, simple problem, so I hope there is a standard and simple solution.
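    The "hash key generator" usually boils down to putting each set into a canonical form (sorted, duplicate-free) and hashing that form, since equal sets must always produce the same key. The sketch below shows just that canonicalize-then-store idea in Python (where frozenset-like structures make it trivial); in C the sorted array itself would typically serve as a variable-length key for a hash table library such as uthash.
        class Directory:
            """Set of integer sets supporting the two required operations: insert and lookup."""
            def __init__(self):
                self._seen = set()

            @staticmethod
            def _canonical(int_set):
                # Canonical form: sorted, duplicate-free tuple, so the same set of
                # integers always hashes and compares identically regardless of order.
                return tuple(sorted(set(int_set)))

            def insert(self, int_set):
                self._seen.add(self._canonical(int_set))

            def lookup(self, int_set):
                return self._canonical(int_set) in self._seen

        d = Directory()
        d.insert([5, 3, 9])
        print(d.lookup([9, 5, 3]))   # True: same set, different order
        print(d.lookup([1, 2, 3]))   # False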

    Read the article

  • Not a statement?

    - by abelenky
    I have a simple little code fragment that is frustrating me:
        HashSet<long> groupUIDs = new HashSet<long>();
        groupUIDs.Add(uid) ? unique++ : dupes++;
    At compile time, it generates the error:
        Only assignment, call, increment, decrement, and new object expressions can be used as a statement
    HashSet.Add is documented to return a bool, so the ternary (?:) operator should work, and this looks like a completely legitimate way to track the number of unique and duplicate items I add to a hash set. When I reformat it as an if-then-else, it works fine. Can anyone explain the error, and whether there is a way to do this as a simple ternary operator?

    Read the article
