Search Results

Search found 6716 results on 269 pages for 'distributed algorithm'.

Page 194/269

  • Solution for distributing MANY simple network tasks?

    - by EmpireJones
    I would like to create some sort of a distributed setup for running a ton of small/simple REST web queries in a production environment. For each 5-10 related queries which are executed from a node, I will generate a very small amount of derived data, which will need to be stored in a standard relational database (such as PostgreSQL). What platforms are built for this type of problem set? The nature, data sizes, and quantities seem to contradict the mindset of Hadoop. There are also more grid based architectures such as Condor and Sun Grid Engine, which I have seen mentioned. I'm not sure if these platforms have any recovery from errors though (checking if a job succeeds). What I would really like is a FIFO type queue that I could add jobs to, with the end result of my database getting updated. Any suggestions on the best tool for the job?
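
    For what it's worth, the FIFO queue described above can be as simple as a table in the same PostgreSQL instance that receives the derived data: each worker node atomically claims the oldest pending job, runs its handful of REST queries, writes the results, and marks the job done (or puts it back to pending on failure, which gives the basic error recovery mentioned). A minimal Java/JDBC sketch of such a worker; the jobs table, its columns, and the connection details are illustrative, not from the question.

        import java.sql.*;

        public class QueueWorker {
            public static void main(String[] args) throws Exception {
                // Assumed table (names are illustrative):
                //   CREATE TABLE jobs (id SERIAL PRIMARY KEY, payload TEXT,
                //                      status TEXT NOT NULL DEFAULT 'pending');
                Connection db = DriverManager.getConnection(
                        "jdbc:postgresql://dbhost/appdb", "worker", "secret");
                PreparedStatement pick = db.prepareStatement(
                        "SELECT id, payload FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1");
                PreparedStatement claim = db.prepareStatement(
                        "UPDATE jobs SET status = 'running' WHERE id = ? AND status = 'pending'");
                PreparedStatement finish = db.prepareStatement(
                        "UPDATE jobs SET status = ? WHERE id = ?");

                while (true) {
                    ResultSet rs = pick.executeQuery();
                    if (!rs.next()) { Thread.sleep(1000); continue; }  // queue is empty
                    long id = rs.getLong(1);
                    String payload = rs.getString(2);
                    rs.close();

                    claim.setLong(1, id);
                    if (claim.executeUpdate() == 0) continue;  // another node claimed it first

                    String outcome = "done";
                    try {
                        // Run the 5-10 related REST queries for this payload and
                        // insert the small amount of derived data here.
                    } catch (Exception e) {
                        outcome = "pending";  // recovery: put the job back so another node retries it
                    }
                    finish.setString(1, outcome);
                    finish.setLong(2, id);
                    finish.executeUpdate();
                }
            }
        }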

    Read the article

  • Which files should be included under VSS 6.0?

    - by kheat
    For our .NET 3.5 web project, which files need to be included under VSS 6.0? We have a distributed team of three vendors working on separate modules of our .NET portal; each of them maintains their own setup, and during release they send across the final build. No surprise that this has caused plenty of headaches, and we have decided that we will keep this environment under our control and check out the files when required. This is a multi-part question, and to clear up some basics first, we would like to know which files are important to keep under VSS 6.0. Yes, we know VSS 6.0 is outdated, but we are playing catch-up, and until we move to either TFS or Subversion (at least six months down the line) we need a VSS strategy. TIA

    Read the article

  • Download and replace Android resource files

    - by Casebash
    My application will have some customisation for each company that uses it. Up until now, I have been loading images and strings from resource files. The idea is that the default resources will be distributed with the application and company specific resources will be loaded from our server after they click on a link from an email to launch the initialisation intent. Does anyone know how to replace resource files? I would really like to keep using resource files to avoid rewriting a lot of code/XML. I would distribute the application from our own server, rather than through the app store, so that we could have one version per company, but unfortunately this will give quite nasty security warnings that would concern our customers.

    Read the article

  • As a programmer what single discovery has given you the greatest boost in productivity?

    - by ChrisInCambo
    This question was inspired by my recent discovery/adoption of distributed version control. I started using it (Mercurial) just because I liked the idea of still being able to make commits at times when I couldn't connect to the central server. I never expected it would give me a large boost in general productivity, but a pleasant side effect I discovered was that making a new clone every time I start a new task, and giving that clone a descriptive folder name, is extremely effective at keeping me on task, resulting in a noticeable productivity increase. So, as a programmer, what single discovery has given you the greatest boost in productivity? Extra respect for answers which involve tools or practices that aren't so obvious from the outside!

    Read the article

  • Microphone input

    - by George
    I'm trying to build a gadget that detects pistol shots using Android. It's part of a training aid for pistol shooters that tells how the shots are distributed in time, and I use an HTC Tattoo for testing. I use the MediaRecorder and its getMaxAmplitude method to get the highest amplitude during the last 1/100 s, but it does not work as expected; speech gives me values from getMaxAmplitude in the range from 0 to about 25000, while the pistol shots (or shouting!) only reach about 15000. With a sampling frequency of 8 kHz there should be some samples with a considerably higher level. Does anyone know how these things work? Are there filters that are applied before registering the max amplitude? If so, are they in hardware or software? Thanks, /George

    Read the article

  • How do I choose a database?

    - by liamzebedee
    I need a comparison table of some sort for database varieties (MySQL, SQLite, etc.), but I can't find one. My use case: I am implementing storage of objects in a distributed hash table. I need a database solution that is fast for sorting; simplistic (no users, and preferably no additional structures like multiple tables); concurrent (if possible); multi-platform; file based (not stored in memory primarily); and centralized. I will be programming in Go. As I understand it, I believe I need what is called a document-oriented database, because I am storing objects identified by keys. EDIT: While I am implementing a DHT, I will also be storing metadata about the objects, such as access counts. It would also be preferable to have TTL (time to live).

    Read the article

  • How to minimize the amount of space used by the GPL copyright notice?

    - by Lukasz Lew
    The GNU GPL page advocates the following header in each file of a GPL project: This file is part of Foobar. Foobar is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. Foobar is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with Foobar. If not, see http://www.gnu.org/licenses/. I find this overkill. Can't it be shorter and somehow refer to a COPYING or LICENCE file?

    Read the article

  • Including external C++ libraries in version control

    - by m0tive
    I'm currently starting a project which is going to be developed on a few different computers, and I'm keeping them in sync with bzr. In the project I'm using a couple of 3rd-party libraries, like SDL. In the past I've just pushed a copy of the compiled library to my version control, but that usually seems to massively inflate the size of the branch and generally seems like a bad idea. Is that the normal practice, just pushing the required libraries, or is there a better way of adding libraries to distributed version control like bzr or git? (I know that on svn you can use svn:externals to do something similar to this.)

    Read the article

  • hg unshelve not working

    - by shanebonham
    Our team is just getting started with Mercurial. One of the first things we've started to play with is hg shelve. Locally, I have no problem shelving changes; it all works perfectly from what I can tell. However, when I try to unshelve, I get the "restoring backup files" message, but when I run hg diff there are no changes, and my changes are missing from the code. If I do hg unshelve -i I can see the diff, but again, trying to unshelve seems to have no effect. I've been trying to test it with some very simple changes that shouldn't be a problem in terms of conflicts, e.g. adding a test comment. I should note that I've tried hg unshelve -f, after which it says "unshelve completed", but again my changes are not restored. Any ideas what I am doing wrong? If it matters: Mercurial Distributed SCM (version 1.5.1+20100405)

    Read the article

  • C++ programming for clusters and HPC

    - by Abruzzo Forte e Gentile
    Hi all, I need to write a scientific application in C++ that does a lot of computation and uses a lot of memory. I have part of the job done, but due to the high requirements in terms of resources I was thinking of moving to OpenMPI. Before doing that I have a simple curiosity: if I understood the principle of OpenMPI correctly, it is the developer who has the task of splitting the jobs over different nodes, calling SEND and RECEIVE based on the nodes available at that time. Do you know if there exists some library, OS, or whatever that has this capability, letting my code remain as it is now? Basically, something that connects all the computers and lets them share their memory and CPU as one? I am a bit confused because of the amount of material available on the topic. Should I look at cloud computing, or at Distributed Shared Memory? Can you help me or point me in the right direction? Thanks

    Read the article

  • Help regarding NoSQL databases like Hadoop, HBase, etc.

    - by user560370
    I am new to distributed NoSQL databases like Hadoop, Cassandra, etc. I have a few questions on which I seek expert advice: Can you list the problems/challenges one will generally face when shifting from a conventional database like MySQL to these large cluster-based databases? What are the difficulties, if any, when one needs to adapt to a newer version of these open-source projects? Can you list the things which are generally stored/kept in memcached for fast rendering of a page? How can I understand the source code of open-source projects so that I can build on it and maybe give back to the community? The above questions may sound idiotic and basic, but please consider this a request for the experts to answer them in detail and to the best of their abilities.

    Read the article

  • How do I protect Python code?

    - by Jordfräs
    I am developing a piece of software in Python that will be distributed to my employer's customers. My employer wants to limit the usage of the software with a time-restricted license file. If we distribute the .py files, or even .pyc files, it will be easy to decompile them and remove the code that checks the license file. Another aspect is that my employer does not want the code to be read by our customers, fearing that the code, or at least the "novel ideas", may be stolen. Is there a good way to handle this problem? Preferably with an off-the-shelf solution. The software will run on Linux systems (so I don't think py2exe will do the trick).

    Read the article

  • Database for Large number of 1kB data chunks (MySQL?)

    - by The Unknown
    I have a very large dataset, with each item in the dataset being roughly 1 kB in size. The data needs to be queried rapidly by many applications distributed over a network. The dataset has more than a million items (and needs to scale to 500 million+ 1 kB data chunks). What would be the best method of storing this dataset, given that it needs to allow adding more items and reading them rapidly, but never modifying already-added data? Would a MySQL DB using a binary BLOB column be appropriate? Or should each of these items be stored as a file on a file system? Edit: the number is 1 million items now, but it needs to be able to scale to well over 500 million items easily.

    Read the article

  • C# similar technologies and books in Java

    - by MigNix
    Hi, I know this can look like a duplicate of other discussions, and I have already reviewed them, but this is different. I want to learn Java, and I want to know whether there are any books similar to those that I have seen for C#. Here is the list:
        C# 4.0 in a Nutshell: The Definitive Reference (probably Java in a Nutshell :))
        Professional C# 4.0 and .NET 4
        Pro C# 2010 and the .NET 4 Platform
        Accelerated C# 2010
        Programming WCF Services
        Essential Windows Communication Foundation (WCF): For .NET Framework 3.5
        C# in Depth: What you need to master C# 2 and 3
        CLR via C#
        Professional ADO.NET 3.5 with LINQ and the Entity Framework
        Professional Enterprise .NET
        Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries
    Mainly I am interested in Java 6 and want to know about the JVM, concurrency, data access, and distributed application technologies. I have seen that there are some books on the JVM, like Inside the Java 2 Virtual Machine, and also some free books on the official web site. I also found Concurrent Programming in Java. But maybe you can suggest others. Thank you

    Read the article

  • Random numbers from binomial distribution

    - by Sarah
    I need to quickly generate lots of random numbers from binomial distributions for dramatically different trial sizes (most, however, will be small). I was hoping not to have to code an algorithm by hand (see, e.g., this related discussion from November), because I'm a novice programmer and don't like reinventing wheels. It appears Boost does not supply a generator for binomially distributed variates, but TR1 and GSL do. Is there a good reason to choose one over the other, or is it better that I write something customized to my situation? I don't know if this makes sense, but I'll alternate between generating numbers from uniform distributions and binomial distributions throughout the program, and I'd like for them to share the same seed and to minimize overhead. I'd love some advice or examples of what I should be considering.
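
    For what it's worth, the "share the same seed" requirement is just a matter of drawing every variate from one shared engine. The sketch below illustrates this in Java with the naive sum-of-Bernoullis binomial draw (O(n) per variate, which is only reasonable when most trial sizes are small, as stated above); it is not the TR1/GSL C++ generators the question asks about, and the class and method names are illustrative.

        import java.util.Random;

        public class BinomialDraws {
            // One shared engine, so the uniform and binomial draws the program
            // alternates between all derive from the same seed.
            private final Random rng;

            public BinomialDraws(long seed) { this.rng = new Random(seed); }

            public double uniform() { return rng.nextDouble(); }

            // Naive sum-of-Bernoullis binomial draw: count how many of n
            // independent trials with success probability p succeed.
            public int binomial(int n, double p) {
                int successes = 0;
                for (int t = 0; t < n; t++) {
                    if (rng.nextDouble() < p) successes++;
                }
                return successes;
            }

            public static void main(String[] args) {
                BinomialDraws d = new BinomialDraws(42L);
                System.out.println(d.binomial(10, 0.3));  // some value between 0 and 10
                System.out.println(d.uniform());
            }
        }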

    Read the article

  • Global variables in Hadoop

    - by Deepak Konidena
    Hi, my program follows an iterative map/reduce approach, and it needs to stop if certain conditions are met. Is there any way I can set a global variable that is distributed across all map/reduce tasks and check whether that variable has reached the completion condition? Something like this:

        while (condition != true) {
            Configuration conf = getConf();
            Job job = new Job(conf, "Dijkstra Graph Search");
            job.setJarByClass(GraphSearch.class);
            job.setMapperClass(DijkstraMap.class);
            job.setReducerClass(DijkstraReduce.class);
            job.setOutputKeyClass(IntWritable.class);
            job.setOutputValueClass(Text.class);
        }

    where condition is a global variable that is modified during/after each map/reduce execution.
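
    As written, the loop above never runs the job or updates the condition. The usual Hadoop mechanism for this kind of cross-task "global variable" is a custom counter: every map/reduce task increments it while work remains, and the driver reads the aggregated value after each iteration finishes. Below is a minimal Java sketch along those lines; GraphSearch, DijkstraMap and DijkstraReduce are the classes named in the question, while the Iteration counter and the driver class itself are illustrative.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Job;

        public class GraphSearchDriver {
            // Stands in for the "global variable": map/reduce tasks call
            // context.getCounter(Iteration.UNFINISHED).increment(1)
            // whenever their part of the stopping condition is not yet met.
            public enum Iteration { UNFINISHED }

            public void runUntilDone(Configuration conf) throws Exception {
                long unfinished = 1;
                while (unfinished > 0) {
                    Job job = new Job(conf, "Dijkstra Graph Search");
                    job.setJarByClass(GraphSearch.class);
                    job.setMapperClass(DijkstraMap.class);
                    job.setReducerClass(DijkstraReduce.class);
                    job.setOutputKeyClass(IntWritable.class);
                    job.setOutputValueClass(Text.class);
                    // ...set the input/output paths for this iteration...
                    job.waitForCompletion(true);
                    // Counter values are aggregated over every map and reduce task
                    // of the finished job, so the driver can read the total here.
                    unfinished = job.getCounters()
                                    .findCounter(Iteration.UNFINISHED)
                                    .getValue();
                }
            }
        }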

    Read the article

  • Who deleted my files?

    - by akalter
    I have some Linux servers. On two of them we have MySQL, and we have a daily backup on both machines, but the backup scripts are different. I looked at both scripts: in one of them I saw the "delete older files" step, while on the other machine older backups are also removed, but not by the script. I am trying to discover what deletes my files, because I want to use the same script on both machines: the script that does the deletion also copies the files to another server, and I want that to happen on both servers. Does anyone have an idea what could be deleting my older backups? Thank you!

    Read the article

  • How do messages flow between computers connected over the Internet or a LAN?

    - by Praveen
    Hi all, I have been doing Windows programming in .NET for the last two years. Now I am shifting to web programming, so I am stuck on the fundamentals; after googling I came to Stack Overflow to learn from all of you great guys. My confusion is about how messages flow between systems in a distributed environment. Suppose I want to send a message "Hello" to a system connected to a LAN or to the Internet; what are the steps taken to send the message? Second, suppose my system is "A" and I want to send a message to system "B" which is connected via a wire: how does the message flow on the wire, and how does system "B" read it from the wire? Please explain it to me in layman's terms. Thank you all in advance.
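
    At the application level the steps are small: system "A" opens a connection to "B"'s address and writes the bytes of "Hello"; system "B" listens on a known port and reads those bytes back out of its socket. Everything underneath (splitting the data into packets, routing them across the LAN or Internet, and signalling the bits on the wire) is handled by the operating system's TCP/IP stack and the network hardware. A minimal Java sketch of the two sides; the IP address and port number are illustrative.

        import java.io.*;
        import java.net.*;

        public class HelloOverTcp {
            public static void main(String[] args) throws IOException {
                if (args.length > 0 && args[0].equals("server")) {
                    // System "B": listen on a port and print whatever arrives.
                    try (ServerSocket listener = new ServerSocket(5000);
                         Socket conn = listener.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(conn.getInputStream()))) {
                        System.out.println("B received: " + in.readLine());
                    }
                } else {
                    // System "A": connect to B's address and write the message.
                    try (Socket socket = new Socket("192.168.1.10", 5000);  // B's IP (illustrative)
                         PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                        out.println("Hello");
                    }
                }
            }
        }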

    Read the article

  • Should we be giving the client's management team direct access to our GitHub repository so that the...

    - by SharePoint Newbie
    Hi, we are presently working for a client who is new to working with distributed teams. We have teams spread across India and the UK. Although we have decent project-tracking tools (Mingle), would it be a good idea to give the PM at the client access to our GitHub repo? Would this make it easier for them, letting them see what the devs are working on and giving an insight into what the team has been developing? I agree that not all commit messages would make sense to them, but would this be a good way to boost their confidence in what we are doing? They can already check out our fortnightly releases on our QA and UA environments, but this is still behind dev by 5-6 days. Also, is there any reporting for GitHub which makes it easier for PM types to make sense of it all? Thanks

    Read the article

  • Java, merging two arrays evenly

    - by user2435044
    What would be the best way to merge two arrays of different lengths so that their elements are evenly distributed in the new array? Say I have the following arrays:

        String[] array1 = new String[7];
        String[] array2 = new String[2];
        String[] mergedArray = new String[array1.length + array2.length];

    I would want mergedArray to have the following elements:

        array1 array1 array1 array2 array1 array1 array1 array2 array1

    but if I were to change the sizes of the arrays to

        String[] array1 = new String[5];
        String[] array2 = new String[3];
        String[] mergedArray = new String[array1.length + array2.length];

    then I would want it to be

        array1 array2 array1 array2 array1 array2 array1 array1

    Basically, if it can be helped, array2 elements shouldn't touch each other; the exception is if array2 is larger than array1.
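
    One way to produce exactly the orderings shown above (assuming array1 is at least as long as array2, as in both examples) is to take array1.length / array2.length elements from array1 before each array2 element, then append whatever is left of array1. A minimal sketch; the class and method names are illustrative.

        import java.util.Arrays;

        public class EvenMerge {
            // Spreads array2's elements through array1 so they do not touch,
            // assuming array1.length >= array2.length as in the question.
            static String[] merge(String[] array1, String[] array2) {
                if (array2.length == 0) return array1.clone();
                String[] merged = new String[array1.length + array2.length];
                int step = array1.length / array2.length;  // array1 elements before each array2 element
                int i = 0, j = 0, k = 0;
                while (j < array2.length) {
                    for (int s = 0; s < step && i < array1.length; s++) {
                        merged[k++] = array1[i++];
                    }
                    merged[k++] = array2[j++];
                }
                while (i < array1.length) {  // leftover array1 elements go at the end
                    merged[k++] = array1[i++];
                }
                return merged;
            }

            public static void main(String[] args) {
                String[] a1 = new String[7];
                String[] a2 = new String[2];
                Arrays.fill(a1, "array1");
                Arrays.fill(a2, "array2");
                // Prints: [array1, array1, array1, array2, array1, array1, array1, array2, array1]
                System.out.println(Arrays.toString(merge(a1, a2)));
            }
        }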

    Read the article

  • PHP CPU utilization limit

    - by knightrider
    I have done some research on the net regarding this problem. My question is NOT how to reduce CPU utilization by improving the algorithm, improving performance through multitasking, or limiting CPU per system user. I have a website where a user logs in, does some processing, and logs out. The site uses a Linux server, PHP, and Apache. The problem is that I can't control the amount of CPU allocated to each user, i.e. I want to guarantee that a user will get, say, at least 5% of the CPU (assume the total number of users is less than 20). How can I do this? Any solution (PHP code, Apache server settings, or any out-of-the-box solution) is welcome. Thank you very much for reading this :)

    Read the article

  • Force memcached to write to all servers in pool

    - by Industrial
    Hi everyone, I have thought a bit about how to make sure that a particular key is distributed to ALL memcached servers in a pool. My current, untested solution is to create another Memcache instance and connect to each server in turn, something like this:

        $cluster['local'] = array('host' => '192.168.1.1', 'port' => '11211', 'weight' => 50);

        foreach ($this->cluster() as $cluster) {
            @$this->tempMemcache = new Memcache;
            @$this->tempMemcache->connect($cluster['host'], $cluster['port']);
            @$this->tempMemcache->set($key, $value, $this->compress, $expireTime);
            @$this->tempMemcache->close();
        }

    What is the common-sense thing to do in this case, when certain keys need to be stored on ALL servers for reliability?

    Read the article

  • Correct way to protect a private API key when versioning a Python application on a public Git repo

    - by systempuntoout
    I would like to open-source a Python project on GitHub, but it contains an API key that should not be distributed. I guess there's something better than removing the key each time a push is made to the repo. Imagine a simplified foomodule.py:

        import urllib2
        API_KEY = 'XXXXXXXXX'
        urllib2.urlopen("http://example.com/foo?id=123%s" % API_KEY).read()

    What I'm thinking is: 1. Move API_KEY into a second module, key.py, and import it from foomodule.py; I would then add key.py to the .gitignore file. 2. Same as 1, but using ConfigParser. Do you know a good programmatic way to handle this scenario?

    Read the article
