Search Results

Search found 4151 results on 167 pages for 'cat man do'.


  • FFMpeg Error av_interleaved_write_frame():

    - by rajaneesh
    This is my code. After running the PHP code, FFmpeg reports:

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/usr --libdir=/usr/lib --shlibdir=/usr/lib --mandir=/usr/share/man --incdir=/usr/include --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab
          libavutil 49.15. 0 / 49.15. 0
          libavcodec 52.20. 0 / 52.20. 0
          libavformat 52.31. 0 / 52.31. 0
          libavdevice 52. 1. 0 / 52. 1. 0
          libswscale 0. 7. 1 / 0. 7. 1
          libpostproc 51. 2. 0 / 51. 2. 0
          built on Nov 6 2009 19:05:03, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46)
        Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 25.00 (25/1)
        Input #0, flv, from 'demo.flv':
          Duration: 00:00:30.83, start: 0.000000, bitrate: 546 kb/s
          Stream #0.0: Video: h264, yuv420p, 640x360 [PAR 1:1 DAR 16:9], 546 kb/s, 25 tbr, 1k tbn, 50 tbc
          Stream #0.1: Audio: aac, 44100 Hz, stereo, s16
        Output #0, image2, to 'demo.jpg':
          Stream #0.0: Video: mjpeg, yuvj420p, 640x360 [PAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 1 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press [q] to stop encoding
        av_interleaved_write_frame(): I/O error occurred
        Usually that means that input file is truncated and/or corrupted.
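
    The command is only extracting a single JPEG frame from demo.flv, so when av_interleaved_write_frame() fails with an I/O error the usual suspects are an output path the PHP process cannot write to, a full disk, or a genuinely truncated input file. If you are shelling out from a script, it helps to capture FFmpeg's exit status and stderr so the real message reaches the caller. A minimal sketch in Python (not the original PHP; the file names and the 5-second seek offset are just assumptions):

        import subprocess

        def grab_frame(src="demo.flv", dest="demo.jpg", offset="5"):
            # -ss seeks to the given offset, -vframes 1 writes exactly one frame
            cmd = ["ffmpeg", "-y", "-ss", offset, "-i", src, "-vframes", "1", dest]
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                # av_interleaved_write_frame() errors show up on stderr
                raise RuntimeError("ffmpeg failed:\n" + result.stderr)
            return dest

        if __name__ == "__main__":
            print("wrote", grab_frame())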

    Read the article

  • Are we DELPHI, VCL or Pascal programmers?

    - by José Eduardo
    I've been a Delphi database programmer since D2. Now I'm facing some digital imaging and 3D challenges that are making me start studying OpenGL, DirectX, color spaces and so on. I'm really trying, but nobody seems to use Delphi for this kind of stuff, just the good-old-paycheck database programming. OK, I know we have some very smart guys behind some clever components, some of them open source. Is there any Photoshop, Blender, Maya, Office, Sonar, StarCraft or Call of Duty written in Delphi? Do I have to learn C++ to have access to the zillions of books about that kind of stuff? What is the fuss/hype behind this: int *varName = &anotherThing? Why do pointers seem to be the holy grail for these apps? I've downloaded MSVC++ Express and started to learn some WPF and Qt integration, and I think: "Man, Delphi does this kind of stuff, with less code and fewer headaches, since the wheels were invented." That leads my mind to the following... Have you ever tried to write a simple notepad program using just Notepad and dcc32 in Pascal/Delphi? If so, Embarcadero could make our beloved Pascal compiler free and sell just the IDE, the VCL, the customer support... and back to the question: are we Delphi, VCL or Pascal programmers?

    Read the article

  • bash and flock (file lock) - Doesn't seem to be locking....

    - by Rory
    I am playing with flock, a bash command for file locks, to prevent two different instances of the code from running at once. I am using this test code:

        ( ( flock -x 200 ; sleep 10 ; echo "original finished" ; ) 200>./test.lock ) &
        ( sleep 2 ; ( flock -x -w 2 200 ; echo "a finished" ) 200>./test.lock ) &

    I am running two subshells (backgrounded). The (flock NUM; ...) NUM>FILE syntax is from flock's man page. I expect that the first subshell will get an exclusive lock on test.lock, then wait 10 seconds, then print "original finished", holding the lock the whole time. The second subshell will start at more or less the same time, wait 2 seconds, then try to get a lock on test.lock, timing out after 2 seconds. If it gets the lock, it'll print "a finished"; if it doesn't, that subshell should stop and nothing should be printed. Since the first subshell is waiting longer, it will keep the lock for 10 seconds, so the second subshell should not get the lock and shouldn't finish, i.e. one should see "original finished" printed and not both. What actually happens is that "a finished" is printed, then "original finished" is printed. This implies that the second subshell is either (a) not using the same lock as the first subshell, (b) failing to get the lock but continuing to execute anyway, or (c) something else. Why don't those locks work?
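
    Case (b) is the usual culprit: flock -w exits with a non-zero status when it times out, but the echo after the ; runs regardless of that status, so the second subshell prints "a finished" anyway (chaining as flock -x -w 2 200 && echo "a finished" would make the failure visible). The same "check the acquisition result yourself" rule applies to any advisory lock. A small Python sketch of the pattern, using fcntl.flock against the same hypothetical ./test.lock file:

        import fcntl, os, sys, time

        LOCKFILE = "./test.lock"

        def run_exclusively(label, hold_seconds):
            fd = os.open(LOCKFILE, os.O_CREAT | os.O_WRONLY, 0o644)
            try:
                # LOCK_NB makes the call fail immediately instead of blocking
                fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except OSError:
                print(label, "could not get the lock, giving up")
                os.close(fd)
                return
            try:
                time.sleep(hold_seconds)
                print(label, "finished")
            finally:
                fcntl.flock(fd, fcntl.LOCK_UN)
                os.close(fd)

        if __name__ == "__main__":
            # e.g. run "python lock_demo.py original 10" and "python lock_demo.py a 0" in parallel
            run_exclusively(sys.argv[1], float(sys.argv[2]))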

    Read the article

  • Which PHP library should I choose to work with CouchDB?

    - by Guss
    I want to try playing with CouchDB for a new project I'm writing (as a hobby, not part of my job). I'm well versed in PHP, but I haven't programmed with CouchDB at all, and I also have little experience with non-SQL databases. From looking at CouchDB's "Getting Started with PHP" document, they recommend using a third-party library or writing your own client against their RESTful HTTP API. I'd rather not mess with writing protocol implementations myself at this point, but what is your experience with writing PHP to work with CouchDB? I haven't tested any of the alternatives yet, but I looked at: PHPillow: I'm interested in the way they implement ORM. I wasn't planning to do ORM, but my problem domain probably maps well to that method. PHP Object Freezer: seems like a poor man's ORM - I could use it to implement an actual ORM, or just as an easy store/retrieve document API, but it seems too primitive. PHP-on-Couch: also a bit simple, but they have an interesting API for views, and from the documentation it looks usable enough. PHP CouchDB Extension: of the listed options this looks like it has the best chance of making it into the PHP mainline itself, and it also has the most complete API. Any opinion anyone wishes to share on each library is welcome.

    Read the article

  • MySQL scoping problem with correlated subqueries

    - by Rolf
    Hi, I have this MySQL query, and it works:

        SELECT nom, prenom,
          (SELECT GROUP_CONCAT(category_en)
             FROM (SELECT DISTINCT category_en
                     FROM categories c
                    WHERE id IN (SELECT DISTINCT category_id
                                   FROM m3allems_to_categories m2c
                                  WHERE m3allem_id = 37)) cS) categories,
          (SELECT GROUP_CONCAT(area_en)
             FROM (SELECT DISTINCT area_en
                     FROM areas c
                    WHERE id IN (SELECT DISTINCT area_id
                                   FROM m3allems_to_areas m2a
                                  WHERE m3allem_id = 37)) aSq) areas
        FROM m3allems m
        WHERE m.id = 37

    The result is:

        nom  prenom  categories             areas
        Man  Multi   Carpentry,Paint,Walls  Beirut,Baalbak,Saida

    It works correctly, but only when I hardcode the id that I want (37) into the query. I want it to work for all entries in the m3allem table, so I try this:

        SELECT nom, prenom,
          (SELECT GROUP_CONCAT(category_en)
             FROM (SELECT DISTINCT category_en
                     FROM categories c
                    WHERE id IN (SELECT DISTINCT category_id
                                   FROM m3allems_to_categories m2c
                                  WHERE m3allem_id = m.id)) cS) categories,
          (SELECT GROUP_CONCAT(area_en)
             FROM (SELECT DISTINCT area_en
                     FROM areas c
                    WHERE id IN (SELECT DISTINCT area_id
                                   FROM m3allems_to_areas m2a
                                  WHERE m3allem_id = m.id)) aSq) areas
        FROM m3allems m

    And I get an error: Unknown column 'm.id' in 'where clause'. Why? From the MySQL manual, 13.2.8.7 Correlated Subqueries: "Scoping rule: MySQL evaluates from inside to outside." So... does this not work when the subquery is in the SELECT list? I did not read anything about that. Does anyone know? What should I do? It took me a long time to build this query... I know it's a monster query, but it gets what I want in a single statement, and I am so close to getting it to work! Can anyone help?

    Read the article

  • Use LaTeX Listings to correctly detect and syntax highlight embedded code of a different language in

    - by D W
    I have scripts that have one-liners or short scripts from other languages within them. How can I have LaTeX listings detect this and change the syntax-highlighting language within the script? This would be especially useful for awk within bash, I believe.

    Bash:

        #!/bin/bash
        ... # usage message to catch bad input without invoking R
        ... # any bash pre-processing of input
        ... # etc
        echo "hello world"

        R --vanilla << EOF
        # Data on motor octane ratings for various gasoline blends
        x <- c(88.5,87.7,83.4,86.7,87.5,91.5,88.6,100.3,
               95.6,93.3,94.7,91.1,91.0,94.2,87.5,89.9,
               88.3,87.6,84.3,86.7,88.2,90.8,88.3,98.8,
               94.2,92.7,93.2,91.0,90.3,93.4,88.5,90.1,
               89.2,88.3,85.3,87.9,88.6,90.9,89.0,96.1,
               93.3,91.8,92.3,90.4,90.1,93.0,88.7,89.9,
               89.8,89.6,87.4,88.9,91.2,89.3,94.4,92.7,
               91.8,91.6,90.4,91.1,92.6,89.8,90.6,91.1,
               90.4,89.3,89.7,90.3,91.6,90.5,93.7,92.7,
               92.2,92.2,91.2,91.0,92.2,90.0,90.7)
        x
        length(x)
        mean(x);var(x)
        stem(x)
        EOF

        perl -n -e '
        @t = split(/\t/);
        %t2 = map { $_ => 1 } split(/,/,$t[1]);
        $t[1] = join(",",keys %t2);
        print join("\t",@t);
        ' knownGeneFromUCSC.txt

        awk -F'\t' '{
        n = split($2, t, ",");
        _2 = x
        split(x, _) # use delete _ if supported
        for (i = 0; ++i <= n;)
        _[t[i]]++ || _2 = _2 ? _2 "," t[i] : t[i]
        $2 = _2
        }-3' OFS='\t' infile

    Python:

        #!/usr/local/bin/python
        print "Hello World"
        os.system("""
        VAR=even;
        sed -i "s/$VAR/odd/" testfile;
        for i in `cat testfile` ; do echo $i; done;
        echo "now the tr command is removing the vowels";
        cat testfile |tr 'aeiou' ' '
        """)

    Read the article

  • Problem Parsing JSON Result with jQuery

    - by senfo
    I am attempting to parse JSON using jQuery and I'm running into issues. In the code below I'm using a static file, but I've also tested using an actual URL. For some reason, the data keeps coming back as null:

        <!DOCTYPE html>
        <html>
        <head>
          <title>JSON Test</title>
          <script src="http://code.jquery.com/jquery-latest.js"></script>
          <script>
          $.getJSON('results.json', function(data) {
            alert(data); // Result is always null
          });
          </script>
        </head>
        <body>
        </body>
        </html>

    The JSON results look like the following:

        {"title":"HEALTHPOINT TYEE CAMPUS","link":"http://www.healthpointchc.org","id":"tag:datawarehouse.hrsa.gov,2010-04-29:/8357","org":"HEALTHPOINT TYEE CAMPUS","address":{"street-address":"4424 S. 188TH St.","locality":"Seatac","region":"Washington","postal-code":"98188-5028"},"tel":"206-444-7746","category":"Service Delivery Site","location":"47.4344818181818 -122.277672727273","update":"2010-04-28T00:00:00-05:00"}

    If I replace my URL with the Flickr API URL (http://api.flickr.com/services/feeds/photos_public.gne?tags=cat&tagmode=any&format=json&jsoncallback=?), I get back a valid JSON result that I am able to make use of. I have successfully validated my JSON at JSONLint, so I've run out of ideas as to what I might be doing wrong. Any thoughts?

    Read the article

  • What would cause native gem extensions on OS X to build but fail to load?

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have Xcode 3.2.1 installed, with gcc 4.2.1. Ruby is 1.8.6, because I'm lazy and a scaredy cat and don't want to upgrade yet, and it is running in 32-bit mode. I built this Ruby from scratch when my MBP ran OS X 10.4. When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's Nokogiri dissing me:

        > require 'rubygems'
        => true
        > require 'nokogiri'
        LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

    This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right, but gem install wasn't throwing any errors. So I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of ‘foo’ with different width due to prototype." I suspect that this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to experience it now, since I've updated my Xcode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OS X to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael

    Read the article

  • Svn repository split problem

    - by Tuminoid
    I want to split a directory from a large Subversion repository into a repository of its own, and keep the history of the files in that directory. I tried the regular way of doing it first:

        svnadmin dump /path/to/repo > largerepo.dump
        cat largerepo.dump | svndumpfilter include my/directory > mydir.dump

    but that does not work, since the directory has been moved and copied over the years and files have been moved into and out of it to other parts of the repository. The result is a lot of these:

        svndumpfilter: Invalid copy source path '/some/old/path'

    Next I tried to include those /some/old/path entries as they appear, and after a long, long list of included files and directories the svndumpfilter completes, BUT importing the resulting dump does not produce the same files the current directory has. So, how do I properly split the directory from that repository while keeping the history? EDIT: I specifically want trunk/myproj to be the trunk in a new repository PLUS have the new repository include none of the other old stuff, i.e. there should be no possibility for anyone to update to an old revision from before the split and get/see the files. The svndumpfilter solution I tried would achieve exactly that; sadly it's not doable since the paths/files have been moved around. The solution by ng isn't acceptable since it's basically a clone plus removal of extras, which keeps ALL the history, not just the relevant myproj history. BUMP: c'mon, there must be someone who definitely knows whether this is doable or not, and how!

    Read the article

  • Architectural conundrum

    - by Dejan
    The worst thing about working on a one-man project is the lack of input that you usually get from coworkers, and because of that you tend to make obvious mistakes. After going down that road for some time, I need some help from the community. I started a little home-brew project that should turn into a portal of sorts, and the main thing bothering me is the persistence layer I have concocted. It should be completely separated from the presentation layer for starters, and an OR mapper is also in there somewhere. This is because I have multiple data stores that have to be used. So the base idea was that the individual "repositories" each operate on their own database, and that the business layer then aggregates the business objects, which are transformed in the presentation layer into view objects. The main problem I face is the following: multiple classes for the same concept - there is a DAL representation of a user, a BL representation of a user and a view representation of a user. I can handle the transformation with a tool, but is this really the right way? I mean, they are all nicely separated, but the overhead is quite something. What do you think? Am I going too deep into the separation-of-concerns rabbit hole, or is this still normal?

    Read the article

  • PHP gettext on Windows

    - by Axsuul
    There are some tutorials out there for gettext (with Poedit)... unfortunately, they're mostly for a UNIX environment. And even more unfortunate is that I am running my WAMP server on Windows XP (but I am developing for a UNIX environment) and none of the tutorials can get gettext working properly for me. From the man page (http://us3.php.net/manual/en/book.gettext.php), it appears that it's a different process in a Windows environment. I've tried out some of the solutions in the comments but I still can't get it to work! Please, I've spent many hours on this; hopefully someone can point me in the right direction to get this thing to work! (And I'm sure there are others out there who share my frustration.) So far with my setup, I'm only getting the output "Hello World!" whereas I should be getting the translated string. Here is my setup/code so far:

        <?php
        // test.php
        if (!defined('LC_MESSAGES')) {
            define('LC_MESSAGES', 6);
        }
        $locale = "deu_DEU"; // apparently the locales are different on a WINDOWS platform
        putenv("LC_ALL=$locale");
        setlocale(LC_ALL, $locale);
        bindtextdomain("greetings", ".\locale");
        textdomain("greetings");
        echo _("Hello World");
        ?>

    Folder structure:

        root:     C:\Program Files\WampServer 2\www
        test.php: C:\Program Files\WampServer 2\www\site
        .po:      C:\Program Files\WampServer 2\www\site\locale\deu_DEU\LC_MESSAGES\greetings.po
        .mo:      C:\Program Files\WampServer 2\www\site\locale\deu_DEU\LC_MESSAGES\greetings.mo

    Please advise! Thanks for your time :)

    Read the article

  • Social Network News Feed Database & Design

    - by pws5068
    I'm designing a news feed system using PHP/MySQL, similar to Facebook's. I have asked a similar question before, but now I've changed the design and I'm looking for feedback. Example notifications:

        User_A commented on User_B's new album. "Hey man nice picture!"
        User_B added a new Photo to [his/her] profile. [show photo thumbnail]

    Initially, I implemented this using excessive columns for Obj1:Type1 | Obj2:Type2 | etc. Now the design is set up using a couple of special keywords and actor/receiver relationships. My database is designed for efficiency - a table of messages joined on a table containing userid, actionid, receiverid, receiverObjectTypeID. Here's a condensed version of what it will look like once joined:

        News_ID | User_ID | Message                                  | Timestamp
        2643    | A       | %a commented on %o's new %r.             | SomeTimestamp
        2644    | B       | %a added a new %r to [his/her] profile.  | SomeTimestamp

        %a = the User_ID of the person doing the action
        %r = the receiving object
        %o = the owner of the receiving object (for example the owner of the album) (NULL if %r is a user)

    Questions: Is this a smart (efficient/scalable) way to move forward? How can I show messages like: "User_B added 4 new photos to his profile."?
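
    For what it's worth, the placeholder scheme is easy to prototype outside SQL, and the "4 new photos" case is really just grouping rows by actor (and action) before rendering. A rough Python sketch; the row layout and names here are assumptions, not the real schema:

        def render(row, names):
            """Substitute the %a / %r / %o placeholders in one feed message."""
            msg = row["message"]
            msg = msg.replace("%a", names[row["actor_id"]])
            msg = msg.replace("%r", row["receiver_label"])
            if row.get("owner_id") is not None:
                msg = msg.replace("%o", names[row["owner_id"]])
            return msg

        def aggregate(rows, names):
            """Very simplified: collapse several rows by one actor into a single line."""
            by_actor = {}
            for r in rows:
                by_actor.setdefault(r["actor_id"], []).append(r)
            lines = []
            for actor, items in by_actor.items():
                if len(items) > 1:
                    lines.append("%s added %d new photos to their profile." % (names[actor], len(items)))
                else:
                    lines.append(render(items[0], names))
            return lines

        names = {"A": "User_A", "B": "User_B"}
        rows = [{"actor_id": "B", "receiver_label": "Photo", "owner_id": None,
                 "message": "%a added a new %r to [his/her] profile."} for _ in range(4)]
        print(aggregate(rows, names))  # -> ['User_B added 4 new photos to their profile.']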

    Read the article

  • grep 5 seconds of input from the serial port inside a shell-script

    - by pica
    I've got a device that I'm operating next to my PC, and as it runs it's spitting log lines out its serial port. I have this wired to my PC, and I can see the log lines fine if I'm using either minicom or something like:

        ttylog -b 115200 -d /dev/ttyS0

    I want to write 5 seconds of the device's serial output to a temp file (or assign it to a variable) and then later grep that file for keywords that will tell me how the device is operating. I've already tried redirecting the output to a file while running the command in the background, then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:

        touch tempFile
        ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
        serialPID=$!
        sleep 5
        #kill ${serialPID} #does not work, gets wrong PID
        killall ttylog
        cat tempFile

    The file gets created but never filled with any data. I can also replace the ttylog line with:

        ttylog -b 115200 -d /dev/ttyS0 |tee -a tempFile &

    In neither case do I ever see any log lines logged to stdout or the log file unless I have multiple versions of ttylog running by mistake (see commented-out line, D'oh). I have no idea what's going on here. It seems to be a failure of redirection within my script. Am I on the right track? Is there a better way to sample 5 seconds of the serial port?
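
    If the shell redirection keeps misbehaving, another option is to bypass ttylog and read the port directly for a fixed window. A sketch using the pyserial package (a swap-in for the shell approach; the port name and baud rate mirror the example above, and the grep keywords are placeholders):

        import time
        import serial  # pip install pyserial

        def sample_serial(port="/dev/ttyS0", baud=115200, seconds=5):
            ser = serial.Serial(port, baud, timeout=1)
            lines = []
            deadline = time.time() + seconds
            try:
                while time.time() < deadline:
                    line = ser.readline().decode("ascii", errors="replace")
                    if line:
                        lines.append(line)
            finally:
                ser.close()
            return lines

        if __name__ == "__main__":
            for line in sample_serial():
                if "ERROR" in line or "READY" in line:  # placeholder keywords
                    print(line.rstrip())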

    Read the article

  • C: lseek() related question.

    - by Andrei Ciobanu
    I want to write some bogus text to a file ("helloworld" text in a file called helloworld), but not starting from the beginning. I was thinking of using the lseek() function. I use the following code:

        #include <unistd.h>
        #include <fcntl.h>
        #include <sys/stat.h>
        #include <sys/types.h>
        #include <stdlib.h>
        #include <stdio.h>

        #define fname "helloworld"
        #define buf_size 16

        int main(){
            char buffer[buf_size];
            int fildes, nbytes;
            off_t ret;
            fildes = open(fname, O_CREAT | O_TRUNC | O_WRONLY, S_IRUSR | S_IWUSR);
            if(fildes < 0){
                printf("\nCannot create file + trunc file.\n");
            }
            //modify offset
            if((ret = lseek(fildes, (off_t) 10, SEEK_END)) < (off_t) 0){
                fprintf(stdout, "\nCannot modify offset.\n");
            }
            printf("ret = %d\n", (int)ret);
            if(write(fildes, fname, buf_size) < 0){
                fprintf(stdout, "\nWrite failed.\n");
            }
            close(fildes);
            return (0);
        }

    It compiles cleanly and runs without any apparent errors. Still, if I do:

        cat helloworld

    the output is not what I expected, but:

        helloworld Can

    Where is "Can" coming from, and where are my empty spaces? Should I expect zeros instead of spaces? If I try to open helloworld with gedit, an error occurs, complaining that the file's character encoding is unknown.
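
    Two separate things explain the surprise here. First, lseek() past the end of a freshly truncated file creates a "hole": the skipped bytes read back as NUL ('\0') characters, not spaces, which is also why gedit complains about the character encoding. Second, write(fildes, fname, buf_size) asks for 16 bytes, but the literal "helloworld" only supplies 11 (including its terminating NUL), so the remaining 5 bytes are whatever happens to sit next to it in memory - quite possibly the start of the adjacent "Cannot create file..." literal, hence the "Can". The hole part is easy to reproduce; a short Python sketch (the scratch file name is arbitrary):

        import os

        path = "hole_demo.bin"
        fd = os.open(path, os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o600)
        os.lseek(fd, 10, os.SEEK_END)   # seek 10 bytes past the (empty) end: creates a hole
        os.write(fd, b"helloworld")     # write exactly the bytes we mean to write
        os.close(fd)

        with open(path, "rb") as f:
            print(repr(f.read()))       # b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00helloworld'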

    Read the article

  • Connecting to SFTP service via Java Runtime process

    - by ohseekay
    Under my project, I have a Java class file, inside of which I have a routine that executes the following external SFTP script:

        #!/bin/sh
        echo "cd AV/OASIS" >> sftp
        echo "put $1 $2" >> sftp
        echo "get AV/OASIS/$2 $3$2" >> sftp
        echo "bye" >> sftp
        /usr/local/bin/sftp -b sftp id@domain
        cat /dev/null > sftp
        exit 0

    The Java code which executes the script file is as below:

        String script = "full path of script";
        Process p = Runtime.getRuntime().exec(script + " " + param1 + " " + param2 + " " + param3);

    However, I'm not sure why, but the log generated by the class file always shows the error "Host key verification failed. Connection closed." I isolated the line in the script which connects to the remote machine and ran it on the local machine (where this class file and script file are stored), and the command executed successfully. I also manually ran the command which the Java class file will execute, and it tested okay:

        $ script.sh param1 param2 param3

    I tried to look up the error message on the Internet, and apparently it has something to do with known_hosts. Could this be the reason, or is there something else I'm missing? Thanks so much!
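
    "Host key verification failed" is the stock OpenSSH message for a host key that is not in the known_hosts file of the account actually running the command, which is the usual reason a script works from an interactive shell but fails when launched from an app server or daemon running as another user (or with a different HOME). For comparison, here is roughly what the same transfer looks like scripted with Python's paramiko library instead of the sftp -b batch file; the load_system_host_keys() call is the step that stands in for known_hosts (host, user and paths are placeholders):

        import paramiko  # pip install paramiko

        def transfer(host, user, local_path, remote_name):
            client = paramiko.SSHClient()
            # trust the keys already recorded in ~/.ssh/known_hosts for the current user;
            # whichever account runs this needs the host's entry there
            client.load_system_host_keys()
            client.connect(host, username=user)
            sftp = client.open_sftp()
            try:
                sftp.put(local_path, "AV/OASIS/" + remote_name)
                sftp.get("AV/OASIS/" + remote_name, remote_name + ".copy")
            finally:
                sftp.close()
                client.close()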

    Read the article

  • paypal verify payment

    - by yozhik
    I am testing PayPal payments through Sandbox. So what do I do: Make a payment from my Android device, using SDK AppID: "APP-80W284485P519543T". Receive RESULT_OK in applicationResult and receive response on server side through IPN service. Now I am taking all responce from IPN and send it to paypal verification sandbox server to verify payment. It cat return (VERIFIED or INVALID). But the problem is that it is return INVALID. So whats can be the problem? What I am doing wrong? Thanks. This is what I send to verify: https://www.sandbox.paypal.com/cgi-bin/webscr?cmd=_notify-validate&payment_request_date=Mon+Mar+26+02%3A37%3A10+PDT+2012&fees_payer=EACHRECEIVER&transaction[0].is_primary_receiver=false&memo=Buy+1500+coins&transaction_type=Adaptive+Payment+PAY&verify_sign=AWimjEpfvS2eR6IgBwHtiwM0rMDUA.b2twU2ADjkAY-kg5szeluxcqKb&log_default_shipping_address_in_transaction=false&pay_key=AP-2YR77828AV574621G&transaction[0].amount=USD+1.99&reverse_all_parallel_payments_on_error=true&ipn_notification_url=http%3A%2F%2Fdev-vs.upiter.com%2Fvspayment%2Fproviders%2Fvs%2Ffb%2Fpaypalcallback&action_type=CREATE&notify_version=UNVERSIONED&transaction[0].status_for_sender_txn=Pending&test_ipn=1&cancel_url=https%3A%2F%2Fwww.paypal.com&transaction[0].pending_reason=UNILATERAL&status=COMPLETED&charset=windows-1252&transaction[0].paymentType=GOODS&request_body=&request_url=http%3A%2F%2Fdev-vs-mobile.stagika.com%2Fvspayment%2Fproviders%2Fvs%2Ffb%2Fpaypalcallback&return_url=https%3A%2F%2Fwww.paypal.com&transaction[0].receiver=a.merchant1.kv%40gmail.com&request_method=POST&transaction[0].id_for_sender_txn=0X355330VH030952T&sender.useCredentials=true

    Read the article

  • m2crypto aes-256-cbc not working against encoded openssl files.

    - by Gary
    I can do this with openssl:

        $ echo 'this is text' > text.1
        $ openssl enc -aes-256-cbc -a -k "thisisapassword" -in text.1 -out text.enc
        $ openssl enc -d -aes-256-cbc -a -k "thisisapassword" -in text.enc -out text.2
        $ cat text.2
        this is text

    Now, how do I do the same in M2Crypto? The documentation is lacking on this. I looked at the svn test cases; still nothing there. I found one sample, http://passingcuriosity.com/2009/aes-encryption-in-python-with-m2crypto/ (changed to aes_256_cbc), and it will encrypt/decrypt its own strings, but it cannot decrypt anything made with openssl, and anything it encrypts isn't decryptable from openssl. I need to be able to encrypt/decrypt with aes-256-cbc, as we have many files already encrypted this way and many other systems in place that handle the aes-256-cbc output just fine. We use password phrases only, with no IV, so setting the IV to \0 * 16 makes sense, but I'm not sure if this is also part of the problem. Does anyone have any working samples of AES-256 in M2Crypto that are compatible with openssl? I will also be trying some additional libraries and seeing if they work any better.
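
    For what it's worth, the incompatibility is usually not AES itself but the key handling: openssl enc -k derives both the key and an IV from the passphrase plus a random salt using its EVP_BytesToKey scheme (MD5, one round), prepends a "Salted__" header, and base64-wraps everything because of -a. M2Crypto code has to reproduce that derivation to interoperate. A sketch of the decryption side, under the assumption that text.enc was produced by the openssl command above (op=0 selects decryption in M2Crypto's EVP.Cipher):

        import base64, hashlib
        from M2Crypto import EVP

        def bytes_to_key(password, salt, key_len=32, iv_len=16):
            """Mimic OpenSSL's EVP_BytesToKey (MD5, count=1) derivation."""
            d, block = b"", b""
            while len(d) < key_len + iv_len:
                block = hashlib.md5(block + password + salt).digest()
                d += block
            return d[:key_len], d[key_len:key_len + iv_len]

        def openssl_decrypt(b64_blob, password):
            raw = base64.b64decode(b64_blob)
            assert raw[:8] == b"Salted__", "not a salted 'openssl enc' blob"
            salt, body = raw[8:16], raw[16:]
            key, iv = bytes_to_key(password, salt)
            cipher = EVP.Cipher(alg="aes_256_cbc", key=key, iv=iv, op=0)  # op=0 -> decrypt
            return cipher.update(body) + cipher.final()

        if __name__ == "__main__":
            with open("text.enc", "rb") as f:
                print(openssl_decrypt(f.read(), b"thisisapassword"))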

    Read the article

  • Why is curl in Ruby slower than command-line curl?

    - by Stiivi
    I am trying to download more than 1M pages (URLs ending with a sequence ID). I have implemented a kind of multi-purpose download manager with a configurable number of download threads and one processing thread. The downloader downloads files in batches:

        curl = Curl::Easy.new
        batch_urls.each { |url_info|
          curl.url = url_info[:url]
          curl.perform
          file = File.new(url_info[:file], "wb")
          file << curl.body_str
          file.close
          # ... some other stuff
        }

    I have tried downloading an 8000-page sample. When using the code above, I get 1000 pages in 2 minutes. When I write all the URLs into a file and do this in the shell:

        cat list | xargs curl

    I get all 8000 pages in two minutes. The thing is, I need it in Ruby code, because there is other monitoring and processing code. I have tried: Curl::Multi - it is somewhat faster, but misses 50-90% of the files (it does not download them and gives no reason/code); multiple threads with Curl::Easy - around the same speed as single threaded. Why is a reused Curl::Easy slower than subsequent command-line curl calls, and how can I make it faster? Or what am I doing wrong? I would prefer to fix my download manager code rather than download this case differently. Before this, I was calling command-line wget, which I provided with a file containing the list of URLs. However, not all errors were handled, and it was also not possible to specify an output file for each URL separately when using a URL list. Now it seems to me that the best way would be to use multiple threads with a system call to the 'curl' command. But why, when I can use Curl directly from Ruby? Code for the download manager is here, if it might help: Download Manager (I have played with timeouts, from not setting them to various values; it did not seem to help). Any hints appreciated.

    Read the article

  • Generate a set of strings with maximum edit distance

    - by Kevin Jacobs
    Problem 1: I'd like to generate a set of n strings of fixed length m from alphabet s such that the minimum Levenshtein distance (edit distance) between any two strings is greater than some constant c. Obviously, I can use randomization methods (e.g., a genetic algorithm), but was hoping that this may be a well-studied problem in computer science or mathematics with some informative literature and an efficient algorithm or three. Problem 2: Same as above except that adjacent characters cannot repeat; the i'th character in each string may not be equal to the i+1'th character. E.g., 'CAT', 'AGA' and 'TAG' are allowed, 'GAA', 'AAT', and 'AAA' are not. Background: The basis for this problem is bioinformatic and involves designing unique DNA tags that can be attached to biologically derived DNA fragments and then sequenced using a fancy second generation sequencer. The goal is to be able to recognize each tag, allowing for random insertion, deletion, and substitution errors. The specific DNA sequencing technology has a relatively low error rate per base (~1%), but is less precise when a single base is repeated 2 or more times (motivating the additional constraints imposed in problem 2).
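
    Absent a known closed-form construction, one baseline is a greedy filter: enumerate (or randomly sample) candidate strings and keep a candidate only if it is more than c edits away from everything kept so far, skipping candidates with repeated adjacent characters for problem 2. It proves nothing about optimality, but it gives a working tag set to compare against. A rough Python sketch (the alphabet, length and thresholds are just example values):

        import itertools

        def levenshtein(a, b):
            # classic two-row dynamic-programming edit distance
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def greedy_tags(alphabet="ACGT", m=6, c=2, n=20, no_adjacent_repeats=True):
            tags = []
            for cand in itertools.product(alphabet, repeat=m):
                tag = "".join(cand)
                if no_adjacent_repeats and any(x == y for x, y in zip(tag, tag[1:])):
                    continue  # problem 2: reject 'AAT'-style candidates
                if all(levenshtein(tag, t) > c for t in tags):
                    tags.append(tag)
                    if len(tags) == n:
                        break
            return tags

        if __name__ == "__main__":
            print(greedy_tags())

    The exhaustive scan gets slow quickly as m grows; the coding-theory literature on insertion/deletion-correcting codes (e.g. Levenshtein codes and watermark codes) is where to look for more principled constructions.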

    Read the article

  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about that topic, but I'm not sure how to proceed. I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise), and I work on PHP/MySQL projects. What's an efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. So basically I would like to keep track of schema versions alongside my code revisions. [edit] Momentary solution: for the moment I decided I will just make a schema dump, plus one with the necessary initial data, whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit] [edit2] Plus I'm now also using a third file called increments.sql where I put all the changes with dates, etc. to make it easy to trace the change history in one file. From time to time I integrate the changes into the two other files and empty increments.sql. [/edit2]
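
    A small step up from a single increments.sql, without reaching for a full migration framework, is to keep each change as a numbered file (001_create_users.sql, 002_add_index.sql, ...) under SVN and apply only the ones a given database hasn't seen yet. A throwaway sketch of that idea in Python, shelling out to the mysql client; the directory layout, the state file and the assumption that credentials live in ~/.my.cnf are all made up for the example:

        import glob, os, subprocess

        MIGRATIONS_DIR = "db/migrations"   # e.g. db/migrations/001_create_users.sql
        STATE_FILE = "db/.last_applied"    # records the number of the last migration applied

        def last_applied():
            return int(open(STATE_FILE).read()) if os.path.exists(STATE_FILE) else 0

        def apply_pending(database):
            done = last_applied()
            for path in sorted(glob.glob(os.path.join(MIGRATIONS_DIR, "*.sql"))):
                number = int(os.path.basename(path).split("_", 1)[0])
                if number <= done:
                    continue
                print("applying", path)
                with open(path) as sql:
                    subprocess.run(["mysql", database], stdin=sql, check=True)
                with open(STATE_FILE, "w") as f:
                    f.write(str(number))

        if __name__ == "__main__":
            apply_pending("myproject")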

    Read the article

  • Completely bizarre Firefox CSS bug

    - by Jason
    I've been doing front-end development for a long time, and I have NEVER come across a bug like this before... Save the following HTML to a file and view it in Firefox (mine is 3.6.3):

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <style type="text/css">
        body { font-family: Helvetica, Sans-Serif;}
        h2 {font-weight: normal;}
        </style>
        </head>
        <body>
        <h2>Some normal text <strong>some bold text</strong> weird huh?</h2>
        </body>
        </html>

    If you don't want to give it a shot, the output is like your cat walked across your keyboard while Character Map was turned on, except in the strong tags. I feel like this may be a font issue? When I get rid of font-weight: normal it goes back to normal, but I don't want everything to be bolded in my h2... Anyone have any ideas? More importantly, is anyone able to reproduce this?? Thanks.

    Read the article

  • Ruby Fileutils.cp_r Permission Denied when :preserve => true

    - by slawley
    Hello, I am trying to implement a poor-man's backup/mirroring script and am having some trouble. I am on Windows-XP, using Ruby's FileUtils module to recursively copy files. So long as I don't set the :preserve flag to true, everything works fine. Works: FileUtils.cp_r('Source_dir', 'Dest_dir', :verbose => true) Doesn't work: FileUtils.cp_r('Source_dir', 'Dest_dir', :verbose => true, :preserve => true) I have full permissions on the Dest_dir as it's on the desktop of my local machine and I just created it. I can copy and delete files and folders, but apparently changing, or maintaining the file attributes with :preserve isn't working. I haven't had a chance to try this on a Mac or linux box, but from reading around online the :preserve flag is a normal stumbling block to come up against in a Windows environment. In a similar line of questioning, what is the default behavior for FileUtils.cp_r when it encounters an existing file at the destination directory? Simply overwrite and replace everything in Destination with whatever is in Source, or can I skip a file with conflicts and just log it for resolution later? (If this should be a separate question, just let me know and I'll make it one.) Thanks, Spencer

    Read the article

  • What are the benefits of the different PHP compression libraries?

    - by Christopher W. Allen-Poole
    I've been looking into ways to compress PHP libraries, and I've found several libraries which might be useful, but I really don't know much about them. I've specifically been reading about bcompiler and PHAR. Is there any performance benefit in either of these? Are there any "gotchas" I need to watch out for? What are the relative benefits? Do either of them add to or detract from performance? I'm also interested in learning of other libraries which might be out there but are not obvious in the documentation. As an aside, does anyone happen to know whether these work more like zip files which just happen to have the code in there, or whether they operate more like Python's pre-compiling, which actually runs a pseudo-compiler? ======================= EDIT ======================= I've been asked, "What are you trying to accomplish?" Well, I suppose the answer is that this is all hypothetical. It is a combination of these: What if my pet project becomes the most popular web project on earth and I want to distribute it quickly and easily? (Hey, a man can dream, right?) It also seems that if PHAR can be used easily, it would be the best way to create a Subversion snapshot. Python has this really cool pre-compiling policy; I wonder if PHP has something like that? These libraries seem to do something similar. Will they do that? Hey, these libraries seem pretty neat, but I'd like clarification on the differences, as they seem to do the same thing.

    Read the article

  • Cross-domain templating with Javascript

    - by Husky
    I'm currently building a Javascript library that can be used to easily create embeddable media based on the URL of a media file, and then be controlled using Javascript methods and events (think something like the Flash / Silverlight JW Player). Of course, I could simply concatenate all the HTML tags in the Javascript library and send that to the browser:

        function player(url) {
          document.write('<object type="foo"><param name="something" value="bar">' +
                         '<param name="source" value="' + url + '"/></object>');
        }

    But I think this is a very ugly practice that tends to create unmanageable code that is unreadable when you review it a few weeks later. So a templating solution seems to be the way to go. I've been looking at EJS because it loads the templates using AJAX, so you can manage your templates in separate files instead of directly on the HTML page. There's one gotcha with that: my library needs to be completely cross-domain. The library itself could be located on foo.com while the serving site could be located on bar.com, so if bar.com wanted to add a media player using the library, it would need to make an AJAX call to a template located on foo.com, which won't work because of the same-origin policy in browsers. AFAIK, there's no library that uses something like JSONP to read and write templates to get around this problem. Could anyone point me to a solution for this problem?

    Read the article

  • How do I show a select input for an STI column in a Rails model?

    - by James A. Rosen
    I have a model with single-table inheritance on the type column:

        class Pet < ActiveRecord::Base
          TYPES = [Dog, Cat, Hamster]
          validates_presence_of :name
        end

    I want to offer a <select> dropdown on the new and edit pages:

        <% form_for @model do |f| %>
          <%= f.label :name %>
          <%= f.text_input :name %>
          <%= f.label :type %>
          <%= f.select :type, Pet::TYPES.map { |t| [t.human_name, t.to_s] } %>
        <% end %>

    That gives me the following error:

        ActionView::TemplateError (wrong argument type String (expected Module))

    I read a suggestion to use an alias for the field #type, since Ruby considers that a reserved word that's the same as #class. I tried both

        class Pet < ActiveRecord::Base
          ...
          alias_attribute :klass, :type
        end

    and

        class Pet < ActiveRecord::Base
          ...
          def klass
            self.type
          end

          def klass=(k)
            self.type = k
          end
        end

    Neither worked. Any suggestions? Oddly, it works fine on my machine (MRI 1.8.6 on RVM), but fails on the staging server (MRI 1.8.7, not on RVM).

    Read the article
