Search Results

Search found 26179 results on 1048 pages for 'linux from scratch'.

Page 510/1048

  • HOW-TO Make computer sing

    - by Ofir
    Hi, I'm trying to develop an online application where the user writes some text and the software sings it back to the user. I can currently generate an audio file of the words spoken by the computer using espeak, but I have no idea how to make it sound like a song or how to add rhythm to it. I'm able to change the pitch and tempo using rubberband, but that's as far as I've gotten. Does anyone have a clue how to make this happen?
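
    A rough sketch of the kind of command pipeline this describes, assuming the stock espeak and rubberband command-line tools (the word, pitch and tempo values are made-up placeholders):

        # Render a word to a WAV file, then shift its pitch and stretch its tempo.
        # Repeating this per syllable with different offsets and durations is one
        # crude way to approximate a melody.
        espeak -w word.wav "hello"
        rubberband -p 4 -t 1.5 word.wav word_note.wav   # +4 semitones, 1.5x longer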

    Read the article

  • How do I test if a variable is a number in bash?

    - by Flávio Amieiro
    I just can't figure out how to make sure an argument passed to my script is a number or not. All I want to do is something like this:

        test *isnumber* $1 && VAR=$1 || echo "need a number"

    Any help?

    UPDATE: I managed (with Charles' help) to do it, but I'm not yet sure it's the best way to do that (even though it worked in my tests). This is how it ended up:

        [[ $1 =~ "^[0-9]+$" ]] && echo "numero" && exit 0 || echo "nao numero" && exit 1
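
    For reference, a minimal sketch of the same check wrapped in a function. The pattern is left unquoted, since newer bash versions treat a quoted right-hand side of =~ as a literal string rather than a regex:

        # Succeeds only when the first argument is a non-negative integer.
        is_number() {
            [[ $1 =~ ^[0-9]+$ ]]
        }

        if is_number "$1"; then
            VAR=$1
        else
            echo "need a number" >&2
            exit 1
        fi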

    Read the article

  • bash command history update before execution of command

    - by Jon
    Hi, Bash's command history is great; it is especially useful when adding history -a to PROMPT_COMMAND. However, I'm wondering if there is a way to log each command to a file as soon as the Return key is pressed, i.e. before the command starts and not on its completion (the PROMPT_COMMAND approach only saves the command once the prompt appears again). I read about auditing programs like snoopy and session recorders like script, but they seemed too complex for the simple question I have. I guess that disabling the output logging in script (so that it records only the commands) would already be a step in the right direction, but isn't there a quicker way to solve this problem? Thanks, Jon
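
    A minimal sketch of one way to do this without an external auditing tool, using bash's DEBUG trap, which fires just before each simple command runs:

        # BASH_COMMAND holds the command that is about to execute.
        log_command() {
            printf '%s %s\n' "$(date '+%F %T')" "$BASH_COMMAND" >> ~/.bash_prelog
        }
        trap log_command DEBUG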

    Read the article

  • get_user running in kernel mode returns an error

    - by Fangkai Yang
    Hi, all, I have a problem with the get_user() macro. What I did is as follows. I run the following program:

        #include <stdio.h>
        #include <unistd.h>

        int main()
        {
            int a = 20;
            printf("address of a: %p", &a);
            sleep(200);
            return 0;
        }

    When the program runs, it outputs the address of a, say, 0xbff91914. Then I pass this address to a module running in kernel mode that retrieves the contents at this address (at the time I did this, I also made sure the process didn't terminate, because I put it to sleep for 200 seconds...). The address is first sent as a string, and I cast it into a pointer type:

        int *ptr = (int *)simple_strtol(buffer, NULL, 16);
        printk("address: %p", ptr);  /* I use this line to make sure the cast is correct. */

    When running, it outputs bff91914, as expected.

        int val = 0;
        int res;
        res = get_user(val, (int *)ptr);

    However, res is always nonzero, meaning that get_user returns an error. I am wondering what the problem is.... Thank you!! -- Fangkai

    Read the article

  • pthread and recursively calling execvp in C

    - by eduke
    To begin, I'm sorry for my English :) I'm looking for a way to create a thread each time my program finds a directory, in order to call the program itself but with a new argv[2] argument (which is the current dir). I did it successfully with fork(), but with pthreads I have some difficulties. I don't know if I can do something like this:

        #include <unistd.h>
        #include <stdlib.h>
        #include <stdio.h>
        #include <string.h>
        #include <pthread.h>
        #include <sys/types.h>
        #include <sys/stat.h>
        #include <sys/wait.h>
        #include <dirent.h>

        int main(int argc, char **argv)
        {
            pthread_t threadID[10] = {0};
            DIR *dir;
            struct dirent *entry;
            struct stat status;
            pthread_attr_t attr;
            pthread_attr_init(&attr);
            int i = 0;
            char *res;
            char *tmp;
            char *file;

            if (argc != 3) {
                printf("Usage : %s <file> <dir>\n", argv[0]);
                exit(EXIT_FAILURE);
            }
            if (stat(argv[2], &status) == 0) {
                dir = opendir(argv[2]);
                file = argv[1];
            }
            else
                exit(EXIT_FAILURE);

            while ((entry = readdir(dir))) {
                if (strcmp(entry->d_name, ".") && strcmp(entry->d_name, "..")) {
                    tmp = malloc(strlen(argv[2]) + strlen(entry->d_name) + 2);
                    strcpy(tmp, argv[2]);
                    strcat(tmp, "/");
                    strcat(tmp, entry->d_name);
                    stat(tmp, &status);
                    if (S_ISDIR(status.st_mode)) {
                        argv[2] = tmp;
                        pthread_create(&threadID[i], &attr, execvp(argv[0], argv), NULL);
                        printf("New thread created : %d", i);
                        i++;
                    }
                    else if (!strcmp(entry->d_name, file)) {
                        printf(" %s was found - Thread number = %d\n", tmp, i);
                        break;
                    }
                    free(tmp);
                }
            }
            pthread_join(threadID[i], &res);
            exit(EXIT_SUCCESS);
        }

    Actually it doesn't work:

        pthread_create(&threadID[i], &attr, execvp(argv[0], argv), NULL);

    I have no runtime error, but when the file to find is in another directory, the thread is not created and so execvp(argv[0], argv) is not called... Thank you for your help, Simon

    Read the article

  • error loading (TypeLoadException) on asp.net/xsp/mono on debian/opensuse

    - by acidzombie24
    When I restart Apache and load my website I get the first error below. I have no idea what the problem is. If I reload the page again (without restarting Apache) I get the 2nd error, probably because the first error occurred and BaseUser is the first class/func that Application_Start uses. Why am I getting this load exception? What's messed up is I tried using Mono's VMware image to debug it and I got the very same exception (until I restarted, which now refuses to give me anything but 404 errors). However, when I use MonoDevelop to run the project, the site runs PERFECT. WTF. Any ideas?

        Server Error in '/' Application

        A type load exception has occurred.
        Description: HTTP 500. Error processing request.

        Stack Trace:
        System.TypeLoadException: A type load exception has occurred.
          at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (System.Reflection.MonoMethod*,object,object[],System.Exception&)
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0

        Version information: Mono Runtime Version: 2.8.1 (tarball Mon Dec 27 10:20:03 UTC 2010); ASP.NET Version: 2.0.50727.1433

    Second:

        Server Error in '/' Application

        Could not load type 'mynamespace.BaseUser' from assembly 'mynamespace, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.
        Description: HTTP 500. Error processing request.

        Stack Trace:
        System.TypeLoadException: Could not load type 'mynamespace.BaseUser' from assembly 'mynamespace, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'.
          at mynamespace.Global.Application_Start (System.Object sender, System.EventArgs e) [0x00000] in <filename unknown>:0
          at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (System.Reflection.MonoMethod*,object,object[],System.Exception&)
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0

        Version information: Mono Runtime Version: 2.8.1 (tarball Mon Dec 27 10:20:03 UTC 2010); ASP.NET Version: 2.0.50727.1433

    -edit- I'll mention that I tried MonoDevelop's build of my site on both openSUSE and my website and I get the exact same problem.

    Read the article

  • Displaying Video4Linux frames with LessTif

    - by Dan
    Hi everyone, I have a simple test application (in C) that grabs mmapped frames from my v4l device. And now, I'd like to display these frames within a tiny LessTif application (like GNOME Cheese, but only displaying the frames - nothing else). Do you have an idea how to implement such a LessTif program? Thanks, Dan

    Read the article

  • Too many open files in one of my Java routines

    - by Irfan Zulfiqar
    I have multithreaded code that has to generate a set of objects and write them to a file. When I run it I sometimes get a "Too many open files" message in an exception. I have checked the code to make sure that all the file streams are being closed properly. Here is the stack trace. When I do ulimit -a, the open files limit is set to 1024. We think increasing this number is not a viable option / solution.

        [java] java.io.FileNotFoundException: /export/event_1_0.dtd (Too many open files)
        [java]     at java.io.FileInputStream.open(Native Method)
        [java]     at java.io.FileInputStream.<init>(FileInputStream.java:106)
        [java]     at java.io.FileInputStream.<init>(FileInputStream.java:66)
        [java]     at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:70)
        [java]     at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:161)
        [java]     at java.net.URL.openStream(URL.java:1010)

    Now what we have identified so far, by looking closely at the list of open files, is that the VM is opening the same class file multiple times:

        /export/BaseEvent.class              236
        /export/EventType1BaseEvent.class     60
        /export/EventType2BaseEvent.class     48
        /export/EventType2.class              30
        /export/EventType1.class              14

    where BaseEvent is the parent of all the classes, and EventType1 and EventType2 inherit from EventType1BaseEvent and EventType2BaseEvent respectively. Why would a class loader load the same class file 200+ times? It seems it is opening up the base class as many times as it creates any child instance. Is this normal? Can it be handled any other way, apart from increasing the number of open files?
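
    For what it's worth, a quick sketch of how the open descriptors of the running JVM can be inspected while the problem occurs (the PID is a placeholder); the per-file counts above can be reproduced this way:

        # Count the open descriptors, then list the most frequently opened paths.
        lsof -p 12345 | wc -l
        lsof -p 12345 | awk '{print $NF}' | sort | uniq -c | sort -rn | head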

    Read the article

  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment stuff, which in one build step runs certain shell commands. However, that script has become sufficiently complicated that I want to "extract" it from Hudson into a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file.

    My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others. The script references those params as if they were environment variables:

        echo "Using AMI $AMI_ID and type $TYPE"

    Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables so that I don't need to change the script? Or is my best option to alter the script to take command line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ...)? I tried something like this, but the script doesn't get the correct values:

        export AMI_ID=$AMI_ID
        export TYPE=$TYPE
        external-script.sh   # this tries to use e.g. $AMI_ID

    Bonus question: when the script is inside Hudson, the "console output" will contain both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line its output:

        + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd
        ADDRESS 77.125.116.139 i-aa3487fd

    When calling an external script, Hudson output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson - anyone know a trick to achieve this?
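
    A sketch of what the build step could look like under a couple of assumptions: parameterised Hudson builds normally export the parameters (AMI_ID, TYPE, ...) as environment variables to each build step, so a child script inherits them without extra export lines, and bash -x gives the "echo on" behaviour asked about:

        # Option A: rely on Hudson exporting the parameters to the environment.
        # Each executed line is echoed to the console with a leading '+',
        # matching the ec2-associate-address example above.
        bash -x ./external-script.sh

        # Option B: pass the values explicitly and read them as $1, $2 inside.
        bash -x ./external-script.sh "$AMI_ID" "$TYPE"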

    Read the article

  • How to include all objects of an archive in a shared object?

    - by Didier Trosset
    When compiling our project, we create several archives (static libraries), say liby.a and libz.a, each of which contains an object file defining a function, y_function() and z_function(). Then these archives are joined in a shared object, say libyz.so, which is one of our main distributable targets.

        g++ -fPIC -c -o y.o y.cpp
        ar cr liby.a y.o
        g++ -fPIC -c -o z.o z.cpp
        ar cr libz.a z.o
        g++ -shared -L. -ly -lz -o libyz.so

    When using this shared object in the example program, say x.c, the link fails because of undefined references to the functions y_function() and z_function():

        g++ x.o -L. -lyz -o xyz

    It works, however, when I link the final executable directly with the archives (static libraries):

        g++ x.o -L. -ly -lz -o xyz

    My guess is that the object files contained in the archives are not linked into the shared library because they are not used in it. How to force inclusion?

    Edit: Inclusion can be forced using the --whole-archive ld option. But it results in link errors:

        g++ -shared '-Wl,--whole-archive' -L. -ly -lz -o libyz.so
        /usr/lib/libc_nonshared.a(elf-init.oS): In function `__libc_csu_init':
        (.text+0x1d): undefined reference to `__init_array_end'
        /usr/bin/ld: /usr/lib/libc_nonshared.a(elf-init.oS): relocation R_X86_64_PC32 against undefined hidden symbol `__init_array_end' can not be used when making a shared object
        /usr/bin/ld: final link failed: Bad value

    Any idea where this comes from?
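
    A sketch of the usual way to scope that option so it applies only to the two archives and not to the system libraries the compiler driver links implicitly (libc_nonshared.a being pulled in whole is the likely source of the __init_array_end error above):

        # Turn --whole-archive back off after liby.a and libz.a so that
        # libc_nonshared.a and friends are linked normally again.
        g++ -shared -o libyz.so \
            -L. -Wl,--whole-archive -ly -lz -Wl,--no-whole-archive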

    Read the article

  • Perl: Edit hyperlinks in nested tags that aren't on separate lines

    - by user305801
    I have an interesting problem. I wrote the following Perl script to recursively loop through a directory and, for all img/script/a tags in all HTML files, do the following:

    - Convert the entire URL to lowercase
    - Replace spaces and %20 with underscores

    The script works great except when an image tag is wrapped with an anchor tag. Is there a way to modify the current script to also be able to manipulate the links for nested tags that are not on separate lines? Basically, if I have <a href="..."><img src="..."></a>, the script will only change the link in the anchor tag but skip the img tag.

        #!/usr/bin/perl
        use File::Find;

        $input = "/var/www/tecnew/";

        sub process {
            if (-T and m/.+\.(htm|html)/i) {
                #print "htm/html: $_\n";
                open(FILE, "+<$_") or die "couldn't open file $!\n";
                $out = '';
                while (<FILE>) {
                    $cur_line = $_;
                    if ($cur_line =~ m/<a.*>/i) {
                        print "cur_line (unaltered) $cur_line\n";
                        $cur_line =~ /(^.* href=\")(.+?)(\".*$)/i;
                        $beg = $1;
                        $link = html_clean($2);
                        $end = $3;
                        $cur_line = $beg.$link.$end;
                        print "cur_line (altered) $cur_line\n";
                    }
                    if ($cur_line =~ m/(<img.*>|<script.*>)/i) {
                        print "cur_line (unaltered) $cur_line\n";
                        $cur_line =~ /(^.* src=\")(.+?)(\".*$)/i;
                        $beg = $1;
                        $link = html_clean($2);
                        $end = $3;
                        $cur_line = $beg.$link.$end;
                        print "cur_line (altered) $cur_line\n";
                    }
                    $out .= $cur_line;
                }
                seek(FILE, 0, 0) or die "can't seek to start of file: $!";
                print FILE $out or die "can't print to file: $1";
                truncate(FILE, tell(FILE)) or die "can't truncate file: $!";
                close(FILE) or die "can't close file: $!";
            }
        }

        find(\&process, $input);

        sub html_clean {
            my ($input_string) = @_;
            $input_string = lc($input_string);
            $input_string =~ s/%20|\s/_/g;
            return $input_string;
        }

    Read the article

  • ssh-keygen accepting stdin

    - by Ryan
    I am trying to call ssh-keygen through bash using a variable as input instead of a file, to get the fingerprint of a public key. This method does not work, as it says the key file is invalid (it's correct for sure):

        echo $pubkey | ssh-keygen -lf /dev/stdin

    This does work:

        ssh-keygen -lf /dev/stdin < alpha.pub

    This does not work because I get an ambiguous redirect:

        ssh-keygen -lf /dev/stdin < $(echo $pubkey)

    I would appreciate some insight as to how to get ssh-keygen to read a public key from a variable and, if possible, an explanation as to why the redirects aren't doing what I think they should be doing. I searched online but many of the redirect tutorials didn't seem to answer my questions.
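
    Two sketched alternatives that keep the key in a variable, assuming a reasonably recent bash:

        # Here-string: feeds the variable's contents to ssh-keygen's stdin.
        ssh-keygen -lf /dev/stdin <<< "$pubkey"

        # Process substitution: expands to a file name backed by echo's output.
        # By contrast, < $(echo $pubkey) word-splits the key into many words,
        # which is why bash reports an ambiguous redirect.
        ssh-keygen -lf <(echo "$pubkey")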

    Read the article

  • Intermittent "No Database Selected" in PHP/MySQL?

    - by ANE
    Have a PHP/MySQL form with a dropdown box containing a list of 350 names. When any random name is selected, sometimes it works and displays info about that name from the database, and sometimes the form gives the error "No Database Selected". Here's what I've tried, pretty much grasping at straws as I'm not a programmer:

    - Increasing max_connections in /etc/my.cnf from 200 to 2000 (even though only 4-5 connections are made and it's a lightly used server)
    - Changing mysql_pconnect to mysql_connect
    - Adding the word true to this connection string:

        $mysql = mysql_pconnect($hostname_mysql, $username_mysql, $password_mysql, true) or trigger_error(mysql_error(), E_USER_ERROR);

    - Changing the word require_once to require on this line:

        <?php require('/home/user/Connections/mysql.php'); ?>

    - Enabling MySQL and PHP query and error logging (no errors logged)

    Here is the code: [removed old bad code]

    Update: Working answer from Rob Apodaca below.

    Read the article

  • Can a 32-bit RHEL4 userland work with a 64-bit kernel?

    - by James
    Is there a way to change an i386 RHEL4 machine to run an amd64 kernel, but ensure that it still builds software into the same i386 binaries? On Debian this seems quite straightforward: just install an amd64 kernel (worst case, build one like this guy: http://www.debian-administration.org/users/jonesy/weblog/1) and prefix everything with "linux32". Then everything that considers uname -m will be unchanged; I just need to handle the few cases that consider uname -r. What is the Red Hat equivalent? Is the only way a full 64-bit installation on another disk and then chrooting back to the 32-bit system before anyone builds anything? (Even the best examples of that seem to be Debian-based.)

    Background: We make a large system that runs on (a variant of) i386 RHEL4. However, some of the larger RHEL build machines now have enough RAM that they might benefit from going 64-bit (for the kernel and maybe some of the bigger build steps). Our build system doesn't support cross-compilation.
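
    For illustration, a sketch of the uname behaviour the question hinges on (linux32 is a thin alias for setarch; it changes the reported machine type but not the kernel release):

        uname -m           # x86_64 when the 64-bit kernel is running
        linux32 uname -m   # typically reports i686, so 'uname -m' based builds behave as before
        uname -r           # still the running 64-bit kernel release either way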

    Read the article

  • Distributing a bundle of files across an extranet

    - by John Zwinck
    I want to be able to distribute bundles of files, about 500 MB per bundle, to all machines on a corporate "extranet" (which is basically a few LANs connected using various private mechanisms, including leased lines and VPN). The total number of hosts is roughly 100, and the goal is to get a copy of the bundle from one host onto all the other hosts reliably, quickly, and efficiently.

    One important issue is that some hosts are grouped together on single fast LANs, in which case the network I/O should be done once from one group to the next and then within each group between all the peers. This is as opposed to a strict central-server system where multiple hosts might each fetch the same bundle over a slow link, rather than once via the slow link and then between each other quickly.

    A new bundle will be produced every few days, and occasionally old bundles will be deleted (but that problem can be solved separately). The machines in question happen to run recent Linuxes, but bonus points will go to solutions which are at least somewhat cross-platform (in which case the bundle might differ per platform, but maybe the same mechanism can be used).

    That's pretty much it. I'm not opposed to writing some code to handle this, but it would be preferable if it were one of bash, Python, Ruby, Lua, C, or C++.
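
    One way to sketch the two-tier copy with nothing but rsync over ssh; the host and bundle names are placeholders, and a designated relay host per LAN is an assumption of the sketch rather than something the setup mandates:

        # Tier 1: one designated host per LAN pulls the bundle across the slow link.
        rsync -av --partial origin-host:/bundles/bundle-001/ /bundles/bundle-001/

        # Tier 2: every other host on that LAN pulls from its local relay.
        rsync -av relay-host:/bundles/bundle-001/ /bundles/bundle-001/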

    Read the article

  • PHP Extension using libtidy compiles, but does not load

    - by ewokker
    I wrote an extension in C++ that uses libtidy, and it runs perfectly under a PHP that I compiled --with-tidy. However, it would be nice to have the extension run on a vanilla PHP. When I try to use the extension there, I get something like:

        PHP Warning:  PHP Startup: Unable to load dynamic library 'extension.so': undefined symbol: tidyCleanAndRepair in Unknown on line 0

    and the extension is not loaded. Obviously, the official tidy extension works fine. I have the relevant libtidy development packages installed on the system, and it compiles and links without a problem. I have tried to look through the code of the tidy extension, but it is a huge mass of macros - copying pieces at random felt like cargo-cult coding. Besides linking to the library with PHP_ADD_LIBRARY_WITH_PATH(tidy, $TIDY_LIBDIR, TIDY_SHARED_LIBADD), is there a PHP extension or C statement that fixes this error? Thanks in advance!!
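
    As a quick diagnostic sketch: an undefined-symbol error at load time usually means the compiled module never ended up linked against libtidy at all (for instance when TIDY_SHARED_LIBADD is not substituted into the generated Makefile). The module path below is a placeholder for wherever the phpize build drops extension.so:

        # Does the module record a runtime dependency on libtidy?
        ldd modules/extension.so | grep -i tidy

        # Is tidyCleanAndRepair listed as undefined (U) rather than resolved?
        nm -D modules/extension.so | grep tidyCleanAndRepair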

    Read the article

  • Does O2Micro Oz776 Smartcard reader support SLE5528 cards?

    - by Proton
    Well, the title seems to indicate I'm a lazy guy, but that's not the truth... I've been messing with this weird problem for a whole day. My laptop is a Dell Latitude D630, which has an Oz776 (USB idVendor == 0x0b97, idProduct == 0x7772) smartcard reader, but I'm not sure it is functioning well. It can successfully recognize my Gemplus GPK4000 smartcard and my SIM card, but not an SLE5528. This is my pcscd log when I insert the GPK4000:

        06039271 ifdhandler.c:1043:IFDHPowerICC() action: PowerUp, usb:0b97/7772:libhal:/org/freedesktop/Hal/devices/usb_device_b97_7772_noserial_if0 (lun: 0)
        00000100 -> 000000 62 00 00 00 00 00 14 01 00 00
        00967744 <- 000000 80 0A 00 00 00 00 14 00 00 00 3B 27 00 80 65 A2 0C 01 01 37
        00000048 ATR: 3B 27 00 80 65 A2 0C 01 01 37
        00000013 atrhandler.c:102:ATRDecodeAtr() Conv: 01, Y1: 02, K: 07
        00000011 atrhandler.c:120:ATRDecodeAtr() TA1: FFFFFFFF, TB1: 00, TC1: FFFFFFFF, TD1: FFFFFFFF
        00000011 atrhandler.c:248:ATRDecodeAtr() CurrentProtocol: 1, AvailableProtocols: 1
        00000062 eventhandler.c:429:EHStatusHandlerThread() Card inserted into O2 Micro Oz776 00 00
        00000014 Card ATR: 3B 27 00 80 65 A2 0C 01 01 37
        29016873 eventhandler.c:361:EHStatusHandlerThread() Card Removed From O2 Micro Oz776 00 00

    This is the log when I insert an SLE5528 card:

        99999999 ifdhandler.c:1043:IFDHPowerICC() action: PowerUp, usb:0b97/7772:libhal:/org/freedesktop/Hal/devices/usb_device_b97_7772_noserial_if0 (lun: 0)
        00000048 -> 000000 62 00 00 00 00 00 11 01 00 00
        ** Then it chokes here; when I remove the card, the log continues **
        04741980 <- 000000 80 00 00 00 00 00 11 42 FE 00
        00000044 commands.c:225:CmdPowerOn Card absent or mute
        00000017 ifdhandler.c:1096:IFDHPowerICC() PowerUp failed
        00000082 eventhandler.c:429:EHStatusHandlerThread() Card inserted into O2 Micro Oz776 00 00
        00000021 eventhandler.c:443:EHStatusHandlerThread() Error powering up card.
        00402818 eventhandler.c:361:EHStatusHandlerThread() Card Removed From O2 Micro Oz776 00 00

    I found that the SLE5528 is ISO 7816 compatible and should have an ATR, but it just chokes at the PowerUp. When I insert any PVC card with no chip, or an AT24C01 card, it doesn't choke but reports an immediate PowerUp failure. When I tried it on Windows (Windows 7, "run as other user", smartcard login), it chokes too, while PVC cards and the AT24C01 report immediate failure.

    Read the article
