Search Results

Search found 24624 results on 985 pages for 'linux rrt'.


  • Low-overhead way to access the memory space of a traced process?

    - by vovick
    I'm looking for an efficient way to access (for both read and write operations) the memory space of my ptraced child process. The blocks being accessed may range from a few bytes up to several megabytes, so the ptrace call with PTRACE_PEEKDATA and PTRACE_POKEDATA, which transfers only one word at a time and incurs a context switch on every call, seems like a pointless waste of resources. The only alternative solution I could find is the /proc/<pid>/mem file, but it has long since been made read-only. Is there any other (relatively simple) way to do this job? The ideal solution would be to somehow share the address space of my child process with its parent and then use a simple memcpy call to copy the data I need in both directions, but I have no clue how to do that or where to begin. Any ideas?
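    One possible direction, sketched below under the assumption that the kernel is 3.2 or newer and the caller already has ptrace permission over the child: process_vm_readv (and its counterpart process_vm_writev) copies arbitrarily large regions between address spaces in a single call, without stopping the tracee. The function name read_remote and the error handling are illustrative, not from the question.

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sys/types.h>
        #include <sys/uio.h>     /* process_vm_readv / process_vm_writev */

        /* Copy len bytes from remote_addr in process pid into buf.
         * Returns the number of bytes copied, or -1 on error. */
        ssize_t read_remote(pid_t pid, void *remote_addr,
                            void *buf, size_t len)
        {
            struct iovec local  = { .iov_base = buf,         .iov_len = len };
            struct iovec remote = { .iov_base = remote_addr, .iov_len = len };
            return process_vm_readv(pid, &local, 1, &remote, 1, 0);
        }

    Writes work the same way through process_vm_writev with the local and remote iovecs swapped; both calls are subject to the same permission checks as attaching with ptrace.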

    Read the article

  • Strange mod_rewrite problem; Website works partially

    - by Camran
    I have Ubuntu 9.10 Server and I need to get mod_rewrite working; the mod_rewrite module IS loaded. On my server httpd.conf is empty; instead, (almost) everything is in a file called apache2.conf. I have also read that I have to change AllowOverride None to AllowOverride All in some file. My httpd.conf is empty, as you know, but I have a folder called sites-enabled which contains a 000-default file, and that is where I have set AllowOverride All. Now my goal, as I stated in the previous question, is to turn this link: http://mydomain.com/ad.php?ad_id=Bmw_nice_M3_497379462 into this: http://mydomain.com/Bmw_nice_M3_497379462 So, following the answer I got there, I inserted this into the .htaccess file:

        Options +FollowSymLinks
        Options +Indexes
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/ad\.php
        RewriteRule ^(.*)$ ad.php?ad_id=$1 [L]

    Now, this works (though not fully) when entering the URL manually in the address bar, but my website isn't working any more for some reason. It is as if the website is locked down, and it stays that way unless I change AllowOverride back to None. Any ideas why? Another note: links inside the rewritten page don't work properly (some images are shown, others are not).

    Read the article

  • Error: [0.8879153] kernel panic - not syncing: VFS unable to mount fs, unknown block (8.3)

    - by Fiasco
    I installed Ubuntu using Wubi inside Windows and started working on it; then, after updating, I got this error: [0.8879153] kernel panic - not syncing: VFS unable to mount fs, unknown block (8.3). I can't use rescue mode either; it gives me another error: no filesystem could mount root. I looked at the grub folder and didn't find any file under disks/boot/grub/, so I tried to use Super Grub to fix it, but it didn't work and it keeps giving me the same error. Any ideas, please?

    Read the article

  • Same memory space being allocated again & again

    - by shadyabhi
    In each loop iteration the variable j is declared anew, so why does its address stay the same? Shouldn't it be given a new address each time? Is this compiler dependent?

        #include <stdio.h>
        #include <malloc.h>

        int main()
        {
            int i = 3;
            while (i--) {
                int j;
                printf("%p\n", &j);
            }
            return 0;
        }

    Test run:

        shadyabhi@shadyabhi-desktop:~/c$ gcc test.c
        shadyabhi@shadyabhi-desktop:~/c$ ./a.out
        0x7fffc0b8e138
        0x7fffc0b8e138
        0x7fffc0b8e138
        shadyabhi@shadyabhi-desktop:~/c$
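    What follows is a minimal sketch of why this happens (illustrative only; the C standard makes no promise at all about which addresses automatic variables get). Storage for a block-scoped variable only has to exist while its block is executing, so the compiler is free to hand the very same stack slot to the next iteration, or to a different variable whose lifetime does not overlap:

        #include <stdio.h>

        int main(void)
        {
            {
                int a;
                printf("a lives at %p\n", (void *)&a);
            }   /* a's lifetime ends here */
            {
                int b;   /* the compiler may reuse a's stack slot for b */
                printf("b lives at %p\n", (void *)&b);
            }
            return 0;
        }

    On a typical unoptimised build both lines print the same address, which is the same effect seen in the loop above.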

    Read the article

  • Is git revert broken?

    - by sabgenton
    The following pastebin is a repo containing one file with the words one, two, three, four, five typed on separate lines. Each line was committed separately into git: http://pastebin.ca/raw/2136179 I then tried to delete the line two with the command:

        git revert <commit which creates two>

    and got:

        error: could not revert b4e0a66... second
        hint: after resolving the conflicts, mark the corrected paths
        hint: with 'git add <paths>' or 'git rm <paths>'
        hint: and commit the result with 'git commit'

    Surely there should be no conflict for something this simple? Or am I doing it wrong / using the wrong command? The merge details don't seem to make sense either:

        one
        <<<<<<< HEAD
        two
        three
        four
        five
        =======
        >>>>>>> parent of b4e0a66... second

    Isn't that saying delete everything but one? I was expecting only two to be affected... git 1.7.10

    Read the article

  • Remove the content in a directory and its subdirectories hierarchically without destroying the directory structure

    - by user3713876
    In a shell script, I want to clear only the text files and log files in the following structure, without removing the directory or its subdirectories:

        bar/
        |---file1.txt
        |---file2.txt
        |---subdir1/
        |   |---file1.log
        |   |---file2.log
        |---subdir2/
            |---image1.log
            |---image2.log

    I am using rm -rf /bar/*, so I am getting the following result:

        bar/

    but I want output like the following:

        bar/
        |---subdir1/
        |---subdir2/

    I want to remove only the text files, log files, or csv files, without removing the directory and the subdirectories.

    Read the article

  • Memory Regions displayed in SMAPS output with no permissions

    - by crissangel
    If I look at the output of cat /proc/<pid>/smaps, I find that there are some memory regions that have no read/write/execute permissions associated with them. These regions are also mapped to inode number 0. I wanted to know how a region ends up in such a state. Is it some sort of memory leak? Can these regions ever be used again by the process?
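    Such regions are not necessarily a leak. One legitimate way they can appear, sketched below as an assumption about what the process might be doing (not taken from the question): an anonymous PROT_NONE mapping, commonly used as a guard page, shows up in smaps with permissions ---p and inode 0 because it is not backed by any file.

        #include <stdio.h>
        #include <unistd.h>
        #include <sys/mman.h>

        int main(void)
        {
            /* Reserve one page that may never be read, written or executed. */
            void *guard = mmap(NULL, 4096, PROT_NONE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (guard == MAP_FAILED) {
                perror("mmap");
                return 1;
            }
            printf("guard page at %p, pid %d\n", guard, (int)getpid());
            pause();   /* inspect /proc/<pid>/smaps from another terminal */
            return 0;
        }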

    Read the article

  • What is the problem with this code? How to solve it? (fork)

    - by sb2367
    What is the problem with this code, and how do I solve it? Which process goes into the if branch, the parent or the child? Does the first snippet produce zombie processes, the second one, both, or neither?

        #include <signal.h>
        #include <sys/wait.h>

        main()
        {
            for (;;) {
                if (!fork()) {
                    exit(0);
                }
                sleep(1);
            }
        }

    What about this code:

        #include <signal.h>
        #include <sys/wait.h>

        main()
        {
            for (;;) {
                if (fork()) {
                    exit(0);
                }
                sleep(1);
            }
        }
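    For reference, here is one common way to keep such a loop from accumulating zombies, sketched under the assumption that the goal is simply to reap dead children (one option among several, not the only fix): call waitpid() with WNOHANG on each pass, or ignore SIGCHLD so the kernel discards the exit status automatically.

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main(void)
        {
            for (;;) {
                pid_t pid = fork();
                if (pid == 0)
                    exit(0);                 /* child exits immediately       */
                if (pid < 0)
                    perror("fork");
                while (waitpid(-1, NULL, WNOHANG) > 0)
                    ;                        /* reap any children that exited */
                sleep(1);
            }
        }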

    Read the article

  • Will System.currentTimeMillis always return a value >= previous calls?

    - by 1984isnotamanual
    http://java.sun.com/j2se/1.4.2/docs/api/java/lang/System.html#currentTimeMillis() says:

        Returns the current time in milliseconds. Note that while the unit of time of the return value is a millisecond, the granularity of the value depends on the underlying operating system and may be larger. For example, many operating systems measure time in units of tens of milliseconds.

    It is not clear to me if I am guaranteed that this code will always print ever increasing (or the same) numbers.

        while (1) {
            System.out.println(System.currentTimeMillis());
        }

    Read the article

  • How do I write a bash script to replace words in files and then rename files?

    - by Jason
    I have a folder structure, and I need to create a bash script that does four things:
    1. It searches all the files in the generic directory, finds the string 'generic', and makes it into 'something'
    2. As above, but changes "GENERIC" to "SOMETHING"
    3. As above, but changes "Generic" to "Something"
    4. Renames any filename that has "generic" in it to use "something" instead
    Right now I am doing this process manually, using search and replace in NetBeans. I don't know much about bash scripting, but I'm sure this can be done. I'm thinking of something that I would run and that would take "Something" as the input. Where would I start? What functions should I use? Overall guidance would be great. Thanks. I am using Ubuntu 10.5 desktop edition.

    Read the article

  • Program using read() enters an infinite loop

    - by Soham
        void ReadBinary(char *infile, HXmap* AssetMap)
        {
            int fd;
            size_t bytes_read, bytes_expected = 100000000*sizeof(char);
            char *data;

            if ((fd = open(infile, O_RDONLY)) < 0)
                err(EX_NOINPUT, "%s", infile);

            if ((data = malloc(bytes_expected)) == NULL)
                err(EX_OSERR, "data malloc");

            bytes_read = read(fd, data, bytes_expected);

            if (bytes_read != bytes_expected)
                printf("Read only %d of %d bytes %d\n",
                       bytes_read, bytes_expected, EX_DATAERR);

            /* ... operate on data ... */
            printf("\n");
            int i = 0;
            int counter = 0;
            char ch = data[0];
            char message[512];
            Message* newMessage;
            while (i != bytes_read) {
                while (ch != '\n') {
                    message[counter] = ch;
                    i++;
                    counter++;
                    ch = data[i];
                }
                message[counter] = '\n';
                message[counter+1] = '\0';
                //---------------------------------------------------
                newMessage = (Message*)parser(message);
                MessageProcess(newMessage, AssetMap);
                //--------------------------------------------------
                //printf("idNUM %e\n", newMessage->idNum);
                free(newMessage);
                i++;
                counter = 0;
                ch = data[i];
            }
            free(data);
        }

    Here I have allocated 100 MB of data with malloc and passed in a file big enough to trigger the problem, about 926 KB in size (not 500 MB). When I pass small files, it reads and exits like a charm, but when I pass a big enough file, the program executes up to some point and then just hangs. I suspect it has either entered an infinite loop or there is a memory leak.

    EDIT: For better understanding, I stripped away all unnecessary function calls and checked what happens when a large file is given as input. I have attached the modified code:

        void ReadBinary(char *infile, HXmap* AssetMap)
        {
            int fd;
            size_t bytes_read, bytes_expected = 500000000*sizeof(char);
            char *data;

            if ((fd = open(infile, O_RDONLY)) < 0)
                err(EX_NOINPUT, "%s", infile);

            if ((data = malloc(bytes_expected)) == NULL)
                err(EX_OSERR, "data malloc");

            bytes_read = read(fd, data, bytes_expected);

            if (bytes_read != bytes_expected)
                printf("Read only %d of %d bytes %d\n",
                       bytes_read, bytes_expected, EX_DATAERR);

            /* ... operate on data ... */
            printf("\n");
            int i = 0;
            int counter = 0;
            char ch = data[0];
            char message[512];
            while (i <= bytes_read) {
                while (ch != '\n') {
                    message[counter] = ch;
                    i++;
                    counter++;
                    ch = data[i];
                }
                message[counter] = '\n';
                message[counter+1] = '\0';
                i++;
                printf("idNUM \n");
                counter = 0;
                ch = data[i];
            }
            free(data);
        }

    What it looks like is that it prints a whole lot of idNUMs and then, poof, segmentation fault. I think this is interesting behaviour, and to me it looks like there is some problem with memory.

    FURTHER EDIT: I changed the condition back to i != bytes_read and it gives no segmentation fault. When I check for i <= bytes_read, it blows past the limits in the inner loop (courtesy of gdb).
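    The likely culprit, and its usual fix, sketched here as an assumption since the parser itself is not shown: read() returns fewer bytes than bytes_expected, and the inner while (ch != '\n') loop never compares i against bytes_read, so once the scan reaches the final record or the uninitialised memory beyond it, the index keeps climbing. A bounds-checked version of the scan might look like the following; scan_records and its printf are illustrative, not part of the original program.

        #include <stddef.h>
        #include <stdio.h>

        /* Walk newline-terminated records in data[0..bytes_read) without
         * running past the end of the buffer. */
        void scan_records(const char *data, size_t bytes_read)
        {
            size_t i = 0;
            while (i < bytes_read) {
                char   message[512];
                size_t counter = 0;

                /* stop at the end of the data as well as at '\n' */
                while (i < bytes_read && data[i] != '\n' &&
                       counter < sizeof message - 2)
                    message[counter++] = data[i++];

                message[counter]     = '\n';
                message[counter + 1] = '\0';
                printf("record: %s", message);   /* parse/process here */

                if (i < bytes_read)
                    i++;                         /* step over the '\n' */
            }
        }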

    Read the article

  • How to fill a structure when a pointer to it is passed as an argument to a function

    - by Ram
    I have a function:

        func(struct passwd* pw)
        {
            struct passwd* temp;
            struct passwd* save;
            temp = getpwnam("someuser");
            /* since getpwnam returns a pointer to a static
             * data buffer, I am copying the returned struct
             * to a local struct. */
            if (temp) {
                save = malloc(sizeof *save);
                if (save) {
                    memcpy(save, temp, sizeof(struct passwd));
                    /* Here, I have to update the passed pw* with this save struct. */
                    *pw = *save;   /* (~ memcpy) */
                }
            }
        }

    The function which calls func(pw) is able to get the updated information, but is it fine to use it as above? The statement *pw = *save is not a deep copy. I do not want to copy each and every member of the structure one by one, like pw->pw_shell = strdup(save->pw_shell) etc. Is there any better way to do it? Thanks.
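    One hedged alternative, sketched below (fill_passwd and its buffer handling are illustrative names, not taken from the question): the reentrant POSIX call getpwnam_r writes the string data into a buffer the caller owns, so neither a shallow copy of the static buffer nor a member-by-member strdup is needed.

        #include <pwd.h>
        #include <stddef.h>

        /* Fill the caller's struct passwd for `name`.  The strings pointed to
         * by pw's members live in strbuf, which must stay valid as long as pw
         * is used.  Returns 0 on success, -1 if the user was not found or an
         * error occurred. */
        int fill_passwd(const char *name, struct passwd *pw,
                        char *strbuf, size_t buflen)
        {
            struct passwd *result = NULL;
            if (getpwnam_r(name, pw, strbuf, buflen, &result) != 0 || result == NULL)
                return -1;
            return 0;
        }

    A reasonable buffer size can be obtained from sysconf(_SC_GETPW_R_SIZE_MAX), falling back to a few kilobytes if that call returns -1.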

    Read the article

  • What scripts should not be ported from Bash to Python?

    - by Jack
    I decided to rewrite all our Bash scripts in Python (there are not many of them) as my first Python project. The reason is that although I am quite fluent in Bash, I feel it's a somewhat archaic language, and since our system is in the early stages of its development, I think switching to Python now is the right thing to do. Are there scripts that should always be written in Bash? For example, we have an init.d daemon script - is it OK to use Python for it? We run CentOS. Thanks.

    Read the article

  • Getting Rails to execute root-level edits on system files without compromising security

    - by voxobscuro
    I'm writing a Rails 3 application that needs to be able to trigger modifications to Unix system config files. I'd like to insulate the file modifications from the consumer side by running them in a background process. I've considered writing out a temp file in Rails and then copying it into place with a bash script, but that doesn't really insulate the system. I've also considered pulling from the database manually with a cron-based script and updating the configs. What I would really like is a component that can hook into the Rails environment, read out what is needed from the database, and update the config files. This process needs to run as root because the config files mostly live in /etc/whatever. Any suggestions? Thanks!

    Read the article

  • Where is the root? [closed]

    - by smwikipedia
    I read the manual page of the mount command, and it reads as below:

        All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The mount command serves to attach the file system found on some device to the big file tree.

    My question is: where is this "big tree" located?

    Read the article

  • Great GUI for Apache2?

    - by ajsie
    I wonder if there are good GUI management tools for Apache, so you don't have to edit files manually in Vim. It would be great if you could manage Apache over the internet. Any suggestions for such tools?

    Read the article

  • Is there a list of programs for yum?

    - by scriptingalias
    Basically, I would like to know whether there is an actual web page that can be searched for the programs available through yum. I have yumex and I've tried using it, but it's super slow to search (sometimes it takes 5 minutes), and I would like a web page or some other method of doing the search. Thanks.

    Read the article
