Search Results

Search found 32568 results on 1303 pages for 'linux pwns mac'.

Page 494/1303

  • Why does mutt terminate with a segmentation fault?

    - by hugemeow
    I pressed $ in order to sync the mailbox, but mutt just quit... In fact mutt doesn't quit every time I press $; it only quits sometimes. How can I find out the reason why mutt quits? Is this a bug in mutt? The error message is: Sorting mailbox... Segmentation fault. Can I use strace with mutt if I want to know what happens? Or are there better tools to find out more about the problem? Right now I replied to a mail, then pressed $, then got a segmentation fault...
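
    A minimal sketch of how one might narrow this down (standard strace/gdb usage; the trace file name is arbitrary and nothing here is specific to this mailbox):

        # Log every system call; the tail of mutt.trace shows what happened right before the crash
        strace -f -o mutt.trace mutt

        # Or allow core dumps and inspect a backtrace after reproducing the crash with $
        ulimit -c unlimited
        mutt
        gdb "$(command -v mutt)" core    # core file name may include the PID; then type 'bt' at the gdb prompt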

    Read the article

  • RTL8168B/8111B Lan card is not detected in RHEL5.1..Not finding Lan card driver for this particular

    - by Deepak Narwal
    Hello friends. My LAN card model is Realtek RTL8168B/8111B PCI-E Gigabit Ethernet NIC (NDIS 6.20). My system dual-boots Windows 7 and Red Hat 5.1. Windows 7 automatically detected this LAN card, but in Red Hat the LAN card is not detected. I have tried everywhere, e.g. the network configuration tool and neat-tui, but it does not show the LAN card. I tried Google as well, but everything I find provides Windows software for this LAN card. So can anyone tell me a link where I can download a driver for it, so I can use the internet there? Thanks a lot in advance. Deepak Narwal
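
    A rough sketch of what one might try (this assumes the out-of-tree r8168 source tarball from Realtek's site plus the kernel-devel and gcc packages; the version number below is only an example):

        # Confirm the NIC is visible on the PCI bus even though no driver claimed it
        lspci | grep -i ethernet

        # Build and load the vendor driver
        tar xjf r8168-8.048.00.tar.bz2
        cd r8168-8.048.00
        make clean modules && make install
        depmod -a
        modprobe r8168
        ip link show    # the new interface should now be listed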

    Read the article

  • idle processes and high memory bad? uwsgi/django

    - by JimJimThe3rd
    I have a VPS with 256MB of RAM. I'm running nginx, uwsgi and postgresql on Ubuntu 12.04 for a soon-to-be Django site. About 200MB of RAM is being used despite the website not being active; the uwsgi processes seem to just be idling. Is this bad? I once heard that having a bunch of free memory isn't necessarily a good metric, because memory that is in use can often be freed up easily. I mean, it is possible that the server is storing commonly used "stuff" in case it is accessed, but is more than happy to dump it if the RAM is needed. But I'm really not sure, hence me asking this question. If it is bad, I could set some of the application loading options for uwsgi like "cheap" or "idle" mode. Screenshot of my htop
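
    For reference, a sketch of the uWSGI options the question alludes to (the file path is hypothetical; the option names come from uWSGI's cheap/idle modes and the values are examples only):

        # Fragment for the site's uWSGI ini file, e.g. /etc/uwsgi/mysite.ini
        # start "cheap": spawn workers only when the first request arrives
        cheap = true
        # tear a worker down after 60 seconds without requests
        idle = 60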

    Read the article

  • Video desktop recording and multiple WM displays, capturing nonactive display

    - by okobaka
    Two WM sessions are running on one local machine; the WM is Fluxbox. I am using ffmpeg to record the desktop:

        ffmpeg -an -f x11grab -s 1920x1080 -r 25 -i :1.0 -sameq /tmp/video.mkv

    On one display everything works great, but not when I have another WM display started with startx -- :1. What I am doing right now is to switch with Ctrl+Alt+F8 to display :1.0 and start recording with ffmpeg. Everything is fine until I switch back with Ctrl+Alt+F7 to display :0.0; the WM and the captured video image freeze, but when I switch back with Ctrl+Alt+F8 to display :1.0, it unfreezes and continues recording. So, how do I make display :1.0 not freeze while I am on display :0.0? Tested some more: open [display 0.0], open [display 0.1] from [display 0.0], open [display 0.2]: same problem. For different users and the same user the results are the same; ffmpeg keeps recording the paused image. It looks like the WM root window needs to be active in order to be recorded.
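
    One workaround worth trying (my assumption, not something from the question) is to run the second session inside a nested X server such as Xephyr, whose framebuffer keeps updating even when its host display does not own the active VT; a rough sketch:

        # From the first session, start a nested X server and a Fluxbox session inside it
        Xephyr :1 -screen 1920x1080 &
        DISPLAY=:1 fluxbox &
        # x11grab now reads the nested display, which does not depend on VT switching
        ffmpeg -an -f x11grab -s 1920x1080 -r 25 -i :1.0 -sameq /tmp/video.mkv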

    Read the article

  • Empty /var/log after running cron bash script

    - by Ortix92
    I wrote a little bash script and all of a sudden my /var/log folder is completely empty, except for the log I created for the bash script. This is the script I'm running every hour with cron:

        #!/bin/bash
        STL_DIR=/path/to/some/folder/i/hid
        LOGFILE=/var/log/stl_upload.log
        now=`date`
        echo "----------Start of Transmission----------" 2>&1 | tee -a $LOGFILE
        echo "Starting transfer at $now" 2>&1 | tee -a $LOGFILE
        rsync -av -e ssh $STL_DIR [email protected]:/users/path/folder 2>&1 | tee -a $LOGFILE
        echo "----------End of transmission----------" 2>&1 | tee -a $LOGFILE
        printf "\n" 2>&1 | tee -a $LOGFILE

    I want to be clear that I'm not 100% certain this is related to the empty log folder. So if anyone could give me a pointer as to what could be going on and why my log folder is empty, that'd be great.
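
    A couple of generic checks that might help narrow this down (standard commands, nothing specific to this script):

        # See whether any other cron job or script mentions /var/log before blaming this one
        grep -r "/var/log" /etc/cron* /var/spool/cron 2>/dev/null
        # Directory timestamps hint at when the logs disappeared
        ls -la /var/log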

    Read the article

  • When to use Squid on the server side?

    - by ajsie
    So I have set up Apache serving my PHP pages. I read about Squid but don't understand why/how I should use it to speed up my web server. From what I've learned, Squid sits in the same network (or another one) and caches content requested by the web browsers; then when another web browser wants the same page, Squid returns the page cached locally, so it never sends a request to the Apache server (faster response time for the client, and reduced load for the server). So it seems that Squid is for the client side (web browser) and has nothing to do with the server side (Apache). But then some people tell others how they have sped up Apache using Squid, so I'm confused. Could Squid be used on the server side too? And how would it work?
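
    For illustration, a sketch of the server-side arrangement usually meant here: Squid running as a reverse proxy (accelerator) in front of Apache. The ports and hostname below are examples, not from the question; Apache is assumed to have been moved to port 8080.

        # Fragment of squid.conf: Squid answers on port 80 and caches what the origin Apache produces
        http_port 80 accel defaultsite=www.example.com
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache

    With something like this in place, repeat requests for cacheable pages are served from Squid's cache and never reach Apache.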

    Read the article

  • Journaled filesystems and power failure

    - by Yoga
    I heard that even journaled filesystems such as ext3/ext4 might get corrupted during a power failure. E.g. from Wikipedia [1]: "In the event of a system crash or power failure, such file systems are quicker to bring back online and less likely to become corrupted." Can anyone provide more detail by giving examples of when corruption can occur and when corruption is avoided by journaled filesystems? [1] http://en.wikipedia.org/wiki/Journaling_file_system

    Read the article

  • Cygwin unable to compile

    - by christine
    I just downloaded Cygwin; I've never used it before because I've always used PuTTY. Cygwin is not letting me compile; I can see the files but it just doesn't let me compile and I do not understand why. Am I doing something wrong? This is what's going on:

        Christine@Christine-PC ~
        $ ls
        8.6.c   a.b.c  a.c.c  core    new 2.txt  test.c
        9.13.c  a.c    a.out  days.c  new2.c     test.txt
        Christine@Christine-PC ~
        $ gcc a.c.c
        -bash: gcc: command not found
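
    For what it's worth, "gcc: command not found" usually just means the compiler package is not installed; a sketch, assuming Cygwin's gcc-core package gets added through setup.exe:

        # After re-running Cygwin's setup.exe and selecting the gcc-core package
        # (plus gcc-g++ if C++ is needed):
        which gcc            # should now print /usr/bin/gcc
        gcc a.c.c -o a.out   # compile one of the files again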

    Read the article

  • Downloading multiple files with wget and handling parameters

    - by coure2011
    How can I download multiple files using wget? I also want to rename the files. Here are the commands I'm running one by one (copy/paste in the terminal):

        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720774/PS11.rar -O part11.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812721094/PS12.rar -O part12.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720804/PS13.rar -O part13.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720854/PS14.rar -O part14.rar
        ........ and so on..

    What can I do to download all these files one by one?
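
    A minimal sketch of one way to script this (the list file name and its two-column format are assumptions, not from the question):

        # urls.txt (hypothetical) holds one "URL output-name" pair per line, e.g.
        #   http://www.filesonic.com/file/812720774/PS11.rar part11.rar
        while read -r url name; do
            wget -c --load-cookies cookies.txt "$url" -O "$name"
        done < urls.txt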

    Read the article

  • nginx: php-fastcgi running but php files not executing

    - by Daniel
    I have recently set up an nginx server with PHP running as a FastCGI process. The server is serving HTML files fine, however PHP files are downloading instead of being displayed, and PHP code is not processed. This is what I have in nginx.conf:

        server {
            listen 80;
            server_name pubserver;
            location ~ \.php$ {
                root /usr/share/nginx/html;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    The command netstat -tulpn | grep :9000 displays the following, which indicates php-fastcgi is running and listening on port 9000:

        tcp  0  0 127.0.0.1:9000  0.0.0.0:*  LISTEN  2663/php-cgi

    If it's of any importance, my server is running CentOS 6 and I installed nginx and PHP using the repositories from The Fedora Project.
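
    A few sanity checks one might run after editing the config (standard nginx and curl commands, nothing specific to this setup):

        nginx -t                              # syntax check; also prints which nginx.conf is actually being loaded
        nginx -s reload                       # reload so the edited server block takes effect
        curl -I http://localhost/index.php    # response headers should show text/html, not an attachment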

    Read the article

  • External routing for local interfaces in a virtualized network

    - by Arkaitz Jimenez
    Current setup:

        br0|
           |-- tun10 -pipe- tun0 (192.240.240.1)
           |-- tun11 -pipe- tun1 (192.240.240.2)
           |-- tun12 -pipe- tun2 (192.240.240.3)

    The pipe program is a custom program that forwards data back-to-back between two tun interfaces. The idea is putting two programs on .2 and .3 while keeping .1 as the local interface on the current machine. The main problem is that I want to route packets to .2 and .3 through .1 and br0, but as they are local interfaces, the kernel ignores any routing instruction; it just delivers the packet to the proper interface. I tried iptables, but the nat table doesn't even see ping packets to those ifaces. A "ping 192.240.240.2" delivers an ICMP packet with source and destination .2 to tun1; ideally it should deliver a packet with source .1 and destination .2 at tun1, going through tun0-br0-tun1. Any hint? Here is the output of some commands: Output
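
    One approach that often comes up for this class of problem (my suggestion, not something stated in the question) is to move the far ends into network namespaces, so their addresses are no longer "local" to the host and normal routing applies; a rough sketch:

        # Move tun1 into its own namespace so 192.240.240.2 stops being a local address
        ip netns add peer2
        ip link set tun1 netns peer2
        ip netns exec peer2 ip addr add 192.240.240.2/24 dev tun1
        ip netns exec peer2 ip link set tun1 up
        # Repeat for tun2/.3; traffic from the host to .2 now follows the routing table instead of local delivery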

    Read the article

  • Disable MathML output of eLyXer

    - by Gryllida
    eLyXer is a standalone LyX to HTML converter. In the resulting file, equations are formatted as MathML, and the file itself starts with an XML tag. This causes two problems: LibreOffice does not read the XML file (it can read HTML files, but not XHTML), and I am unable to copy and paste the equations into a document editor such as LibreOffice with the goal of subsequent conversion to .doc, because .doc files do not support MathML. The eLyXer help page mentions an option to only use simple math, but there is no option to output math equations as images. I have already set Document > Settings > Output > Math equations > Format: images in LyX, which presumably is saved in the lyx document somewhere. A web search did not come up with any solutions.

    Read the article

  • Post compiled php 5.4 curl installation

    - by user140657
    I recently compiled PHP 5.4 from source on CentOS 6. I used this configuration:

        # ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql
        # make
        # make install
        # cp php.ini-dist /usr/local/lib/php.ini

    I realize now that I do not have cURL installed. I don't know how to install cURL after a compiled installation of PHP. Using yum install php-curl installs cURL for PHP 5.3. I tried this already, with an Apache restart, and it did not show up in my phpinfo file. How do I install cURL under these circumstances?
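
    With a from-source build, cURL support is normally switched on at configure time; a sketch, assuming the libcurl development headers are available (on CentOS 6 the package is libcurl-devel) and the PHP source tree is still around (its path below is hypothetical):

        yum install libcurl-devel        # headers needed by --with-curl
        cd /usr/local/src/php-5.4.x      # hypothetical location of the PHP source tree
        ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-curl
        make && make install
        # restart Apache, then look for the "curl" section in phpinfo()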

    Read the article

  • How to prevent unison from syncing a file while it is still being uploaded

    - by user134600
    I use CentOS 5.8 Final. The situation is that I run unison from cron, with the entry below:

        */1 * * * * /usr/bin/unison > /dev/null 2>&1

    and a default profile like below:

        root = /var/www
        root = ssh://web02.example.com//var/www
        auto=true
        batch=true
        confirmbigdel=true
        fastcheck=true
        group=true
        owner=true
        prefer=newer
        silent=true
        times=true

    So every minute the www folder is synchronized. My problems are: I upload a file bigger than 10 MB to www from a client as user1, where the www folder is owned by user1; while the upload is still in progress, unison runs in that minute and suddenly the uploaded file's owner is changed to root:root. Likewise, when I edit a file in the www folder and save it while unison is running, the file owner is changed to root:root where it should be user1:user1. Does anyone know about this problem?
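
    One pattern that avoids the race altogether (my suggestion, not from the question) is to upload into a staging directory outside the replica and move the finished file into /var/www in a single atomic rename:

        # Staging path is hypothetical; it must sit on the same filesystem as /var/www
        UPLOAD_TMP=/var/upload-staging/bigfile.zip
        # Only after the upload has completely finished:
        mv "$UPLOAD_TMP" /var/www/bigfile.zip    # rename is atomic, so unison only ever sees a complete file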

    Read the article

  • configure server and create website without any control panel

    - by Emad Ahmed
    I am trying to configure my new server without cPanel. I've installed PHP/MySQL/Apache and it's now working fine: if you visit the server IP http://46.166.129.101/ you'll see the welcome page. I've configured my DNS too. My nameserverips file:

        [root@server]# cat /etc/nameserverips
        46.166.129.101=ns1.isellsoftwares.com
        46.166.129.101=ns2.isellsoftwares.com

    If you visit this link http://ns1.isellsoftwares.com you'll see the welcome page too!! But if you visit isellsoftwares.com you'll see ('Firefox can't find the server at www.isellsoftwares.com.'). Now my question is: how do I create an account for this domain on the server? I've tried to add a VirtualHost tag in Apache:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerAlias www.isellsoftwares.com
            DocumentRoot /var/www/html/issoft
            ServerName isellsoftwares.com
            ErrorLog logs/dummy-host.example.com-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>

    It is still not working... I've also added a named zone file for this domain (isellsoftwares.com.db):

        ; Zone file for isellsoftwares.com
        $TTL 14400
        isellsoftwares.com. 86400 IN SOA ns1.isellsoftwares.com. elsolgan.yahoo.com. (
                2012031500 ;Serial Number
                86400      ;refresh
                7200       ;retry
                3600000    ;expire
                86400 )    ;minimum
        isellsoftwares.com. 86400 IN NS ns1.isellsoftwares.com.
        isellsoftwares.com. 86400 IN NS ns2.isellsoftwares.com.
        isellsoftwares.com. 14400 IN A 46.166.129.101
        localhost 14400 IN A 127.0.0.1
        isellsoftwares.com. 14400 IN MX 0 isellsoftwares.com.
        mail 14400 IN A 46.166.129.101
        www 14400 IN CNAME isellsoftwares.com.
        ftp 14400 IN A 46.166.129.101
        cpanel 14400 IN A 46.166.129.101
        whm 14400 IN A 46.166.129.101
        webmail 14400 IN A 46.166.129.101
        webdisk 14400 IN A 46.166.129.101
        ns1 14400 IN A 46.166.129.101
        ns2 14400 IN A 46.166.129.101

    But it is still not working!!!!! So, what else should I do?
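
    A couple of checks that might help separate a DNS problem from an Apache problem (standard dig/apachectl usage, nothing specific to this server):

        dig @46.166.129.101 isellsoftwares.com A     # does the server itself answer authoritatively for the zone?
        dig isellsoftwares.com NS +short             # is the domain actually delegated to ns1/ns2 at the registrar?
        apachectl -S                                 # list the virtual hosts Apache has actually loaded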

    Read the article

  • How to automatically mount a LUKS partition only when the disk is plugged in

    - by Frederick Roth
    I have the following scenario: I want to automatically back up some data from my laptop (Fedora Core 17) to an external encrypted (LUKS) hard disk. The disk can be opened with a key file, which lies on the (also encrypted) root partition of my laptop. The hard disk is attached to my docking station and is therefore only present when I am at home (which is approximately half of the time the laptop runs). I have everything set up the way I want it, with one exception: I don't have a decent way to mount the hard disk automatically at boot if and only if it is present. If I add it to crypttab and fstab without noauto, it tries to mount it at boot and takes a lot(!) of time and error messages when it is not present. If I add noauto, well, it does not mount automatically ;) Is there a way to configure LUKS/crypttab to do the following: check whether the disk is present; if yes, decrypt and mount; if no, just don't?
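
    One thing that may be worth trying (an assumption on my part, not from the question) is marking both entries nofail, so boot simply skips them when the disk is absent instead of waiting and erroring; a sketch with placeholder device paths:

        # /etc/crypttab (UUID and key file path are placeholders)
        backup  UUID=xxxx-xxxx-xxxx  /root/backup.key  luks,nofail

        # /etc/fstab
        /dev/mapper/backup  /mnt/backup  ext4  defaults,nofail  0 2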

    Read the article

  • Estimating compressed file size using a list parameter

    - by Sai
    I am currently compressing a list of files from a directory in the following format:

        tar -cvjf test_1.tar.gz -T test_1.lst --no-recursion

    The above command will compress only those files mentioned in the list. I am doing this because the list is generated such that it fits on a DVD. However, during compression the compression rate decreases the estimated file size, and there is abundant space left on the DVD. This is something like a knapsack algorithm. I would like to estimate the compressed file size and add some more files to the list. I found that it is possible to estimate the file size using the following command:

        tar -cjf - Folder/ | wc -c

    This command does not take a list parameter. Is there a way to estimate the compressed file size? I am also looking into options like Perl scripts etc. Edit: I think I should provide more information, since I have been doing a lot of web searching. I came across a Perl script (link) that sort of emulates the knapsack algorithm. The current problem with the above-mentioned script is that it splits the files in their original state. When I compress the files after splitting them, there are opportunities for adding more files, which I consider inefficient. There are two ways I could resolve the inefficiency: a) Compress individual files and save them in a directory using a script; the compressed files could provide a best estimate, and I could generate a script using a folder of compressed files and use them on the uncompressed ones. b) Check whether the compressed file's size is less than the required size; if so, I should keep adding files until I meet the requirement. However, the addition of new files to the compressed file is an optimization problem by itself.
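
    For what it's worth, GNU tar does accept -T together with output to stdout, so the same estimation trick can be pointed at the list file (a small observation about the commands above, not a solution to the knapsack part):

        # Estimate the compressed size of exactly the files named in the list
        tar -cjf - -T test_1.lst --no-recursion | wc -c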

    Read the article

  • Monitor open files limits, etc

    - by marcog
    We've been hitting the max open files limit on a few services recently. There are also a bunch of other limits in place. Is there a way to monitor how close processes are to these limits, so we can be alerted when it's time to either raise the limits or fix the root cause? On the same note, is it possible to view a log of these events, so that when a crash occurs we know whether it was caused by hitting one of these limits?
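
    A minimal sketch of the kind of per-process check such monitoring could poll (the PID is a placeholder; the /proc paths are standard on Linux):

        pid=1234                                   # hypothetical PID of the service being watched
        echo "open fds: $(ls /proc/$pid/fd | wc -l)"
        grep 'Max open files' /proc/$pid/limits    # the limit that count is compared against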

    Read the article

  • How to make a PHP crontab silent

    - by BandonRandon
    I set up a crontab in cPanel to run every minute. It's working great, but I don't want an e-mail every minute. I have a second crontab that runs every day; I would like the output of that one. Is there a way to tell the crontab to be silent, or to only e-mail on error? I have:

        * * * * * php /home/public_html/folder/file.php 2>&1

    The last bit, 2>&1, I added because I thought it would make it silent. From the cPanel docs: "You can have cron send an email every time it runs a command. If you do not want an email to be sent for an individual cron job you can redirect the command's output to /dev/null like this: mycommand >/dev/null 2>&1"
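
    A sketch of the two usual variants, reusing the path from the question (cron mails whatever a job prints, so what gets redirected decides what gets mailed):

        # Discard normal output but still get a mail if the script writes to stderr
        * * * * * php /home/public_html/folder/file.php > /dev/null
        # Discard everything: no mail at all
        * * * * * php /home/public_html/folder/file.php > /dev/null 2>&1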

    Read the article

  • send outgoing email via postfix from mail client

    - by Ey Jay
    I have installed Postfix on my Ubuntu server hosted on DigitalOcean. What I want to do: with my SMTP server set up, I want to use it to send mail from my email client. I don't need to receive, I just need to send. I can telnet example.com 25 successfully and I received the email in my inbox, but when I try it from an email client with

        smtp: example.com:25
        user: smtp1user
        password: smtp1userpassword

    I get an error that says "Server doesn't respond. Try changing the port." I don't know how to proceed.
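
    One frequent cause (my assumption, not stated in the question) is that the client's network blocks outbound port 25; a sketch of enabling Postfix's submission listener on port 587 instead, based on the stock commented-out block in /etc/postfix/master.cf:

        # /etc/postfix/master.cf: enable the submission service (port 587)
        submission inet n       -       n       -       -       smtpd
          -o syslog_name=postfix/submission
          -o smtpd_tls_security_level=may
          -o smtpd_sasl_auth_enable=yes
        # then run "postfix reload" and point the mail client at port 587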

    Read the article

  • How to reverse-i-search back and forth?

    - by m-ric
    I use reverse-i-search often, and that's cool. Sometimes, though, when pressing Ctrl+r multiple times, I pass the command I am actually looking for. Because Ctrl+r searches backward in history, from newest to oldest, I have to cancel, search again, and stop exactly at the command without passing it. While in the reverse-i-search prompt, is it possible to search forward, i.e. from where I stand towards the newest entries? I naively tried Ctrl+Shift+r, no luck. I heard about Ctrl+g, but that is not what I am expecting here. Anyone have an idea?
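
    For background (general readline/terminal behaviour, not something stated in the question): readline binds forward-search-history to Ctrl+s, but most terminals intercept Ctrl+s for flow control, so it has to be freed first; a minimal sketch:

        # Disable XON/XOFF flow control for this shell, e.g. from ~/.bashrc
        stty -ixon
        # After that, Ctrl+s searches the history forward the way Ctrl+r searches backward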

    Read the article

  • Which is the fastest way to move 1 petabyte from one storage system to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something I should solve by myself, but as you will see it is a bit difficult. A small description: Now: storage = 1PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10GigE network (Lustre 1.6); 100 compute nodes with 2x InfiniBand; 1 InfiniBand switch with 36 ports. After: storage = the previous storage plus another 1PB using DDN S2A 990 or LSI E5400 (still to be decided), Lustre 2.0, 8 OSS, 10GigE network; 100 compute nodes with 2x InfiniBand. Previous experience: transferred 120TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, the big problem here: using big mathematical equations I conclude that we are going to need one month to transfer the data from one side to the other. During this time the researchers will have to step back, and I'm personally not happy with this. I'm mentioning that we have InfiniBand connections because I think there may be a chance to use them, with 18 compute nodes (18 * 2 IB = 36 ports), to transfer the data from one storage system to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it just burns up, it will still go faster than using 10GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well, so there is no need to go through 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks. Note 1: Zoredache, we can divide it into two blocks, (A) 600TB and (B) 400TB. The idea is to move (A) to the new storage, which is Lustre 2.0 formatted, then reformat where (A) was with Lustre 2.0, move (B) to this Lustre 2.0 block, and extend it with the space where (B) was. This way we will end up with (A) and (B) on separate filesystems, with 1PB each.
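
    One generic way to push throughput beyond a single stream (a sketch under my own assumptions, not a recommendation specific to this cluster) is to run several tar pipes in parallel, one per top-level directory, e.g. with GNU parallel:

        # Eight transfer streams in flight at once; directory list and job count are illustrative
        ls /old | parallel -j 8 'tar -C /old -cf - {} | tar -C /new -xf -'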

    Read the article
