Search Results


  • What causes high CPU usage on the server during file upload?

    - by bosiang
    When I try to upload a huge file (approx. 2 GB), the server's CPU usage goes really high. What should I do to fix this? I just use a standard HTML form and PHP for the file upload. I'm sorry if I posted on the wrong forum; please point me in the right direction. Here is the result of the "top" command during the upload of 4 files (18 MB, 38 MB, 60 MB, 33 MB):

        1904 apache   20   0 33504 5740 1952 R 28.3 0.2 0:02.19 httpd
        1905 apache   20   0 33504 5740 1952 R 28.3 0.2 0:01.99 httpd
        1903 apache   20   0 33232 6968 3060 R 28.0 0.2 0:01.98 httpd
        1910 apache   20   0 33240 6020 2248 S 11.5 0.2 0:02.85 httpd
        2133 root     20   0  2656 1124  896 R  1.6 0.0 0:00.71 top
           1 root     20   0  2864 1404 1188 S  0.0 0.0 0:03.99 init

    Below is my code for chunking. Even though I don't use this code (just a simple file upload), it still causes that high CPU usage:

        function sendRequest() {
            // clean the screen
            //bars.innerHTML = '';
            var file = document.getElementById('fileToUpload');
            for (var i = 0; i < file.files.length; i++) {
                var blob = file.files[i];
                var originalFileName = blob.name;
                var filePart = 0;
                const BYTES_PER_CHUNK = 100 * 1024 * 1024; // 100 MB chunks (the original comment said 10 MB, but the value is 100 MB)
                var realFileSize = blob.size;
                var start = 0;
                var end = BYTES_PER_CHUNK;
                totalChunks = Math.ceil(realFileSize / BYTES_PER_CHUNK);
                alert(realFileSize);
                while (start < realFileSize) {
                    var chunk;
                    if (blob.webkitSlice) { // for Google Chrome
                        chunk = blob.webkitSlice(start, end);
                    } else if (blob.mozSlice) { // for Mozilla Firefox
                        chunk = blob.mozSlice(start, end);
                    }
                    uploadFile(chunk, originalFileName, filePart, totalChunks, i);
                    filePart++;
                    start = end;
                    end = start + BYTES_PER_CHUNK;
                }
            }
        }
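
    One way to narrow down where that CPU time goes is to profile one of the busy Apache workers directly. A small diagnostic sketch, assuming strace is installed (the PID is taken from the top output above):

        # summarize syscalls of one busy worker; interrupt with Ctrl-C after a
        # few seconds and read the time column of the summary table
        sudo strace -c -p 1904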

  • Gnome, open with, custom command, filename reference

    - by Tergiver
    I want to execute this custom command on a file from the Gnome file browser:

        hexdump -C $f > $f.dump

    That would create a hexdump of the file, named after the file plus .dump, in the directory the file lives in. By $f above I mean something that would be substituted with the name of the file that was opened. I've tried "Open with", "Use a custom command", but I can't get it to work; I've tried a number of symbols in place of $f. Is it even possible? Before you suggest getting a GUI hexdump program: this is just one example, and I need to do this sort of thing for many terminal-type programs. Am I the only person on Earth who wishes for a hybrid File-Browser-slash-Command-Terminal? That would be a file browser containing a terminal pane whose current directory always matches that of the file browser, so one could execute shell commands in the context of what one is viewing in the browser.
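
    A wrapper script is one way around the substitution problem. A minimal sketch, assuming the file browser appends the selected file's path as the first argument to the custom command (the script name is hypothetical):

        #!/bin/sh
        # hexdump-open.sh -- make it executable and point "Use a custom
        # command" at it; the selected file arrives as "$1"
        hexdump -C "$1" > "$1.dump"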

  • How to execute a command whenever a file changes?

    - by Denilson Sá
    I want a quick and simple way to execute a command whenever a file changes. I want something very simple, something I will leave running in a terminal and close whenever I'm finished working with that file. Currently, I'm using this:

        while read; do ./myfile.py; done

    Then I need to go to that terminal and press Enter whenever I save that file in my editor. What I want is something like this:

        while sleep_until_file_has_changed myfile.py; do ./myfile.py; done

    Or any other solution that is as easy as that. BTW: I'm using Vim, and I know I can add an autocommand to run something on BufWrite, but this is not the kind of solution I want now. Update: I want something simple, discardable if possible. What's more, I want something that runs in a terminal, because I want to see the program output (I want to see error messages). About the answers: thanks for all your answers! All of them are very good, and each one takes a very different approach from the others. Since I need to accept only one, I'm accepting the one that I actually used (it was simple, quick and easy to remember), even though I know it is not the most elegant.
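
    For the record, the hypothetical sleep_until_file_has_changed above maps almost one-to-one onto inotifywait. A minimal sketch, assuming the inotify-tools package is installed:

        # block until myfile.py is written and closed, then run it; repeat forever
        while inotifywait -e close_write myfile.py; do ./myfile.py; done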

  • How to Create Boot CD

    - by joe
    How do I create a boot CD for a dual-boot system? Consider that I have Windows and Ubuntu, with GRUB as the boot loader. I just want to create a dual-boot CD that performs the same operation.
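
    GRUB 2 can build such a CD itself. A minimal sketch, assuming grub-mkrescue (and its xorriso dependency) is installed, and assuming the installed menu file works unchanged when booted from CD:

        mkdir -p iso/boot/grub
        cp /boot/grub/grub.cfg iso/boot/grub/   # reuse the existing dual-boot menu
        grub-mkrescue -o bootcd.iso iso         # then burn bootcd.iso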

  • DNS caching server config problem

    - by Alex
    I have a BIND DNS caching-only server setup that is working. I am bringing up a new AD domain controller that will also be a DNS server for that AD, but I don't want it responding to any DNS queries except those that are AD-related. So, my goal is to leave this caching server as the primary DNS server for stations on the network and have it forward requests for the AD domain to the domain controller. My understanding is that I just need a forward zone for that domain pointing to the domain controller. However, it does not seem to be working, which leaves me to think that my caching server is not forwarding properly. For example, this AD is going to have a naming convention of hostname.mydomain.local. If I do an nslookup and specify the domain controller's IP address as the server, I can query addresses that exist in DNS on that server, such as dc1.mydomain.local. However, queries to my caching server time out (I get a response from the caching server if I query mydomain.local, but none of the objects in that domain). Any suggestions? Here is my named.conf file:

        options {
            directory "/var/named";
            listen-on { 192.168.0.14; 127.0.0.1; };
            forwarders { ; ; };
            forward first;
        };

        zone "." in {
            type hint;
            file "db.cache";
        };

        zone "0.0.127.in-addr.arpa" in {
            type master;
            file "db.127.0.0";
        };

        //forward zone for mydomain.local
        zone "mydomain.local" {
            type forward;
            forwarders { 192.168.1.21; };
        };
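
    To see which hop drops the query, it can help to ask each server directly. A quick diagnostic sketch, assuming dig is available (the addresses are the ones from the question):

        dig @192.168.1.21 dc1.mydomain.local   # ask the domain controller directly
        dig @192.168.0.14 dc1.mydomain.local   # ask the caching server and compare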

  • In Ubuntu I make changes to php.ini but nothing happens

    - by MrAn3
    Hi, Apache with PHP works well, but none of the changes I make in php.ini take effect. I've even deleted all the contents of the file, then restarted Apache and run phpinfo(), and surprisingly everything continues working well. The file I'm editing is the one that phpinfo() reports as "Loaded Configuration File" (/etc/php5/apache2/php.ini). P.S. I'm running Ubuntu 9.04 and PHP 5.2. Thanks in advance. More details: I'm restarting with sudo /etc/init.d/apache2 restart; I've also tried sudo /etc/init.d/apache2 stop and then start. At restart I get:

        Restarting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        ... waiting apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [ OK ]

    "which php" did not produce any results. My installation of PHP was done using the Synaptic Package Manager, choosing "Mark Packages by task" and then "LAMP server". I don't have any clue what to do...
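
    One thing worth ruling out is an override: on Debian/Ubuntu, every .ini file under the SAPI's conf.d directory is loaded after php.ini, so a directive set there silently wins. A diagnostic sketch (paths follow the Ubuntu layout mentioned above; the directive is just an example):

        ls /etc/php5/apache2/conf.d/               # extra .ini files loaded after php.ini
        grep -r 'upload_max_filesize' /etc/php5/   # find every place a directive is set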

  • Table 'mysql.host' doesn't exist

    - by eriktm
        100913 10:21:29 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        /usr/local/mysql/libexec/mysqld: Table 'mysql.plugin' doesn't exist
        100913 10:21:29 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        100913 10:21:29 [ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist
        100913 10:21:29 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

    This is the output in the mysqld log file when I try to start mysqld with the mysqld_safe command. I tried to run mysql_upgrade to correct the first error, but that command seems to require the server to be started, which is my original problem. Next, it says that the table mysql.host does not exist. I was unable to figure out what causes this.
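
    Those system tables can be recreated without a running server. A sketch of one possible fix, assuming a source-style install under /usr/local/mysql as the log suggests (back up /var/lib/mysql first; the script's location varies by build):

        /usr/local/mysql/scripts/mysql_install_db --user=mysql --datadir=/var/lib/mysql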

  • How does Heartbeat determine when to switch to the secondary? Can you force it to switch?

    - by John
    I've been trying to understand exactly how Heartbeat works. I understand how, when one server dies, it switches to the backup. But, for me, it also switches when the primary has a large increase in workload, and it doesn't always switch at the same value. There doesn't seem to be much information on the web about how it works; the best I've found is this article. How does Heartbeat determine when to switch to the secondary, and how does it determine when to switch back to the primary? Is this an editable setting, and can I force it to switch between one and the other? Sometimes when Heartbeat switches to the secondary, it takes a few days, or even two weeks, before it switches back to the primary. This is well after the primary's traffic has gone down. I'm currently using BlueOnyx, and my Heartbeat settings are:

        Auto Failback: on
        Keepalive:     1 seconds
        Warntime:      10 seconds
        Deadtime:      20 seconds
        Initdead:      30 seconds
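
    Manual switching is possible with the helper scripts that ship with Heartbeat. A sketch, assuming the usual install locations (paths differ between distributions; /usr/share/heartbeat/ and /usr/lib/heartbeat/ are both common):

        /usr/share/heartbeat/hb_standby    # run on the active node: hand resources over
        /usr/share/heartbeat/hb_takeover   # run on the standby node: claim the resources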

  • What is the bash syntax to create a new directory in the directory above?

    - by mozerella
    I aim to make a script for mogrify. The mogrify command will resize images in a directory and put the resized images into a directory on the same level, with the same name as the working directory but with a suffix (_a). The new directory will be moved to another collection later on. Something like this:

        #!/bin/bash
        mkdir ../n_a
        for file in *{.JPG|.jpg}; do mogrify -path ../n_a -resize 1200x1200 -quality 96; done

    I'm guessing ../ denotes the parent dir when working in a child directory, but I need help here. Edit: "n" needs to be replaced with the syntax for the working directory name. Sorry, there was a typo as well in the third script line; it should have read n, not x. Edit 2: this script does exactly what I need, and it's silent:

        #!/bin/bash
        DEST="../${PWD##*/}_a"
        mkdir -p $DEST
        mogrify -path $DEST -resize 1200x1200 -quality 96 *.jpg *.JPG

    Thanks to vgoff for the correct PWD syntax and cesareriva (http://www.cesareriva.com/archives/722) for showing me the DEST trick. Something else: ${PWD##*/}_a does not care for spaces in the directory name, and the script fails; an empty dir is created in the same dir as the images. Found it out now: it needs quotes on the $DEST too, presumably to help mkdir create the dir with a space in the name, and mogrify to write the files to the right place, like this:

        #!/bin/bash
        DEST="../${PWD##*/}_a"
        mkdir -p "$DEST"
        mogrify -path "$DEST" -resize 1200x1200 -quality 96 *.jpg *.JPG
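
    The ${PWD##*/} idiom strips everything up to the last slash, leaving just the working directory's own name. A quick illustration (the path is hypothetical):

        cd "/home/user/My Photos"
        echo "../${PWD##*/}_a"    # prints: ../My Photos_a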

  • crontab still sending emails even with > /dev/null

    - by user2344668
    I have a crontab (root) that runs a script with output redirected to /dev/null, but I always get the emails whenever it runs. I only want to receive error emails.

        # Rackspace driveclient update (12pm MST)
        0 12 * * * /root/scripts/driveclient-update > /dev/null

    The only way I can get the email to stop is to use > /dev/null 2>&1, but then I won't get error emails either. This is happening on three different CentOS servers, two running 6.3 and one 6.4. NOTE: I have read over and over that > /dev/null is supposed to send stdout there and prevent the email if the script produces nothing but stdout, so it works for at least some people; I cannot figure out why it is not working on these servers. Here's an example of where /dev/null is supposed to work: http://www.alphadevx.com/a/384-Suppressing-Cron-Job-Email-Notifications
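
    Since > /dev/null only silences stdout, a mail despite that redirection strongly suggests the script writes to stderr. A quick check, run by hand with the same redirection cron uses:

        /root/scripts/driveclient-update > /dev/null
        # anything that still appears on the terminal is stderr --
        # exactly the stream cron mails to root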

  • grub2 error: out of disk

    - by Carl Smotricz
    I'm trying to make a 250 GB USB hard disk Ubuntu-bootable on a Compaq nc6220 laptop. I've removed all other disks, so /dev/sda (the USB disk) is the only disk other than the CD-ROM. I installed Ubuntu 9.10 to this disk from the live CD, putting the bootloader on /dev/sda. The default system couldn't be booted, and nothing I did in the GRUB menu/command line helped. So I chrooted onto the disk and ran grub-install /dev/sda. That seemed to work fine, but GRUB (1.97 beta 4) keeps coming up with:

        error: out of disk

    Even when I drop to the command line to do something simple like ls or help, it's always the same error message. Any hints for resolving this, please?
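
    From the GRUB prompt itself you can probe how much of the disk the BIOS will actually let GRUB read. A diagnostic sketch (the partition name is an example):

        grub> ls                    # list the drives and partitions GRUB can reach
        grub> set root=(hd0,1)      # point at the partition holding /boot
        grub> ls /boot/grub         # if this also reports "out of disk", the BIOS
                                    # cannot read that far into the USB disk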

  • RSync over SSH hangs and fails with timeout

    - by tx2
    Client: Gentoo, GCC 4.3.4, rsync 3.0.9
    Server: Ubuntu 10.04.4 LTS, rsync 3.0.7

    The client and server are connected through the Internet, at about 2 Mbps. Ping is OK. rsync, called on any files in either direction, hangs on a random file and then, after a timeout, fails with:

        [sender] io timeout after 30 seconds -- exiting
        rsync error: timeout in data send/receive (code 30) at io.c(140) [sender=3.0.9]
        [sender] _exit_cleanup(code=30, file=io.c, line=140): about to call exit(30)

    About 1 try in 10 passes correctly. I've tried adding the SSH options TcpRcvBufPoll=yes and KeepAlive=yes, and disabling and enabling rsync compression -- no changes. How can I make rsync work properly?
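
    Keepalives can be applied at both layers, and rsync has its own timeout and resume knobs. A sketch of options worth trying (all are real rsync/OpenSSH options; values and paths are examples):

        rsync -av --partial --timeout=0 \
            -e 'ssh -o ServerAliveInterval=15 -o ServerAliveCountMax=4' \
            src/ user@host:dst/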

  • Sound plays on headphones and speakers with Lenovo ThinkPad L512 + Ubuntu 10

    - by Oscar Godson
    The only thing really missing from this install is this issue with the sound. I've searched all over the forums, and I found one suggestion where you get the model and codecs and write them to a file; however, I can't seem to find what my "model" is, because none of the postings mention Lenovo laptops. Here is the command they all asked for:

        $ cat /proc/asound/card0/codec#* | grep Codec
        Codec: Realtek ALC269
        Codec: Intel G45 DEVIBX

    With that info, how do I get the model, and how do I get my speakers to stop playing when headphones are plugged in? Also, I don't have any software like PulseAudio installed, so it's not that. Thanks so much to whoever can answer this... the Ubuntu forums are nearly useless; I've never gotten a correct answer back on that site.
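
    The "model" those posts refer to is a quirk option for the snd-hda-intel driver, set in a modprobe config. A sketch, assuming the ALC269 codec reported above (the model name below is only a placeholder; the kernel's HD-Audio-Models.txt lists the valid names per codec):

        # /etc/modprobe.d/alsa-quirk.conf
        options snd-hda-intel model=laptop

        # then reload the driver (or reboot):
        sudo alsa force-reload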

  • Using remote station as original

    - by Neka
    I have 2 computers with exactly the same Debian, config, apps and other stuff, one at work and one at home. It's inconvenient to maintain the same configuration on both stations: upgrading the OS, syncing configuration, etc. Is there a way to use my home station as the "host" and the one at work as a "terminal"? As if I had one HDD shared by 2 computers, but each machine must use its own resources, like the video card and so on. It looks like I need some remote tool such as VNC, but this is not a per-session thing; I need to use the "terminal" machine as if it were the original, all of the time.
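
    Short of a full diskless setup, mounting the home machine's filesystem over SSH gets part of the way there. A minimal sketch, assuming sshfs is installed and the home host is reachable from work (host and paths are examples):

        sshfs user@home-host:/home/user /mnt/home-station -o reconnect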

  • Multiple network cards, controlling where my traffic goes

    - by thefinn93
    This is an Ubuntu 12.04 server install. I have multiple network cards; let's call them eth0 and eth1. eth0 is connected to the Internet, and all of my traffic goes through it until eth1 gets plugged in. Then the machine tries to send everything through eth1, which for various and sundry reasons does not go out to the Internet. The only traffic it doesn't send through eth1 is traffic on eth0's subnet. It also will not accept inbound connections on eth0 from outside of eth0's subnet. I'd like all outbound traffic to go out eth0, but I'd like incoming connections to either card, from any subnet, to work.
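
    Answering from the interface a connection arrived on is the classic job for source-based policy routing. A sketch (gateways, addresses and the table number are examples, not taken from the question):

        # keep the default route on eth0:
        ip route add default via 192.168.1.1 dev eth0
        # give eth1 its own table so replies to eth1 traffic leave via eth1:
        ip route add default via 10.0.0.1 dev eth1 table 100
        ip rule add from 10.0.0.2/32 table 100   # 10.0.0.2 = eth1's own address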

  • Linux 3.12 available as a stable release, with performance gains and reduced power consumption

    Linus Torvalds announced, in a message on the LKML (Linux Kernel Mailing List), the release of the stable version of the Linux 3.12 kernel. Among the improvements is a change in the way the operating frequency of the computer's processor is managed (a modification of the CPUfreq governor algorithm), bringing significant performance gains and a reduction in power consumption...

  • Darkstat unable to show recent statistics.

    - by Caterpillar
    Hello all. We have a Debian machine running as a firewall/gateway, and we have deployed darkstat on it. When we installed darkstat, it showed statistics properly; after a few days it stopped showing recent statistics, although data was still being appended to the existing totals. Can anyone tell me what the problem could be? Thanks in advance.
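
    A restart of the capture process is a cheap first test for a stale capture handle. A sketch, assuming darkstat was started against a single interface (the interface name is an example):

        sudo pkill darkstat
        sudo darkstat -i eth0    # reattach to the interface it should sniff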

  • Allow SFTP in iptables

    - by Kevin Orriss
    I have just purchased a VPS from Linode and am going through the setup guide. I have everything running (apache2, PHP, MySQL, etc.), but I am being denied access via SFTP when using FileZilla to upload a file. This is my second time installing the server, as I missed a section out the first time. I was able to connect to my server through SFTP in FileZilla the first time, and the things I missed out were adding a new user and editing the iptables rules in the firewall. So it would seem that the guide I have been following has blocked SFTP but allowed SSH. Here is the iptables file:

        *filter

        # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT

        # Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allow HTTP and HTTPS connections from anywhere (the normal ports for websites and SSL)
        -A INPUT -p tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j ACCEPT

        # Allow SSH connections
        #
        # The --dport number should be the same port number you set in sshd_config
        #
        -A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT

        # Allow ping
        -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT

    All I would like is a line I can put in there that allows SFTP over port 22. Thank you for reading this.
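
    Worth noting while debugging: SFTP is not a separate service; it runs inside the SSH connection, so the existing --dport 22 rule is the one that matters and no extra line should be needed for it. A quick check that the rule is loaded and actually matching packets:

        sudo iptables -L INPUT -vn | grep ':22'   # packet counters rise on each attempt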

  • How to read iptables -L output?

    - by skrebbel
    I'm rather new to iptables, and I'm trying to understand its output. I tried to RTFM, but to no avail when it comes to little details like these. When iptables -vnL gives me a line such as:

        Chain INPUT (policy DROP 2199 packets, 304K bytes)

    I understand the first part: on incoming data, if the list below this line does not provide any exceptions, then the default policy is to DROP incoming packets. But what does the "2199 packets, 304K bytes" part mean? Is that all the packets that were dropped? Is there any way to find out which packets those were, and where they came from? Thanks!
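
    Those counters belong to the default policy itself, so they count packets that fell through every rule and were handled by the policy. To see what those packets are, one common trick is a trailing LOG rule; a sketch (the prefix string is arbitrary):

        # appended last, so it only sees packets no earlier rule accepted --
        # i.e. exactly what the DROP policy is about to get
        sudo iptables -A INPUT -j LOG --log-prefix 'dropped: '
        # then watch the kernel log, e.g.: dmesg | tail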

  • Permission denied but group permissions look good on redhat

    - by Tony
    I have a user ftpadmin:

        -bash-3.2$ id ftpadmin
        uid=10001(ftpadmin) gid=2525(fsg) groups=2525(fsg),10005(git)

    The important group to note is "git". Then I have my git repository:

        $ ls -al
        drwxrwxr-x 7 git git 4096 Apr 20 14:17 fsg

    So ftpadmin is a member of git, and git has given all permissions to people in the group. Why do I see this when I log in as ftpadmin?

        -bash-3.2$ ls -al /home/git/
        ls: /home/git/fsg: Permission denied

    It seems like I should have permission...
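
    Two things commonly bite here: group membership is only picked up at login, and every directory along the path needs search (x) permission. A quick pair of checks:

        groups             # does the *current* session actually list 'git'?
        ls -ld /home/git   # /home/git itself must grant group (or other) x to traverse into fsg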

  • Can't unlock locked screen, in Ubuntu 12

    - by Camille Goudeseune
    After locking the screen (with a keystroke bound to xlock -nice 8 -mode blank), I can unlock the screen as expected, but only within a few minutes. After it has been locked overnight, when I hit a key (even Ctrl+Alt combos), the screen stays black, with just a brief white flash across the middle of both monitors. The workaround is to ssh in from another host and restart X. Some months ago this happened every few weeks; by now it happens almost every morning. How do I even start to diagnose this? What might I look for in log files? (The intermittency is particularly troubling.) Failing that, is there an alternative to xlock, a.k.a. xlockmore?

        Hardware: 3-year-old HP minitower, GeForce 9800 GT, two Asus LCD monitors
        Software: Ubuntu 12.04.2 LTS, awesome window manager, NVIDIA driver 304.88, xlockmore-5.31
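
    Diagnosis can start from the surviving ssh session right after a failed unlock. A sketch of places to look, plus drop-in alternatives (all of these are real, packaged lockers):

        tail -n 50 ~/.xsession-errors                        # messages from xlock, if any
        grep -iE 'nvidia|\(EE\)' /var/log/Xorg.0.log | tail  # driver errors around that time
        # alternatives to xlock worth trying: xtrlock, slock, i3lock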

  • Ubuntu doesn't mount one of my NTFS disks

    - by Jader Dias
    There is a mountable /dev/sda, NTFS-formatted (the Windows disk).
    There is no /dev/sdb when I ls /dev (the NTFS data disk).
    There is a /dev/sdc, which is another disk of the same model (the Ubuntu disk).

    I can see that Ubuntu detected this unmountable disk in the Disk Utility, which incorrectly states that it is unpartitioned and a RAID volume (it was previously a RAID0 setup with /dev/sdc, but now it is a simple volume, no RAID whatsoever). When I boot Windows 7, it uses this unmountable disk without a glitch. The problem happens in both IDE and AHCI modes. Ubuntu 10.04 Lucid Lynx.
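
    Since the disk used to be half of a RAID0 set, leftover RAID metadata is a likely reason Ubuntu treats it as a RAID member instead of plain NTFS. A diagnostic sketch (erasing the signature is destructive, so back up first):

        sudo dmraid -r               # list any stale RAID signatures found on the disks
        # if the old set still shows up:
        # sudo dmraid -rE /dev/sdb   # erase the stale metadata from that disk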

  • CentOS server. What does it mean when the total used RAM does not equal the sum of RES?

    - by Michael Green
    I'm having a problem with a virtually hosted server running CentOS. In the past month, a (Java-based) process that had been running fine started having problems getting memory when the JVM was started. One strange thing I've noticed is that when I start the process, its entry in top shows it using 470 MB of RAM, while the 'used' memory immediately drops by over 1 GB. If I run top, the total RES used across all processes falls short of the 'used' listed at the top by almost 700 MB. The support person says this means I have a memory leak in my process. I don't know what to believe, because I would expect a memory leak to simply waste the memory the process is allocated, not to consume additional memory that doesn't show up in top. I'm a developer and not a server guy, so I'm appealing to the experts. To me, if the total RES memory doesn't add up to the total 'used', it indicates that something is wrong with my virtual server setup. Would you also suspect a memory-leaking Java process in this case? Output of free before starting the process:

                     total       used       free     shared    buffers     cached
        Mem:       2097152     149264    1947888          0          0          0
        -/+ buffers/cache:      149264    1947888
        Swap:            0          0          0

    And free after:

                     total       used       free     shared    buffers     cached
        Mem:       2097152    1094116    1003036          0          0          0
        -/+ buffers/cache:     1094116    1003036
        Swap:            0          0          0

    So it looks as though the process is using (or causing to be used) nearly 1 GB of RAM. Since the process (based on top) is only using 452 MB, does that mean that the kernel is all of a sudden using an additional 500 MB?
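
    Per-process RSS rarely adds up to the system's 'used' figure, because kernel allocations and shared pages belong to no single process (and buffers/cached both reading 0 in the free output above suggests a container-style virtual server, where the accounting differs again). A couple of quick comparisons, as a sketch:

        grep -E 'Slab|SReclaimable|Shmem|Cached' /proc/meminfo   # kernel-side consumers
        ps aux --sort=-rss | head                                # top RSS processes, for the other side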

  • disable specific PCI device at boot

    - by Rhymoid
    I've just reinstalled Debian on my Sony VAIO laptop, and my dmesg and virtual consoles all get spammed with the same messages over and over again:

        [   59.662381] hub 1-1:1.0: unable to enumerate USB device on port 2
        [   59.901732] usb 1-1.2: new high-speed USB device number 91 using ehci_hcd
        [   59.917940] hub 1-1:1.0: unable to enumerate USB device on port 2
        [   60.157256] usb 1-1.2: new high-speed USB device number 92 using ehci_hcd

    I believe these messages are coming from an internally connected USB device, most likely the webcam (since that's the only thing that doesn't work). The only way I can seem to have it shut up (without killing my actually useful USB ports) is to disable one of the USB host controllers:

        # echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci_hcd/unbind

    This also takes down my Bluetooth interface, but I'm fine with that. I would like this setting to persist, so that I can painlessly use my virtual console again in case I need it. I want my operating system (Debian amd64) to never wake it up, but I don't know how to do this. I've tried to blacklist the module alias for the PCI device, but it seems to be ignored:

        $ cat /sys/bus/pci/devices/0000\:00\:1a.0/modalias
        pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20
        $ cat /etc/modprobe.d/blacklist
        blacklist pci:v00008086d00003B3Csv0000104Dsd00009071bc0Csc03i20

    How do I ensure that this specific PCI device is never automatically activated, without disabling its driver altogether? Edit: the module was renamed recently; now the following works from userland:

        echo "0000:00:1a.0" > /sys/bus/pci/drivers/ehci-pci/unbind

    Still, I'm looking for a way to stop the kernel from binding that device in the first place.
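
    Until a cleaner kernel-side answer turns up, the unbind can at least be made to survive reboots. A minimal sketch: repeat it from /etc/rc.local, which Debian runs late in boot (the PCI address and driver name are the ones from the question's edit):

        # in /etc/rc.local, before the final 'exit 0':
        echo '0000:00:1a.0' > /sys/bus/pci/drivers/ehci-pci/unbind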
