Search Results

Search found 32568 results on 1303 pages for 'linux pwns mac'.


  • Messy Filesystem: Duplicate File Removal from the command line

    - by jrause
    In debian/ubuntu I want to:
    a) create a list of all the files in one directory tree
    b) do the same for a second directory tree
    c) compare the two lists such that only the file NAMES are compared (i.e. just comparing the "file.txt" part, so that "/home/folder/file.txt" == "/home/secondfolder/folder/file.txt")
    d) output a list of all the duplicates
    Can anyone please explain how to do this using scripting languages or regex or something?
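
    A minimal sketch of c) and d) in shell, assuming GNU find (the two paths are the placeholders from the question):

        # List every file as "basename<TAB>fullpath", then report any basename
        # that appears more than once across the two trees.
        find /home/folder /home/secondfolder -type f -printf '%f\t%p\n' \
            | sort \
            | awk -F'\t' 'seen[$1]++ { print "duplicate:", $1, "->", $2 }'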

    Read the article

  • Creating rescue / install USB flash disk for CentOS

    - by wwwpanda
    CentOS installation CDs let you install the OS as well as boot into "rescue" mode, so that you can do a chroot mount on the system partition for problem solving, even if the system is installed on hardware RAID drives. How can we create a similar thing, but on a USB flash drive? I tried to do it with unetbootin, but when booting from the USB stick, the CentOS setup eventually still requires the presence of the CDs. Ultimately, I want to use this USB flash drive for remote disaster recovery through, say, an HP iLO remote console, Dell iDRAC, etc.
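
    If the ISO is a hybrid image (recent CentOS media are; media of this era may not be), a plain dd is one possible route. A sketch, with /dev/sdX standing in for the USB stick:

        # Check whether the image carries a boot sector before trying this;
        # "boot sector" in the output is a good sign.
        file CentOS-*.iso
        # Write the image to the whole stick (this destroys its contents):
        dd if=CentOS-*.iso of=/dev/sdX bs=4M
        sync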

    Read the article

  • Can the expect utility handle a case where the process it spawns also spawns a sub process?

    - by davidparks21
    I'm trying to use expect to handle rsync over an ssh shell, but it gets stuck. If I run my rsync command directly it works (simplified here); it prompts me for my password and copies files to the server:

        rsync -e ssh -<other_params>

    If I then enclose that in expect:

        expect -d -c "spawn rsync -e ssh -<other_params>" -c "expect password:" -c "send mypass\r"

    it does not execute properly: the program exits and no files are copied. Even the debug mode isn't giving many clues. My best guess is that rsync is spawning the ssh process, and the ssh process is what needs to be interacted with, but send is picking up the rsync process id and sending the input there. Any thoughts?
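
    A likely cause is that the command-line form exits right after the last -c, taking the spawned rsync with it. A script-file sketch that waits for the transfer to finish (<other_params> and mypass kept as the question's placeholders):

        #!/usr/bin/expect -f
        # Without the final "expect eof", expect exits immediately after
        # sending the password and the spawned rsync is killed.
        set timeout -1
        spawn rsync -e ssh <other_params>
        expect "password:"
        send "mypass\r"
        expect eof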

    Read the article

  • Feeding the kernel's entropy source from other machines and/or increasing its maximum size

    - by David Spillett
    We have had a little trouble with a small box that acts as a VPN end-point and mail relay for our network, caused by the available entropy for /dev/random being too low (which causes TLS connection attempts by exim to fail). The machine doesn't do anything else, so the normal feed into the entropy pool (interrupt timings from things like disk access) is not enough. As a quick hack I've set up a looping script that reads from /dev/hda at a couple of Mbyte/sec, which keeps it topped up. Other than buying a hardware RNG, is there a clean way of piping in entropy from elsewhere, such as a copy of the data our file server uses for its entropy source? I've spotted several tips for using rng-tools to feed it from /dev/urandom on the same machine, but that "feels dirty". Also, is it possible to increase the maximum pool size? It currently seems to max out at 3585.
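
    For reference, the rng-tools stopgap mentioned above looks like this; it is the "dirty" /dev/urandom feed, not a fix for entropy quality:

        # Feed the kernel pool from /dev/urandom via rngd (from rng-tools).
        rngd -r /dev/urandom -o /dev/random
        # Watch the pool level and its ceiling:
        cat /proc/sys/kernel/random/entropy_avail
        cat /proc/sys/kernel/random/poolsize    # read-only on stock 2.6 kernels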

    Read the article

  • FreeNAS/ZFS/Raid-Z slightly different disks

    - by muskratt
    I'm considering using FreeNAS and "recycling" some of my older 1TB disks. Two are the exact same Western Digital model, while another is a Seagate and the fourth is a Samsung. Since all disks are not equal, I typically create my arrays on a Windows-based server 1GB undersized, to prevent a replacement disk from not being large enough. Dell is notorious for sending replacement SATA disks of a different brand---knock on wood, no problems yet. Since not all drives are created equal and they can vary by a few MBs, is there a way to make FreeNAS/ZFS/raidz work in the same way I do it for my Windows-based servers above? Thanks
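
    One way to get the same effect under FreeBSD/FreeNAS is to build the raidz on slightly undersized partitions rather than whole disks. A sketch, with ada1..ada4 and the 930G size as assumed placeholders:

        # Label each disk and leave headroom so any ~1TB replacement will fit.
        gpart create -s gpt ada1
        gpart add -t freebsd-zfs -s 930G -l disk1 ada1
        # ...repeat for ada2, ada3, ada4 with labels disk2..disk4...
        zpool create tank raidz gpt/disk1 gpt/disk2 gpt/disk3 gpt/disk4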

    Read the article

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address for which I have created an A record, git.mydomain.com. The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter. Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following:

        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished
        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.4 ? ... OK (1.7.4)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... can't check, you have no projects
        Running /home/git/gitlab-shell/bin/check
        Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
            from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check'
            from /home/git/gitlab-shell/bin/check:11:in `<main>'
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file:
        sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished
        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished
        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... can't check, you have no projects
        Projects have satellites? ... can't check, you have no projects
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.3)
        Checking GitLab ... Finished

    It seems like I'm very nearly there. Searching on this error, I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which a lot of the posts I found were about. I have tried appending 127.0.0.1 git.mydomain.com to /etc/hosts and giving the instance a reboot, but there was no change. My config/gitlab.yml file has host: git.mydomain.com in it, and my gitlab-shell/config.yml has gitlab_url: "http://git.mydomain.com/" in it. I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!
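
    A quick triage sketch: the ECONNREFUSED means nothing answered on the gitlab_url that gitlab-shell calls, so the first thing to confirm is that the web tier is actually listening (the service names below follow the source-install guide linked above):

        # Are the GitLab app (unicorn/sidekiq) and the web server up?
        sudo service gitlab status
        sudo service nginx status
        # Does anything answer locally, and does the public name resolve here?
        curl -I http://127.0.0.1/
        curl -I http://git.mydomain.com/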

    Read the article

  • JavaScript doesn't seem to be able to POST form data (nginx server with php-fpm)

    - by Jones
    So the situation is like so: I have an nginx server with php-fpm installed. All is well, and the site scripts all work perfectly. I am able to use HTML to POST form data and it works just fine. However, there seems to be some correlation between JavaScript, the POST method, and nothing happening, and I can't seem to determine the issue. Example: I have a user login widget that uses JavaScript to submit the fields and POST the data to a backend auth script, which returns a server message that then populates the login box saying something like "Login Successful", followed by reloading the page to properly enable content. Problem is, nothing happens when you hit submit. I do know the setup works because I had it working on Apache before migrating. Also, if it makes any difference, the server is an Amazon EC2 instance using the Amazon AMI. I really don't know where to start looking on this one, but below is my default.conf for the server:

        upstream backend_get {
            server 127.0.0.1:80 weight=1;
        }
        upstream backend_post {
            server 127.0.0.1:80 weight=1;
        }
        # Main website url
        server {
            listen 80;
            server_name server.com;
            #charset koi8-r;
            access_log logs/host.access.log main;
            error_log logs/host.error.log;
            location / {
                root /usr/share/nginx/html;
                index index.php index.html index.htm;
                if ($request_method = POST) {
                    proxy_pass http://backend_post;
                    break;
                }
            }
            location ~ \.php$ {
                #fastcgi_pass 127.0.0.1:9000;
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }
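
    Worth noting when debugging this: backend_post points at 127.0.0.1:80, i.e. back at this same nginx, so a POST to location / is proxied to itself rather than ever reaching php-fpm. A sketch for testing the POST path from the server directly, with /auth.php standing in for the real backend script:

        # Bypass the browser/JavaScript layer entirely and watch what nginx does:
        curl -v -X POST -d 'user=test&pass=test' http://127.0.0.1/auth.php
        # Tail both logs while the request runs (adjust the paths to wherever
        # the nginx prefix puts logs/host.*.log):
        tail -f /var/log/nginx/host.access.log /var/log/nginx/host.error.log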

    Read the article

  • How do I start mysqld with options

    - by xiankai
    I need to start up mysqld with command line options, as described here: http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_skip-grant-tables I normally do sudo service mysqld start, but passing the option as sudo service mysqld start --skip-grant-tables does not seem to work. Alternatively, I have tried starting it as a daemon, sudo mysqld_safe --skip-grant-tables & But it seems to terminate too soon:

        131101 04:59:57 mysqld_safe Logging to '/var/lib/mysql/vagrant.example.com.err'.
        131101 04:59:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
        131101 05:00:03 mysqld_safe mysqld from pid file /var/lib/mysql/vagrant.example.com.pid ended

    My last option seems to be specifying the option in /etc/my.cnf instead, but is there any way to do it via the command line?
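
    Two quick things to try, sketched below: read the error log that mysqld_safe names (it usually says exactly why mysqld ended), and if the init script swallows extra arguments (many do), fall back to the config file:

        # Why did mysqld exit so quickly?
        sudo tail -n 50 /var/lib/mysql/vagrant.example.com.err
        # Config-file fallback: add the option under [mysqld] in /etc/my.cnf:
        #     [mysqld]
        #     skip-grant-tables
        sudo service mysqld restart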

    Read the article

  • How can I fix problems with interlaced video jerking/flickering when played back on DVD players? (Mixin

    - by Simon P Stevens
    I'm trying to make a DVD, and the final DVD jerks when played on standalone DVD players. It seems to play fine on PCs. I think the problem may be to do with interlacing settings when rendering the final output, but I'll outline the whole editing process I have followed in case I've made a mistake somewhere else.

    Most of the footage comes from a Sony handycam (one of those mini DVD ones), so it isn't great quality. It was set to "high quality" (haha) and 16:9 aspect ratio when it was recorded. I copy the files directly from the mini DVDs onto the hard drive and import them into Cinelerra. In Cinelerra I set the format to 25fps, 720x576, RGBA-8bit, 16:9, interlaced bottom fields first. When I've finished the editing, I add a Fields to frames effect (set to bottom first) to each video track. I render audio and video separately:

        Audio: AC3, 128kbps
        Video: YUV4MPEG stream, video pipe settings:
            ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct mpeg2video %

    Cinelerra often crashes during the rendering, so I set it to generate a new video file at each label, and combine them using cat when I've got a successful render of each one. Once I've combined them, I use mencoder to re-index them:

        mencoder -forceidx -oac copy -ovc copy merged.m2v -o mergedReIndexed.m2v

    I combine the audio and video files using ffmpeg:

        ffmpeg -i AudioFile.ac3 -i VideoFile.m2v -target dvd -flags +ilme+ildct FinalMovie.mpg

    Then I build the menus with spumux, create the DVD file system with dvdauthor, and finally write it to a DVD-R like this:

        nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd -dvd-video -V VIDEO ./ && eject /dev/dvd

    Originally the DVD flickered badly, so as suggested in a guide I added the Fields to frames effect in Cinelerra. Now it doesn't "flicker", but it has become "jerky" when there is lots of motion, particularly when the camera is moving, so the whole background moves. This is what I've tried so far:

        - Removed "mpeg2video" from the Cinelerra video render pipe.
        - Removed +ilme from the render pipe.
        - Removed +ildct from the render pipe.
        - Removed +ilme from the render audio/video rejoin command.
        - Removed +ildct from the render audio/video rejoin command.
        - Added -alt to the render pipe.
        - Added -alt to the render audio/video rejoin command.
        - Tried with and without the Frames to fields effect in Cinelerra.
        - ...and various combinations of the above.

    I've also tried this: change the Cinelerra fps to 50, use Fields to frames (instead of Frames to fields), render to an intermediate QTforlinux JPEG video stream, re-import that back into Cinelerra, add a Frames to fields effect, and then render that output as normal (@25fps), and I still have the same problem. Has anyone experienced this "jerking" playback before? Can anyone give any suggestions on how to fix it? (Like I say, it plays back fine on a PC, but not on any of the standalone players I've tried.)

    Read the article

  • What can impact throughput at the TCP or OS level?

    - by Jimm
    I am facing a problem where running the same application on different servers yields unexpected performance results. For example, running the application on a particular faster server (faster CPU, more memory), with no load, yields slower performance than running on a less powerful server on the same network. I suspect that either the OS or TCP is causing the slowness on the faster server. I cannot use iperf unless I modify it, because "performance" in my application is defined as: component A sends a message to component B; component B sends an ACK to component A, and only then does component A send the next message. So it is different from what iperf does, which to my knowledge simply tries to push as many messages as possible. Is there a tool that can look at the OS and TCP configuration and suggest the cause of the slowness?
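
    For that send/ACK ping-pong pattern, per-message latency rather than bandwidth is what matters, so a few host-side checks are worth comparing between the two servers. A sketch (peerhost is a placeholder):

        ping -c 20 peerhost                      # baseline round-trip time
        ethtool eth0 | grep -Ei 'speed|duplex'   # speed/duplex mismatch is a classic
        sysctl net.ipv4.tcp_sack net.ipv4.tcp_timestamps net.core.rmem_max
        # Nagle + delayed ACK interacts badly with request/response traffic;
        # in the application, consider setsockopt(..., TCP_NODELAY, ...).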

    Read the article

  • ~/.profile does not run on startup

    - by pocoa
    I want to run some scripts at system startup, so in ~/.profile file, I've added: WORKSPACE="~/Development/workspace" alias workspace="cd $WORKSPACE" So I want this "workspace" alias to be available after the startup. Maybe it's not the right place to define these variables.
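
    Two separate gotchas here, sketched below: ~/.profile is read by login shells (not at system startup, and often not by new terminal windows), and a tilde inside double quotes is not expanded, so cd "$WORKSPACE" would fail anyway. For an alias meant for interactive shells, ~/.bashrc is the usual home:

        # In ~/.bashrc -- note $HOME instead of a quoted ~, which bash won't expand:
        WORKSPACE="$HOME/Development/workspace"
        alias workspace="cd $WORKSPACE"
        # Re-read it in the current shell:
        source ~/.bashrc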

    Read the article

  • Installing software: no configure script, only configure.in.in

    - by ant2009
    Hello, Fedora 12, 2.6.32.9-70.fc12.i686 here. I have downloaded kdirstat from CVS, and I want to compile and install it. However, there is no configure script; the only file I have is configure.in.in. How can I create the configure script? Many thanks for any advice.
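
    KDE3-era CVS checkouts normally generate configure from the autotools inputs. A sketch of both routes; whether kdirstat's checkout ships a Makefile.cvs is an assumption worth checking first:

        # KDE3 convention: the admin/ framework builds configure for you.
        make -f Makefile.cvs && ./configure && make
        # Generic autotools route (assumes a plain configure.in/.ac; a .in.in
        # file usually needs the KDE admin/ preprocessing first):
        autoreconf --install && ./configure && make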

    Read the article

  • HTTPS and HTTP issue on server with SSL

    - by Asghar
    I have a site, www.example.com, for which I purchased and installed an SSL cert, and it was working fine. I also have a subdomain, app.example.com, which was not on SSL. Both www.example.com and app.example.com are on the same IP address. Later we decided to put SSL only on app.example.com, so I configured SSL with app.example.com and it worked fine. Now the issue is that Google is indexing my site as https://www.example.com/, and when users hit the site, an invalid-security warning is issued; when users accept the security exception they are shown my app.example.com contents. Note: I have my SSL configuration files in /etc/httpd/conf.d/ssl.conf; the contents of ssl.conf are in the pastebin link below. NOTE: I tried solutions in .htaccess, like 301 redirects etc., but none of those worked. http://pastebin.com/GCWhpQJq

    Read the article

  • Explanation of nodev and nosuid in fstab

    - by Ivan Kovacevic
    I see those two options constantly suggested on the web when someone describes how to mount a tmpfs or ramfs. Often also with noexec, but I'm specifically interested in nodev and nosuid. I basically hate just blindly repeating what somebody suggested, without real understanding. And since I only see copy/paste instructions on the net regarding this, I ask here. This is from documentation:

        nodev  - Don't interpret block special devices on the filesystem.
        nosuid - Block the operation of suid, and sgid bits.

    But I would like a practical explanation of what could happen if I leave those two out. Let's say that I have configured a tmpfs or ramfs (without these two mentioned options set) that is accessible (read+write) by a specific (non-root) user on the system. What can that user do to harm the system? Excluding the case of consuming all available system memory in the case of ramfs.
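
    A root-only demo of what each flag blocks (an ordinary user cannot run mknod or create a root-owned setuid file, so the realistic threat is removable media or filesystem images whose contents root didn't create; /mnt/scratch is a hypothetical mount point):

        # What nodev prevents: a device node on the mount that aliases a real disk.
        mknod /mnt/scratch/fakedisk b 8 0    # major 8, minor 0 = /dev/sda
        chmod 666 /mnt/scratch/fakedisk      # any user could now read the raw disk
        # What nosuid prevents: a root-owned setuid shell smuggled onto the mount.
        cp /bin/sh /mnt/scratch/sh && chmod 4755 /mnt/scratch/sh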

    Read the article

  • How can I make NetworkManager work?

    - by Yang Jy
    I am running a version of RHEL 6 on my laptop, and lately I've been trying various things with network configuration through the command line. Last night, I tried removing NetworkManager from the system using "yum remove NetworkManager", so that I could have more control of the network through the command line. But the result is, I didn't manage to configure the wireless connection through wpa_supplicant, and I need a wireless connection during my travel to another place. So I need the wireless function back as soon as possible. I typed "yum install NetworkManager" and some version installed, but I don't get an icon on the taskbar, and of course, the network doesn't work. The package I previously removed (about 24 MB) was much larger than the one I just installed (about 2 MB), so I think some dependencies must be missing. How can I install all these dependencies? Please help!
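
    A sketch of the likely fix: on RHEL 6 the tray applet ships separately from the daemon, so reinstalling both (package names assumed from the RHEL 6 layout) and re-enabling the service should restore the icon:

        yum install NetworkManager NetworkManager-gnome
        service NetworkManager start
        chkconfig NetworkManager on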

    Read the article

  • Port forwarding not working properly

    - by sudo work
    I'm trying to host a small web server from my home network; however, I have not been able to successfully port forward ports to the local server. My current network topology looks like this:

        Cable Modem/Router -> Secondary Wireless Router -> Many computers (including server)

    The modem/router I'm using is a Cisco (Scientific Atlanta) DPC2100, provided by my ISP. The wireless router that I'm using as the central hub of my home network is a Linksys E3000. The computer being used as a server is running Ubuntu 10.04 Server Edition. The main issue is that I can't access the server remotely, using my WAN IP address. I have port forwarded my wireless router; however, I believe that I need to somehow set my modem to bridge mode. As far as I can tell, though, this isn't possible. Here are the various IP address settings:

        DPC2100
            WAN: 69.xxx.xxx.xxx
            Internal IP: 192.168.100.1
            Internal Network: 192.168.7.0
        E3000
            IP Address: 192.168.7.2
            Gateway: 192.168.7.1
            Internal IP: 192.168.1.1
            Internal Network: 192.168.1.0
        Server
            IP Address: 192.168.1.123
            Gateway: 192.168.1.1

    Now I can do an nmap at various nodes, and here are the results (from the server):

        nmap localhost:      22,25,53,80,110,139,143,445,631,993,995,3306,5432,8080 open
        nmap 192.168.7.2:    22,25,80 (filtered),110,139,445 open (ports I have forwarded in the E3000)*
        nmap 69.xxx.xxx.xxx: 1720 open

    *For some reason, I can SSH into the server at 192.168.7.2, but not view the website. Here are also some other settings. /etc/hosts:

        127.0.0.1 localhost
        127.0.1.1 servername
        ::1 localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    /etc/apache2/sites-available/default snippet:

        <VirtualHost *:80>
            DocumentRoot /srv/www/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
            ...
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
            ...
            </Directory>
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
            ...
            </Directory>
        </VirtualHost>

    Let me know if you need any other information; some stuff probably slipped my mind.
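
    Since the external scan only shows 1720/tcp open, it looks like the DPC2100 is not passing port 80 on to the E3000 at all. A sketch of the quickest confirmation, run from a host outside the network (the WAN address left elided as in the question):

        # From an outside host: does anything reach the WAN side on 80/22?
        nmap -Pn -p 22,80 69.xxx.xxx.xxx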

    Read the article

  • Migrating to SSH key authentication; implications of adding sbin directories to a user's $PATH

    - by ancillary
    I'm in the process of migrating to keys for authentication on my CentOS boxes. I have it all set up and working, but was a bit taken aback when I noticed service (and other things) didn't work the way I was accustomed to. Even after su'ing to root, I still had to call the full path for it to work (which I assume to be expected/normal behavior). I also assume this is because there are different $PATHs for root (what I was using and am used to) and the newly created, key-using user. Specifically, I noticed the sbin directories of the world missing from the user's path. If I were to add those paths (/sbin, /usr/sbin, /usr/local/sbin) to a profile.d .sh script for this new key-loving user, would I:
    - be opening up the system in ways I shouldn't be?
    - be doing something I needn't do, save for reasons of laziness?
    - create other potential problems?
    Thanks.
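
    For reference, a minimal /etc/profile.d script along those lines (the file name is hypothetical; CentOS's own /etc/profile does the equivalent for root with its pathmunge helper). Security-wise this only changes lookup convenience; the binaries' own permissions still decide what the user can actually run:

        # /etc/profile.d/sbin-path.sh
        if ! echo "$PATH" | grep -q '/usr/sbin' ; then
            PATH="$PATH:/usr/local/sbin:/usr/sbin:/sbin"
        fi
        export PATH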

    Read the article

  • fstab and cifs mounting, possible to store authentication information outside of fstab?

    - by tj111
    I am currently using cifs to mount some network shares (that require authentication) in /etc/fstab. It works excellently, but I would like to move the authentication details (username/pass) outside of fstab and be able to chmod it 600 (as fstab can have issues if I were to change its permissions). I was wondering if it is possible to do this (many-user system, don't want these permissions to be viewable by all users), going from:

        //server/foo/bar /mnt/bar cifs username=user,password=pass,r 0 0

    to:

        //server/foo/bar /mnt/bar cifs <link to permissions>,r 0 0

    (or something analogous to this). Thanks.
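
    mount.cifs supports exactly this through the credentials= option. A sketch; the /etc/cifs-credentials path is an arbitrary choice:

        # Create a root-only credentials file:
        sudo tee /etc/cifs-credentials >/dev/null <<'EOF'
        username=user
        password=pass
        EOF
        sudo chmod 600 /etc/cifs-credentials
        # The fstab line then becomes:
        #   //server/foo/bar /mnt/bar cifs credentials=/etc/cifs-credentials,r 0 0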

    Read the article

  • Dual-head monitor system Kubuntu 10.04

    - by andrii
    I have a notebook, an Asus V6X00V with a 1400x1050 panel (name: LVDS), and a Dell monitor at 1920x1080 (VGA-0). I want to have a dual monitor system. In MS Windows everything works fine. During the Kubuntu installation, the Dell and the main notebook monitors had the right resolutions (1920x1080 & 1400x1050), but after some stage it changed to 1152x864 for both. Now the right resolutions appear only during the turning-off process and when I am using the console, so it shows that the system can use these resolutions; the problem is just in the settings. I am using Size & Orientation - System Settings for setting adjustment. Any option that changes resolution for either monitor or changes position (Absolute, Left Of, Right Of and so on) causes colored line noise on the screens. I have tried xrandr:

        xrandr --output LVDS --mode 1400x1050 --pos 0x0 --output VGA-0 --mode 1920x1080 --right-of LVDS --pos 1400x0

    but have received the same result. I have found out that, for example, the previous version of RandR (1.2; now I have xrandr 1.3) needed an xorg.conf modification to create a big virtual screen, but Kubuntu 10.04 doesn't have an xorg.conf, and I don't know whether I should modify xorg for the 1.3 version of xrandr or not. Please help me to solve this problem.
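
    One thing worth trying, sketched below: set the framebuffer size explicitly in the same xrandr call, so the server doesn't have to grow it implicitly (3320x1080 = the two widths added, tallest height):

        xrandr --fb 3320x1080 \
               --output LVDS  --mode 1400x1050 --pos 0x0 \
               --output VGA-0 --mode 1920x1080 --pos 1400x0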

    Read the article

  • stat and ls show wrong file size (terabytes wrong)

    - by WolleTD
    Ok, I have a bunch of vCard files, all about 200 to 300 bytes in size. While trying to get them archived, I wondered why it takes so long and discovered that there is one file with a wrong size: both ls and stat show a size of about 8.1 terabytes. That's amazing, because my SSD is only about 250 gigabytes in size. There are some other files with wrong sizes too, but this is clearly the biggest one. I already ran fsck on it, but there seem to be no errors in the (ext4) filesystem. How can I get rid of this wrong size? Thanks, Wolle
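
    A sketch for checking and recovering such a file (file.vcf is a placeholder): compare the length field with the blocks actually allocated, then, if only the length is nonsense, salvage the first bytes instead of copying terabytes of sparse zeros:

        stat -c 'size=%s blocks=%b' file.vcf   # huge size, few blocks = bogus length
        du -h file.vcf; du -h --apparent-size file.vcf
        # Salvage the real content (the vCards here are a few hundred bytes):
        head -c 4096 file.vcf > file.vcf.new && mv file.vcf.new file.vcf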

    Read the article

  • DEBIAN repository signing: a step-by-step guide.

    - by jldupont
    I've got many DEBIAN repositories for my projects (e.g. EPAPI, erlang-dbus, etc.). It seems that Synaptic now wants those to be signed for the packages to appear by default. For the DEBIAN kung-fu masters out there, please provide me with a step-by-step guide to achieving this. I've googled a lot but I am still a bit confused on the subject. Update: I use a Launchpad PPA now... saves me from all this trouble.
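
    For the archive, a minimal signing pass over a flat (trivial) repository looks roughly like this; it assumes a GPG key already exists for the signing user and that clients import the matching public key:

        cd /path/to/repo
        dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
        apt-ftparchive release . > Release
        gpg -abs -o Release.gpg Release          # detached, armored signature
        # On client machines, trust the key (key.asc is the exported public key):
        #   wget -O - http://repo.example.com/key.asc | sudo apt-key add -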

    Read the article

  • What's going on with my server? High load, lots of idle CPU time, low disk utilization

    - by Jonathan
    I run a web site and send a legitimate opt-in, daily email newsletter to subscribers. Both the web hosting and email sending are done by the same machine. I have about 100,000 subscribers who have opted in to my daily email newsletter. My PHP script did a pretty good job sending mail to all of them until fairly recently, but as the list has grown I can't keep up. When I run top, I have very high load--usually at least 6 or 7, sometimes as high as 15--even though I only have two CPUs. However, when I run sar, my CPU is idle an average of about 30% of the time. So, it seems I'm not CPU bound. When I run iostat, it seems as though I'm not disk bound because my %util for each device is very low (no more than 5%). Given that I don't seem to be CPU bound or disk bound, why is top reporting such high load? Additionally, since I don't seem to be CPU bound or disk bound, why is my email sending script not able to keep up?

    Here's what I see when running top:

        top - 11:33:28 up 74 days, 18:49, 2 users, load average: 7.65, 8.79, 8.28
        Tasks: 168 total, 5 running, 162 sleeping, 0 stopped, 1 zombie
        Cpu(s): 38.9%us, 58.6%sy, 0.8%ni, 0.0%id, 0.7%wa, 0.2%hi, 0.8%si, 0.0%st
        Mem: 3083012k total, 2144436k used, 938576k free, 281136k buffers
        Swap: 2048248k total, 39164k used, 2009084k free, 1470412k cached

    Here's what I see when running iostat -mx:

        avg-cpu: %user %nice %system %iowait %steal %idle
                 34.80  1.20  55.24    0.37   0.00  8.38
        Device: rrqm/s wrqm/s  r/s   w/s  rMB/s wMB/s avgrq-sz avgqu-sz  await svctm %util
        sda       0.19  71.70 1.59 29.45   0.02  0.07     5.90     0.55  17.82  1.16  3.59
        sda1      0.00   0.00 0.00  0.00   0.00  0.00     7.10     0.00  13.80 13.72  0.00
        sda2      0.05  50.45 1.13 24.57   0.01  0.29    24.25     0.35  13.43  1.15  2.97
        sda3      0.05  10.17 0.20  2.33   0.01  0.05    43.75     0.05  20.96  2.45  0.62
        sda4      0.00   0.00 0.00  0.00   0.00  0.00     2.00     0.00  70.50 70.50  0.00
        sda5      0.07   0.22 0.03  0.07   0.00  0.00    32.84     0.08 856.19  8.03  0.08
        sda6      0.02   5.45 0.03  0.72   0.00  0.02    67.55     0.02  26.72  5.26  0.39
        sda7      0.00   1.56 0.00  0.42   0.00  0.01    38.04     0.00   8.88  5.84  0.24
        sda8      0.01   3.84 0.20  1.35   0.00  0.02    28.55     0.05  31.90  4.08  0.63

    Here's what I see when running sar:

        09:40:02 AM CPU %user %nice %system %iowait %steal %idle
        09:50:01 AM all 30.59  1.01   49.80    0.23   0.00 18.37
        10:00:08 AM all 31.73  0.92   51.66    0.13   0.00 15.55
        10:10:06 AM all 30.43  0.99   48.94    0.26   0.00 19.38
        10:20:01 AM all 29.58  1.00   47.76    0.25   0.00 21.42
        10:30:01 AM all 29.37  1.02   47.30    0.18   0.00 22.13
        10:40:06 AM all 32.50  1.01   52.94    0.16   0.00 13.39
        10:50:01 AM all 30.49  1.00   49.59    0.15   0.00 18.77
        11:00:01 AM all 29.43  0.99   47.71    0.17   0.00 21.71
        11:10:07 AM all 30.26  0.93   49.48    0.83   0.00 18.50
        11:20:02 AM all 29.83  0.81   48.51    1.32   0.00 19.52
        11:30:06 AM all 31.18  0.88   51.33    1.15   0.00 15.47
        Average:    all 26.21  1.15   42.62    0.48   0.00 29.54

    Here are the top handful of processes listed at the particular time I happened to run top -c:

        PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        8180  mysql    16  0 57448 19m  2948 S 26.6  0.7 4702:26 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --pid-file=/var/lib/mysql/bristno.pid --skip-external-locking
        26956 brristno 17  0 0     0    0    Z  8.0  0.0 0:00.24 [php] <defunct>
        26958 brristno 17  0 94408 43m  37m  R  5.0  1.4 0:00.15 /usr/bin/php /home/brristno/public_html/dbv.php
        22852 nobody   16  0 9628  2900 1524 S  0.7  0.1 0:00.17 /usr/local/apache/bin/httpd -k start -DSSL
        8591  brristno 34 19 96896 13m  6652 S  0.3  0.4 0:29.82 /usr/local/bin/php /home/brristno/bin/mailer.php 1qwqyb6 i0gbor
        24469 nobody   16  0 9628  2880 1508 S  0.3  0.1 0:00.08 /usr/local/apache/bin/httpd -k start -DSSL
        25495 nobody   15  0 9628  2876 1500 S  0.3  0.1 0:00.06 /usr/local/apache/bin/httpd -k start -DSSL
        26149 nobody   15  0 9628  2864 1504 S  0.3  0.1 0:00.04 /usr/local/apache/bin/httpd -k start -DSSL
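
    Worth noting when reading those numbers: the Linux load average counts runnable and uninterruptible (D-state) tasks, not CPU use alone, and the top snapshot shows ~58% system time, so something is churning in the kernel. A sketch of quick checks:

        ps -eo state= | sort | uniq -c                 # tasks per state; many D = stuck on I/O
        ps -eo state,pid,wchan:20,cmd | awk '$1=="D"'  # what the D-state tasks are waiting on
        vmstat 1 5                                     # r/b columns show the same split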

    Read the article

  • Converting an ancient RH8 system to VMware ESXi

    - by donatello
    I am curious to know what options I have to convert a very old RedHat 8 machine to a virtual one on ESXi. Looking at VMware Converter, it seems there's an option to log in to the RH8 box using SSH, and from there it will convert to the ESXi server. That makes me a bit nervous, though; what exactly is happening there? The RH8 machine is slightly critical, and if anything messes up it'll likely result in many hours of extra work. :( Another option I thought of was to boot a LiveCD on the RH8 system and create a raw "dd dump" of the disk. A similar method is used to restore the image: I boot a LiveCD on the VM in ESXi and use "dd" to write it to disk. Is there any other option I could use? I'm using the cheap version of ESXi, hence I have no access to the Converter BootCD, so these rather cumbersome methods are the only ones I can think of. :)
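
    For reference, the LiveCD route can be done entirely over the network. A sketch (host names are placeholders, and /dev/hda vs /dev/sda depends on the live environment):

        # On the RH8 box, booted from a LiveCD, stream a compressed image out:
        dd if=/dev/hda bs=64k conv=noerror,sync | gzip -c \
            | ssh user@backuphost 'cat > rh8-disk.img.gz'
        # In the empty VM, booted from the same LiveCD, write it back:
        ssh user@backuphost 'cat rh8-disk.img.gz' | gunzip | dd of=/dev/sda bs=64k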

    Read the article

  • Changing time or offsetting it in OpenVZ contained server

    - by Milad Naseri
    I am trying to run a VPS, a Debian box contained in an OpenVZ container. Obviously, I cannot use date --set or any such command, as the time must be set via the parent node. The owner of the parent node, however, refuses to adjust the time (which is 30 minutes slower than the actual time). All the programs on my system consequently see the wrong time, and this throws a wrench in my syncing. Is there a way to change the system time without interference from the container's administrator? Or perhaps, failing that, a way to make the programs "see" the time 30 minutes ahead of what is reported by the container?
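
    One userspace workaround is libfaketime (the Debian faketime package), which shifts the clock that selected programs see without touching the real one. A sketch; the library path varies by release and architecture:

        apt-get install faketime
        faketime -f '+30m' date      # run one program 30 minutes ahead
        # Or preload it for a whole environment:
        LD_PRELOAD=/usr/lib/faketime/libfaketime.so.1 FAKETIME='+30m' date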

    Read the article
