Search Results

Search found 20360 results on 815 pages for 'capture output'.


  • nginx not returning 304 on cached content

    - by Don H
    I'm using nginx as a reverse proxy with an Apache back-end handling some PHP files. The files return the right expiry headers and proxy_cache does a good job of caching them, but I've noticed that the cached content returns a 200 on every refresh, when it might be more efficient to return a 304 on the cached files. The files in question are generated by PHP. The URLs do not have .php in them as they've been prettified. Any idea why nginx might not be returning a 304 on repeated visits to cached PHP output?

    To clarify: it's using proxy_cache to cache dynamic PHP pages (not static HTML pages generated by PHP). I'm setting Expires headers in the PHP files of now + 24 hours. With that in mind, I was hoping nginx would then be able to return 304s on its cached versions during that 24-hour window.
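
    A quick way to see what the proxy actually does with conditional requests is to replay the validators by hand and watch the status code. A minimal sketch, assuming the page sends a Last-Modified header (the URL is just a placeholder; this only shows whether the proxy revalidates, not why):

        # Grab the Last-Modified value from a cached response
        lm=$(curl -sI http://example.com/pretty-page | awk -F': ' 'tolower($1)=="last-modified"{print $2}' | tr -d '\r')

        # Replay it as a conditional request; a revalidating cache should answer 304
        curl -sI -H "If-Modified-Since: $lm" http://example.com/pretty-page | head -n 1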

    Read the article

  • VMware Player 5.0 or VMware Workstation 9.0 after upgrade to Ubuntu 12.10

    The upgrade process

    Upgrading Ubuntu 12.04 to the latest version 12.10 - aka Quantal Quetzal - is straightforward and you only need to follow the official upgrade instructions. The short version on the console looks like this:

        sudo do-release-upgrade

    This will update the repository entries and start the upgrade process. After some minutes or hours of download and installation, you have to reboot your system once to get the new kernel loaded. At the time of writing, I'm on '3.5.0-17-generic'. And as with any change of the kernel version, you have to compile the necessary kernel modules to get VMware Player or Workstation up and running. Usually, this happens the first time you try to start your VMware software and that's it. Well, again, not so this time.

    Getting the kernel patch

    Luckily, the community over at VMware is very active and you can get a new kernel patch in the online forums here. Download it and put it in a folder you have write permissions for. Then extract the archive on the console like so:

        tar -xjvf vmware9_kernel35_patch.tar.bz2

    Change into the newly created folder:

        cd vmware9_kernel3.5_patch/

    And execute the available shell script as root (superuser) like so:

        sudo ./patch-modules_3.5.0.sh

    This stops any running instances of VMware software, patches the source files and runs the compile process for your active environment. This might take some time depending on your machine, and once completed you can start VMware Player or Workstation as before. In case you apply the patch again, the script will simply quit with the following output:

        /usr/lib/vmware/modules/source/.patched found.
        You have already patched your sources. Exiting

    If you have upgraded or changed your kernel and need to apply the patch again, remove the .patched file first.

    Disclaimer: The patch is "as-is" and was originally created by Artem S. Tashkinov, and later modified by An_tony. Please refer to the VMware forum in case of questions or problems. There are also patches available for older versions of VMware Player or Workstation.
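
    For reference, a minimal sketch of re-applying the patch after a later kernel upgrade, assuming the patch folder from above is still in place:

        # Remove the marker so the patch script is willing to run again,
        # then rebuild the modules against the new kernel headers
        sudo rm /usr/lib/vmware/modules/source/.patched
        cd vmware9_kernel3.5_patch/
        sudo ./patch-modules_3.5.0.sh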

    Read the article

  • Changing languages rapidly causes Linux to crash.

    - by eZanmoto
    So I'm running Xmonad on my college computer (which runs Kubuntu) and whenever I leave my desk, instead of using x-screensaver, which is incredibly buggy and slow, I just change to another workstation, open a terminal and change language to a language which uses symbols instead of letters, and then change back using an aliased command. For example, my .profile has the lines

        alias qwer="setxkbmap jp"
        alias *******="setxkbmap ie"

    where ******* is my password, using Japanese characters. Changing languages seems to be much faster than running x-screensaver. The problem: rapidly changing languages seems to crash Linux; it just won't accept input (and it's not because the language hasn't changed back - nothing is output to the console). I can't use Ctrl+Alt+F1..F7, I can't "raise the elephants", anything; it just won't work. I'm just wondering, is this a known issue, and if so, is there something I can do about it?

    Read the article

  • Redirect physical keyboard input to SSH

    - by Dimme
    I have a Raspberry Pi running Debian Linux with an RFID reader connected to it. The RFID reader behaves like a keyboard: every time I scan a tag it types the number of the tag and then a carriage return. My problem is that I want to redirect the output of the RFID reader to my SSH session. That means anything typed on the physical keyboard of the Pi should be displayed in my SSH window. I have tried

        cat /dev/tty0

    but it won't work because the user is not logged in. Is there a way to disable the login screen after the Pi boots and then redirect all input through SSH?
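
    One rough approach, sketched under the assumption that the reader types onto the first virtual console and that the getty there can be disabled (on older Debian/Raspbian that means commenting out the tty1 line in /etc/inittab): free the console from the login prompt, then read the console device from the SSH session.

        # On the Pi: stop the login prompt from grabbing tty1
        # (this comments out the "1:2345:respawn:/sbin/getty ... tty1" line)
        sudo sed -i 's|^1:2345:respawn:/sbin/getty|#&|' /etc/inittab
        sudo telinit q            # reload inittab without rebooting

        # From the SSH session: keystrokes/scans on the console should show up here
        sudo cat /dev/tty1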

    Read the article

  • Partitions mixing up

    - by anon
    I am trying to install Ubuntu alongside my Windows 7. The problem is that Ubuntu is not detecting all of my partitions and basically clubs many of them together. GParted does the same thing. However, this problem does not arise while I am using Windows 7. I can't paste the image of GParted since I don't have the required reputation... I think this could be due to stray GPT data but am not sure how to take care of it. Can someone help me figure this out? The output of fdisk -l is as follows:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x20000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               63        2047         992+  42  SFS
        /dev/sda2   *         2048      206847      102400   42  SFS
        /dev/sda3           206848   146802687    73297920   42  SFS
        /dev/sda4        146802688   625140399   239168856   42  SFS

    However, I actually have 4 partitions along with 25 GB of unallocated space that I had intended to use for the Ubuntu installation.
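
    One way to check the stray-GPT-data theory is to inspect the disk with a GPT-aware tool and see whether it reports leftover GPT structures next to the MBR. A small sketch, assuming the gdisk package is installed (the device name is taken from the fdisk output above):

        # gdisk scans for both MBR and GPT headers and says which it found
        sudo gdisk -l /dev/sda

        # fixparts, shipped alongside gdisk, can strip leftover GPT data from an MBR disk
        sudo fixparts /dev/sda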

    Read the article

  • 3 Monitors on a Notebook

    - by Rihan Meij
    I would like to use 3 screens on my Dell Inspiron 1720: the laptop's built-in screen as one, and then 2 more screens. The catch is that I want to play racing games with this set-up, so that my main screen is the focus area (the front window, if you will) and the other 2 screens are used for peripheral vision, on the sides. The software that I use (LFS.net) does support multiple screens. However, the notebook can only drive the main screen plus one external screen. So I would need to split this "second" monitor output to 2 screens, one to the left of the main monitor and the other one to the right. Is this possible? Is there perhaps an external card / docking station solution that could help with this? Any advice or ideas are greatly appreciated. Best Regards, Rihan

    Read the article

  • Stream tar.gz file from FTP server

    - by linker
    Here is the situation: I have a tar.gz file on an FTP server which can contain an arbitrary number of files. Now what I'm trying to accomplish is to have this file streamed and uploaded to HDFS through a Hadoop job. The fact that it's Hadoop is not important; in the end what I need to do is write some shell script that would take this file from FTP with wget and write the output to a stream. The reason why I really need to use streams is that there will be a large number of these files, and each file will be huge. It's fairly easy to do if I have a gzipped file and I'm doing something like this:

        wget -O - "ftp://${user}:${pass}@${host}/$file" | zcat

    But I'm not even sure if this is possible for a tar.gz file, especially since there are multiple files in the archive. I'm a bit confused about what direction to take for this; any help would be greatly appreciated.
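
    tar can read an archive from stdin, so the same pipe pattern extends to tar.gz. A sketch under the assumption that the HDFS destination directory exists; the /tmp/unpacked and /data/incoming paths are placeholders:

        # Stream the archive from FTP, unpack it on the fly into a scratch dir,
        # then push the extracted files into HDFS
        mkdir -p /tmp/unpacked
        wget -O - "ftp://${user}:${pass}@${host}/$file" | tar -xzf - -C /tmp/unpacked
        hadoop fs -put /tmp/unpacked/* /data/incoming/

        # If the archive holds a single member, it can stay a pure stream:
        # -O writes the member's contents to stdout, and "fs -put -" reads stdin
        wget -O - "ftp://${user}:${pass}@${host}/$file" | tar -xzOf - | hadoop fs -put - "/data/incoming/${file%.tar.gz}"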

    Read the article

  • Restart an in-use NFS server without interruption (within timeout)

    - by zebediah49
    I have a bunch of compute clients working on jobs, saving output data to a NAS machine. All machines are CentOS 6.2. They mount it via automount NFS, with a timeout of 1200 (default config). The NAS machine needs to be restarted. If I can restart the machine within that 1200 s (20 minute) window, will the clients just block on I/O until it comes back up? A minor interruption (pause) in service is OK, as long as it doesn't cause the running processes to error out. If necessary I could loop through and SIGSTOP all job processes, restart and resume them -- I just don't want to break the open file handles. How can I run a restart like this without killing processes with open files?
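
    For the fallback mentioned above, a minimal sketch of pausing and resuming the jobs around the restart, assuming they all run under one user account (the account name is a placeholder):

        # On each client, freeze the jobs before the NAS goes down...
        pkill -STOP -u computeuser

        # ...restart the NAS, wait for the NFS export to come back, then thaw them
        pkill -CONT -u computeuser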

    Read the article

  • smbclient -L host works. ping host doesn't work. What is missing

    - by DrorCohen
    I upgraded my Ubuntu desktop to 13.10. When I say upgrade I mean installed from scratch on a new partition (the old partition is still available if needed).

    To the problem: I'm trying to ping a host (a Drobo-FS server) by its NetBIOS name. I get "Unknown Host". However, running smbclient -L HostName gives me all the output in the world. Stracing the ping I can see it tries to use resolv.conf (expected to fail) and then, when it gets to the mDNS stuff, it fails (no mdns.allow file) and exits. Here's the hosts line from /etc/nsswitch.conf:

        hosts: files wins mdns4_minimal [NOTFOUND=return] dns mdns4

    I've added wins right after files (and also tried it before dns). Nothing helps. I reboot after every change. What am I missing?
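
    Two quick checks that separate the NSS configuration from the NetBIOS layer itself; a small sketch, with the host name taken from the question and the package names from memory, so worth verifying:

        # Does the resolver stack configured in nsswitch.conf know the name at all?
        getent hosts Drobo-FS

        # Does a raw NetBIOS broadcast lookup find it?
        nmblookup Drobo-FS

        # The wins entry in nsswitch.conf only works if its NSS module is installed
        dpkg -l libnss-winbind winbind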

    Read the article

  • lshw not showing network

    - by triunenature
    Output:

        {User}@{Computer}:~$ sudo lshw -class network
        {User}@{Computer}:~$

    Another test:

        {User}@{Computer}:~$ lspci
        00:00.0 RAM memory: NVIDIA Corporation MCP61 Memory Controller (rev a1)
        00:01.0 ISA bridge: NVIDIA Corporation MCP61 LPC Bridge (rev a2)
        00:01.1 SMBus: NVIDIA Corporation MCP61 SMBus (rev a2)
        00:01.2 RAM memory: NVIDIA Corporation MCP61 Memory Controller (rev a2)
        00:02.0 USB controller: NVIDIA Corporation MCP61 USB 1.1 Controller (rev a3)
        00:02.1 USB controller: NVIDIA Corporation MCP61 USB 2.0 Controller (rev a3)
        00:04.0 PCI bridge: NVIDIA Corporation MCP61 PCI bridge (rev a1)
        00:05.0 Audio device: NVIDIA Corporation MCP61 High Definition Audio (rev a2)
        00:06.0 IDE interface: NVIDIA Corporation MCP61 IDE (rev a2)
        00:07.0 Bridge: NVIDIA Corporation MCP61 Ethernet (rev a2)   <<---- Network Card????
        00:08.0 IDE interface: NVIDIA Corporation MCP61 SATA Controller (rev a2)
        00:08.1 IDE interface: NVIDIA Corporation MCP61 SATA Controller (rev a2)
        00:09.0 PCI bridge: NVIDIA Corporation MCP61 PCI Express bridge (rev a2)
        00:0b.0 PCI bridge: NVIDIA Corporation MCP61 PCI Express bridge (rev a2)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        01:05.0 FireWire (IEEE 1394): LSI Corporation FW322/323 (rev 70)
        02:00.0 VGA compatible controller: NVIDIA Corporation G96 [GeForce 9500 GT] (rev a1)

    If you look at 00:07.0, I believe that is the network card. However, lshw doesn't show it. I mainly need information on the network speed (10/100/1000 Mbps), though knowing why my system isn't working would be nice.
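
    Since that Ethernet function is listed under the Bridge class rather than the network class, two hedged ways to get at it anyway; the PCI address comes from the listing above, while the interface name eth0 is just a guess:

        # Ask lspci for the full details of that one function, including the driver in use
        sudo lspci -v -s 00:07.0

        # If a driver has bound it, the negotiated link speed is on the interface
        sudo ethtool eth0 | grep -i speed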

    Read the article

  • Cherokee high virtual memory usage even after disabling I/O Cache

    - by nidheeshdas
    Hi all, I've got Ubuntu 10.04 LTS 64-bit running in an OpenVZ container and Cherokee 1.0.8 compiled from source. The virtual memory usage of cherokee-worker is around 430 MB even after disabling the I/O cache (Advanced - I/O Cache - NOT enabled). Is this issue particular to OpenVZ? Many people have reported successfully reducing virtual memory usage by disabling the I/O cache. htop output: http://imgur.com/z5JEL.jpg (newbies are not allowed to post images). Thanks in advance. nidheeshdas

    Read the article

  • Exceptions from automongobackup, yet script completes

    - by chakram88
    I am using automongobackup to, well, automate the backups of mongodb. Output from the script (to STDERR) has the following exceptions (but the backup completes, and the dump files are created):

        ###### WARNING ######
        STDERR written to during mongodump execution.
        The backup probably succeeded, as mongodump sometimes writes to STDERR, but you may wish to scan the error log below:

        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: HostAndPort: bad port #
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed
        exception: connect failed

    I know that the host & port are correct. If I run

        mongodump --host=127.0.0.1:27017 --journal

    (which is the effective command from automongobackup, based on the options set and my reading of the source code), everything runs clean without any error reporting and the dump files are created as expected. Why would automongobackup report connection errors, even though it does create the dump files, yet a straight call to mongodump does not?

    Debian 6.0 Lenny (from Linode image: Latest 3.2 (3.2.1-x86_64-linode23)), AutoMongoBackup VER 0.9, mongodb v 2.0.2

    Read the article

  • Enlarge partition on SD card

    - by chenwj
    I have followed Cloning an SD card onto a larger SD card to clone a 2G SD card to a 32G SD card, and the file system is ext4. However, on the 32G SD card I can only see 2G of space available. Is there a way to maximize it? Here is the output of fdisk:

        Command (m for help): p

        Disk /dev/sdb: 32.0 GB, 32026656768 bytes
        64 heads, 32 sectors/track, 30543 cylinders, total 62552064 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e015a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           32      147455       73712    c  W95 FAT32 (LBA)
        /dev/sdb2           147456     3994623     1923584   83  Linux

    I want to make /dev/sdb2 use up the remaining space. I tried resize2fs /dev/sdb after the dd, but get the message below:

        $ sudo resize2fs /dev/sdb
        resize2fs 1.42 (29-Nov-2011)
        resize2fs: Bad magic number in super-block while trying to open /dev/sdb
        Couldn't find valid filesystem superblock.

    Any idea on what I am doing wrong? Thanks.
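
    The ext4 filesystem lives on the partition, not the whole device, so resize2fs needs /dev/sdb2 - and the partition itself has to be grown first. A hedged sketch of one way to do both, assuming the card is unmounted and /dev/sdb2 really is the last partition (double-check the start sector before writing anything):

        # Grow the second partition to the end of the card:
        # delete it and recreate it with the SAME start sector (147456)
        sudo fdisk /dev/sdb       # keystrokes: d, 2, n, p, 2, 147456, <Enter>, w

        # Then grow the ext4 filesystem to fill the enlarged partition
        sudo e2fsck -f /dev/sdb2
        sudo resize2fs /dev/sdb2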

    Read the article

  • How to remove a directory which looks corrupted

    - by hap497
    I am using Ubuntu 9.10. When I examine a directory, it shows as '?' for user/ownership. How can I remove it?

        -rw-r--r-- 1 hap497 hap497 1822 2010-01-28 22:48 IntSizeHash.h
        d????????? ? ?      ?         ?                ? .libs/
        -rw-r--r-- 1 hap497 hap497  194 2010-02-25 12:12 libwebkit_1_0_la-BitmapImage.lo

    I have tried rm and sudo rm but get an error:

        $ sudo rm -Rf .libs
        rm: cannot remove `.libs': Input/output error

    Thank you for any pointers.
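
    A '?' listing combined with an Input/output error usually points at filesystem damage rather than a permissions problem, so a hedged first step is to check the filesystem from outside before retrying the rm (the device name is a placeholder - take it from df or mount):

        # Find which device the directory lives on
        df /path/to/project

        # Unmount it (or boot a live CD if it is the root filesystem),
        # then let fsck repair the metadata
        sudo umount /dev/sdXN
        sudo fsck -f /dev/sdXN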

    Read the article

  • USB ports not working on Xubuntu 12.04 LTS

    - by Zchpyvr
    Basically, the USB ports on my IBM Thinkpad T43 have stopped working most of the time. Sometimes devices mount and appear in Nautilus, but other times they aren't recognized. The timeline of events on this laptop for the past few months:

    - Started having problems after using a USB port hub. The ports would sometimes stop working, but the occasional reboot would fix them.
    - Re-partitioned/expanded my Xubuntu partition (I have a Windows XP/Xubuntu dual boot).
    - Now, the majority of the time, the USB ports fail to recognize devices. In addition, the few times devices are recognized, they may suddenly disconnect.

    Things I've noticed:

    - The devices still receive power from my computer (I can charge my iPod, etc.).
    - I can't understand dmesg output.
    - I don't know if lsusb is telling me anything useful.

    My dmesg output is here: http://pastebin.com/KdNxHcFC - things start to get weird at the bottom of the file. And my lsusb is:

        $ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

    Read the article

  • loading a heightmap as texture in shader

    - by wtherapy
    I have a height map of 256x256, containing, for each cell, not only the height as a plain float value (not 0-1) but also 2 gradient values (for X and Y), also as plain float values (not 0-1). I have uploaded the texture via normal texture loading:

        glEnable( GL_TEXTURE_2D );
        glGenTextures( 1, &m_uglID );
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glBindTexture( GL_TEXTURE_2D , m_uglID );
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB32F, unW + 1, unH + 1, 0, GL_RGB, GL_FLOAT, pvBytes );
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_LINEAR);
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_LINEAR);
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        DEBUG_OUTPUT("Err %x\n", glGetError());
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        DEBUG_OUTPUT("Err %x\n", glGetError());

    As a parenthesis, the debug output is:

        Err 500
        Err 0
        Err 0
        Err 0
        Err 500
        Err 500
        Err 0
        Err 0

    pvBytes is a 256x256 array of

        typedef struct _tGradientHeightCell {
            float v;
            float px;
            float py;
        } TGradientHeightCell, *LPTGradientHeightCell;

    then,

        m_ugl_HeightMapTexture = glGetUniformLocation(m_uglProgram, "TexHeightMap");

    I load it via:

        glEnable(GL_TEXTURE_2D );
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D , pTexture->GetID());
        glUniform1i(m_ugl_HeightMapTexture, 0);

    In the shader, I just access it:

        uniform sampler2D TexHeightMap;

        vec4 GetVertCellParameters( uint i, uint j )
        {
            return texture( TexHeightMap, vec2( i, j ) );
        }

        vec4 vH00 = GetVertCellParameters( i, j );

    My problem is that, when passing negative values in one of the fields of TGradientHeightCell (v, px, py), the texture is corrupted. I need the values to be passed exactly as I have them in memory. Any help appreciated.

    Read the article

  • How do I use LibreOffice's 3d transitions in Impress?

    - by Lvkz
    How can I get the 3D transitions working in Impress? I have a presentation coming up soon, and as a requirement of the course the professor wants us to use transitions in our "Power Point" chapter. Obviously I have been using LibreOffice for every exercise, but the native transitions are kind of lame, so whenever I install a newer version of Ubuntu I always install the extra package for the transitions. I have the 3D package installed: libreoffice-ogltrans 1:3.4.3-3ubuntu2. In previous versions of Ubuntu it worked perfectly, but for some reason it is not working in this release. I have LibreOffice 3.4.3 and Ubuntu Oneiric Ocelot (11.10), and my hardware should not be the problem because I had it working on previous releases. I know it's not critical, but for my class it's a pretty important deal, and it could be a perfect opportunity to show the class that the cool stuff is not only in Windows. On the recommendation of Eliah Kagan, I'm putting up the output of sudo lshw -C video:

          *-display:0
               description: VGA compatible controller
               product: Mobile 4 Series Chipset Integrated Graphics Controller
               vendor: Intel Corporation
               physical id: 2
               bus info: pci@0000:00:02.0
               version: 07
               width: 64 bits
               clock: 33MHz
               capabilities: msi pm vga_controller bus_master cap_list rom
               configuration: driver=i915 latency=0
               resources: irq:46 memory:f6c00000-f6ffffff memory:e0000000-efffffff ioport:efe8(size=8)
          *-display:1 UNCLAIMED
               description: Display controller
               product: Mobile 4 Series Chipset Integrated Graphics Controller
               vendor: Intel Corporation
               physical id: 2.1
               bus info: pci@0000:00:02.1
               version: 07
               width: 64 bits
               clock: 33MHz
               capabilities: pm bus_master cap_list
               configuration: latency=0
               resources: memory:f6b00000-f6bfffff

    And I'm not using Unity - it isn't there anyway - I'm using Gnome Shell instead.

    Read the article

  • ATI 9550 shows up as laptop in displays after update to 12.04, how do I fix this?

    - by D_H
    My guess is this is on here somewhere, but I have searched and even tried looking at a bunch of other similar video problems. My ATI 9550 shows up as a laptop in Displays after the update to Ubuntu 12.04; how do I fix this? I found the following command on another post: sudo lshw -c video. I get this when I run that command:

          *-display:0 UNCLAIMED
               description: VGA compatible controller
               product: RV350 AS [Radeon 9550]
               vendor: Hynix Semiconductor (Hyundai Electronics)
               physical id: 0
               bus info: pci@0000:01:00.0
               version: 00
               width: 32 bits
               clock: 66MHz
               capabilities: agp agp-3.0 pm vga_controller bus_master cap_list
               configuration: latency=32 mingnt=8
               resources: memory:c0000000-cfffffff ioport:c000(size=256) memory:e5000000-e500ffff memory:e4000000-e401ffff
          *-display:1 UNCLAIMED
               description: Display controller
               product: RV350 AS [Radeon 9550] (Secondary)
               vendor: Hynix Semiconductor (Hyundai Electronics)
               physical id: 0.1
               bus info: pci@0000:01:00.1
               version: 00
               width: 32 bits
               clock: 66MHz
               capabilities: pm cap_list
               configuration: latency=32 mingnt=8
               resources: memory:d0000000-dfffffff memory:e5010000-e501ffff

    This is way more info than the command showed in the other post and, as far as I can tell, it looks right. It doesn't look to me like what a laptop video card would list. I also ran xrandr, and it reports this:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 640 x 480, current 1280 x 1024, maximum 1280 x 1024
        default connected 1280x1024+0+0 0mm x 0mm
           1280x1024       0.0*
           1024x768        0.0
           800x600         0.0
           640x480         0.0

    This is what Displays shows for resolutions, but only 1280x1024 works; the others produce tearing in the video. Also, I should have mentioned that 3D mode does not work. I have tried the ATI/AMD drivers: the new one won't load and older ones won't work. I found out the newer driver no longer supports the 9550.

    Read the article

  • Create Adjustable Depth of Field Photos with a DSLR

    - by Jason Fitzpatrick
    If you're fascinated by the Lytro camera (a camera that lets you change the focus after you've taken the photo), this DSLR hack provides similar post-photo focus processing without the $400 price tag. Photography tinkerers at The Chaos Collective came up with a clever way of mimicking the adjustable depth-of-field effect of the Lytro camera. The secret sauce in their technique is setting the camera to manual focus and capturing a short 2-3 second video clip while they rotate the focus through the entire focal range. From there, they use a simple applet to separate out each frame of the video. In the interactive demo, anywhere you click in the photo shifts the focus to that point, just like the post-processing in the Lytro camera. It's a different approach to the problem but it yields roughly the same output. Hit up the link below for the full rundown on their technique and how you can get started using it with your own video-enabled DSLR.

    Camera HACK: DOF-Changeable Photos with an SLR [via Hack A Day]

    Read the article

  • Kubuntu guest on Windows 8.1 Hyper-V won't shut down completely

    - by DarkMoon
    I've got a Windows 8.1 Professional laptop with Hyper-V installed, and a Kubuntu 14.04 Desktop VM. When I shut down the Kubuntu VM, most of the time it gets to the logo screen and just sits there. It's not frozen, because I can see the glow around the logo brighten and fade. I have installed the four Hyper-V modules, and lsmod shows them all running fine before the shutdown. Also, once it's stuck on the logo screen, if I send a Ctrl-Alt-Del to the VM, it restarts immediately. Does anyone have any idea where I'd begin troubleshooting this? UPDATE: I've disabled the startup and shutdown screens, and now I can see this output when it's stopped. Hopefully this sheds more light on the problem.

    Read the article

  • How come I cannot make this file executable (chmod permissions)?

    - by bappi48
    I downloaded the Android Development Tools for Linux (ADT) and placed them in my home directory. After unzipping the files, when I double-click the "eclipse" executable file, Eclipse works perfectly fine. But if I unzip the ADT in a different directory, in my case drive E: (which is shown when I boot into Windows 7), then double-clicking the same "eclipse" executable file does not run Eclipse. It shows this error message:

        Could not display /media/Software/00.AndroidLinux/ADT/eclipse/eclipse.
        There is no application installed for executable files.
        Do you want to search for an application to open this file?

    If I press Yes in the dialog, it finds "Pypar2", which is not my solution. I found that the "eclipse" file permissions are the following:

        -rw------- 1 tanvir tanvir 63050 Feb  4 19:05 eclipse

    I tried to change the permissions with "chmod +x eclipse", but it's no use; the command does not change the file permissions at all in this case. What should I do? Relevant output of cat /proc/mounts:

        /dev/sda6 /media/Software fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0

    Please note that I'm new to Ubuntu and still learning day by day.
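
    That fuseblk line suggests the E: drive is an NTFS partition mounted through ntfs-3g, and on such mounts the per-file mode bits come from the mount options rather than from chmod. A hedged sketch of mounting it so files get the execute bit - the device and mount point are taken from the /proc/mounts line above, and the mask values are an assumption to tune:

        # One-off test: remount the partition with permissive masks
        sudo umount /media/Software
        sudo mount -t ntfs-3g -o uid=$(id -u),gid=$(id -g),fmask=022,dmask=022 /dev/sda6 /media/Software

        # Persistent version: an /etc/fstab line along the same lines
        # /dev/sda6  /media/Software  ntfs-3g  defaults,uid=1000,gid=1000,fmask=022,dmask=022  0  0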

    Read the article

  • Crontab stopped unexpectedly

    - by naka
    I have the following entries in the crontab:

        0 0 * * * /mnt/voylla-production/releases/20131111011431/script/rubber cron --task util:rotate_logs --directory=/mnt/voylla-production/releases/20131111011431/log
        0 4 * * * /mnt/voylla-production/releases/20131111011431/voylla_scripts/cj_daily.sh
        0 2 * * 6 /mnt/voylla-production/releases/20131111011431/voylla_scripts/cj_saturday.sh

    It worked fine until today. It didn't run as scheduled after a Capistrano deploy, and I didn't get a mail either. It worked fine earlier, and I am unable to understand what went wrong. The only change that was made was the deploy, but I think that should not affect the cron jobs. I tried using pgrep cron to see if cron is running; it gives 904 as output. Could someone please help? Thanks
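
    Two checks that usually narrow this down: whether cron actually fired the jobs at the scheduled times, and whether the hard-coded release path still exists after the deploy (Capistrano typically keeps only the last few releases). A small sketch, using the path from the crontab above:

        # Did cron try to run anything at the scheduled times?
        grep CRON /var/log/syslog | tail -n 50

        # Does the release directory the crontab points at still exist?
        ls -ld /mnt/voylla-production/releases/20131111011431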

    Read the article

  • Graphing/charting of CPU utilisation [on hold]

    - by Peter
    So Nagios can be good at graphing particular resource utilisation or other metrics, but I'm looking for a tool that can output a chart or other graphical representation of how much CPU time/CPU utilisation all services on a server are currently consuming. I think New Relic could probably achieve this to an extent, but I was wondering if there is a popular open-source app used for this. In case I am explaining this badly, my actual problem is that I have a shared server with suexec enabled (i.e. httpd CGI running under multiple user accounts). I'd like to know which users are using the most CPU during different periods of the day.
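
    Even without a graphing stack, a per-user CPU breakdown can be sampled from ps and fed into whatever charting tool is already in place; a minimal sketch:

        # Sum current %CPU by user, highest first
        ps -eo user:20,pcpu --no-headers \
          | awk '{cpu[$1] += $2} END {for (u in cpu) printf "%-20s %6.1f\n", u, cpu[u]}' \
          | sort -k2 -rn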

    Read the article

  • Verify linux user passwords

    - by zero_r
    Hi there. I've got a Linux server that has several dozen users. I also have the cleartext password for every user (I know - bad security). I would like to know if the passwords are correct. Since the users are all FTP users and have the nologin shell, I cannot just write a script that checks whether login works. How can I do a local check on the passwords? The script output could look like this:

        $ check_userpw < user_pw_list.txt
        user1 ok
        user2 ok
        user3 mismatch!
        user4 ok

    Thanks
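
    One local approach is to re-hash each cleartext password with the salt already stored in /etc/shadow and compare the result. A sketch along those lines, assuming the hashes are SHA-512 crypt (the $6$ prefix, no rounds= option), mkpasswd from the whois package is installed, and user_pw_list.txt holds "user password" pairs; the script name and file format are just the ones from the question:

        #!/bin/bash
        # check_userpw: read "user password" pairs on stdin, compare against /etc/shadow
        # Must run as root to read the shadow file, e.g.: sudo ./check_userpw < user_pw_list.txt
        while read -r user pass; do
            stored=$(awk -F: -v u="$user" '$1 == u {print $2}' /etc/shadow)
            if [ -z "$stored" ]; then
                echo "$user no such user"
                continue
            fi
            salt=$(echo "$stored" | cut -d'$' -f3)          # entry looks like $6$<salt>$<hash>
            recomputed=$(mkpasswd -m sha-512 "$pass" "$salt")
            if [ "$recomputed" = "$stored" ]; then
                echo "$user ok"
            else
                echo "$user mismatch!"
            fi
        done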

    Read the article

  • What do I need to use an XBOX 360 with my PC monitor and speakers?

    - by heishe
    I've been thinking about getting an Xbox 360, mostly for games on XBLA or some exclusive titles, since they're relatively cheap to get used now. But I'm a student and live in a small apartment that has no TV and no place to put a new TV (money to buy one wouldn't be a problem), so I've been thinking of using the console with my PC monitor and my speakers. My monitor only has HDMI and VGA input (no direct DVI), so I'm guessing I somehow need to split the audio and video signals coming from the Xbox (or does the Xbox have a direct 720p VGA output plus external connectors for my speakers?). What do I need to make this happen?

    Read the article
