How can I prevent the screensaver from running on Windows XP? I can't disable it because of some security software, but a small program that simulates a keypress every 2-3 minutes should do the job.
As a webmaster, I sometimes consider taking on hosting-reseller work, but I am not sure if it is a good choice. Has anyone here done this? I am looking for some shared experience. Thanks.
I'm trying to write an upstart file for OpenConnect. The task is pretty simple, but I'm stuck: I don't want to provide the username and password in a config file; I want to prompt the user for them each time.
The upstart file, placed in /etc/init/openconnect.conf, is:
exec /usr/sbin/openconnect --script=/etc/vpnc/vpnc-script my-gw.example.com
However, when I execute
start openconnect
the process is backgrounded immediately and I get no chance to provide input.
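For what it's worth, a wrapper script like the one below does what I want when started by hand from a terminal (openconnect's --passwd-on-stdin reads the password from standard input; "myuser" is a placeholder). Under upstart, though, there is no controlling terminal to read from:
#!/bin/bash
# Works interactively, but not under upstart: the job has no
# controlling tty, so read has nothing to prompt on.
# "myuser" is a placeholder username.
read -s -p "VPN password: " PASS; echo
echo "$PASS" | /usr/sbin/openconnect --user=myuser --passwd-on-stdin \
    --script=/etc/vpnc/vpnc-script my-gw.example.com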
How can I make this upstart job ask the user for input?
I have my work IMAP account set up in Thunderbird 3.0.3.
It's becoming very annoying that in group emails where I'm on the CC or TO list, hitting "Reply To All" puts me in the recipients list, so when I send the email I also get a copy of my own message.
I haven't found where to disable or modify this.
This doesn't happen with my Gmail account (also set up in Thunderbird).
I'm trying to get an OSX Lion Server to provide a static route to its clients (all OSX Lion) over DHCP. I can't get the client to actually apply the static route.
So far, I've managed to get the DHCP server (bootpd) to serve DHCP option 33 (static_route) in its DHCP offers by editing /etc/bootpd.plist and adding something like:
<key>dhcp_option_33</key>
<data>[some base64 goes here]</data>
... and restarting the DHCP service.
On the client, I've managed to get it to request the option by modifying the DHCPRequestedParameterList key and adding option 33:
<key>DHCPRequestedParameterList</key>
<array>
    ... keys snipped for brevity ...
    <integer>33</integer>
</array>
... and rebooting the client. This makes the client request the static_route option from the DHCP server (I can see the proper output in ipconfig getpacket en0), but it doesn't actually apply the route.
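Applying the route by hand on the client is the obvious workaround, and it's what I'm hoping DHCP would do for me automatically. The network and gateway below are placeholder values, standing in for whatever the base64 in dhcp_option_33 encodes:
# placeholder network/gateway values
sudo route -n add -net 10.1.2.0/24 192.168.1.1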
Has anyone ever succeeded in applying static_route options on OSX clients through DHCP?
I just installed nginx 1.2.4 and PHP 5.4.0 (from svn, with PHP-FPM) on CentOS 5.8 x64.
The problem I have is that PHP crashes the moment I run any social OAuth scripts. I have tried to log into Facebook, Twitter, and Google with various scripts that I know work on my other servers. When I load the scripts, I get a 502 error from nginx, and I find these errors in the logs:
In the php-fpm log:
WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start
In the nginx log:
ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream
From what I can see, it goes wrong when PHP tries to make a request to any of the OAuth servers. For example, https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source is one of the scripts that works perfectly on my other machines but causes PHP to crash here.
I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway, which seems to be a similar problem, but I cannot find a way to solve it.
+++ UPDATE +++
Now I have been doing some debugging in one of the scripts that is playing up.
Line 808 of http://pastebin.com/gSnzRtXb runs the curl_exec() call, and that is the moment of the crash. If I put echo 'test'; exit; just above that line, it echoes correctly; if I put it just below that line, PHP crashes. So it is line 808 that causes the crash.
So I made a very simple script to do some testing, http://pastebin.com/Rshnyhcm, which also uses curl_exec, but that one runs just fine.
So I started to dig deeper into that request from the Facebook script, to see what values the $opts array from line 806 contains.
Output of that array is: http://pastebin.com/Cq9ffd3R
I still have no clue what the problem is :(
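My next step is to grab a backtrace from the crashing worker, along these lines (this assumes PHP was built with debug symbols, and that the pool workers show up in ps as "php-fpm: pool www"):
# attach gdb to a php-fpm worker, then reproduce the crash in the browser
gdb -p "$(pgrep -f 'php-fpm: pool www' | head -n1)"
# at the (gdb) prompt: continue, trigger the crash, then: bt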
I have 4 drives: 2x640GB and 2x1TB.
My array is made up of four 640GB partitions, one at the beginning of each drive.
I want to replace both 640GB drives with 1TB drives.
I understand I need to:
1) fail a disk
2) replace with new
3) partition
4) add disk to array
My question is, when I create the new partition on the new 1TB drive, do I create a 1TB "Raid Auto Detect" partition? Or do I create another 640GB partition and grow it later?
Or perhaps the same question could be worded: after I replace the drives, how do I grow the 640GB raid partitions to fill the rest of each 1TB drive?
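To make the question concrete, the sequence I have in mind is roughly this (device names are examples taken from the fdisk output below, and the final grow steps are the part I'm unsure about):
mdadm /dev/md0 --fail /dev/sdd1
mdadm /dev/md0 --remove /dev/sdd1
# swap in the 1TB drive, create a partition (full 1TB?) of type fd
# (Linux raid autodetect), then re-add it and let it resync:
mdadm /dev/md0 --add /dev/sdd1
# repeat for the second 640GB drive; once both have resynced:
mdadm --grow /dev/md0 --size=max
# then grow whatever sits on top, e.g. if there is an ext3/ext4
# filesystem directly on the array:
resize2fs /dev/md0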
fdisk info:
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xe3d0900f
Device Boot Start End Blocks Id System
/dev/sdb1 1 77825 625129281 fd Linux raid autodetect
/dev/sdb2 77826 121601 351630720 83 Linux
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xc0b23adf
Device Boot Start End Blocks Id System
/dev/sdc1 1 77825 625129281 fd Linux raid autodetect
/dev/sdc2 77826 121601 351630720 83 Linux
Disk /dev/sdd: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x582c8b94
Device Boot Start End Blocks Id System
/dev/sdd1 1 77825 625129281 fd Linux raid autodetect
Disk /dev/sde: 640.1 GB, 640135028736 bytes
255 heads, 63 sectors/track, 77825 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xbc33313a
Device Boot Start End Blocks Id System
/dev/sde1 1 77825 625129281 fd Linux raid autodetect
Disk /dev/md0: 1920.4 GB, 1920396951552 bytes
2 heads, 4 sectors/track, 468846912 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
I have a Windows 2003 server with a whole load of PDFs on it that need to be accessed from various computers, both on the local network and off it, including mobile devices; files also have to be sent to it. Where do I start? The most important thing (after getting the job done) is security.
Every time I burn a CD using Nero (or anything else) on my Windows XP machine, the entire system locks up for like 5 seconds. It happens when the burner starts making noise. Then when it comes back, it starts burning and does a great job of it. What is this lag and how can I stop it?
We are using Exchange 2007 for our mail. In our configuration, we need to add an alias to each user's mailbox. When we do, the Edge server (another Exchange 2007 box) rejects the alias with a User Unknown error until the next morning.
I seem to recall that in Exchange 2003 you could force an update from the Management Console, but I cannot find a way to do it in 2007. It is obvious that a sync job is scheduled to run each night, but I cannot find it.
I'm looking for a higher-performance build for our 1RU Dell R320 servers, in terms of IOPS.
Right now I'm fairly settled on:
4 x 600 GB 3.5" 15K RPM SAS
RAID 1+0 array
This should give good performance, but if possible I also want to add an SSD cache into the mix; I'm just not sure if there's enough room.
According to the tech-specs, there are only up to 4 total 3.5" drive bays available.
Is there any way to fit at least a single SSD alongside the 4x3.5" drives? I was hoping there is a special spot for the cache SSD (though from memory, I doubt there'd be room). Or am I right in thinking that cache drives are simply plugged in "normally", just like any other drive, but nominated as CacheCade drives in the PERC controller?
Are there any options for having the 4x600GB RAID 10 array, and the SSD cache drive, too?
Based on the tech-specs (with up to 8x2.5" drives), maybe I need to use 2.5" SAS drives, leaving another 4 bays spare, which is plenty of room for the SSD cache drive.
Has anyone achieved this using 3.5" drives, somehow?
I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs.
The documentation tells me to use "qsub" for load balancing / submission of many jobs. Therefore I assumed this means the cluster has Grid Engine.
However, all my Grid Engine scripts failed to run. I checked the documentation and it is a bit weird. Now I slowly suspect that this cluster does not actually run Grid Engine; maybe it's running something called Torque (?!). The whole terminology in the man pages is a bit strange to me as a Grid Engine user; for example, they talk about "bulk jobs" instead of "array jobs". There is no mention of variables I rely on, like SGE_TASK_ID; instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands.
Also, qsub behaves differently; apparently it is not possible to specify the command line parameters via bash-script comments, etc.
There is documentation for the cluster system, but it does not say what the DRM middleware actually is; it refers to the entire DRM system simply as "qsub".
I tried
qsub --version
qsub: 1.2 2010/8/17
I am not sure what I am actually running when I invoke qsub on that cluster!
My question is: how can I find out whether I am running Grid Engine or Torque (or whatever it is), and which version?
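A few heuristics I'm planning to try, assuming the usual companion tools are installed alongside qsub:
# SGE ships qconf/qhost; Torque/PBS ships pbsnodes/qmgr
command -v qconf qhost pbsnodes qmgr
# SGE environments usually define SGE_ROOT
echo "SGE_ROOT=$SGE_ROOT"
# and the qsub man page header normally names the product
man qsub | head -n 3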
I am developing an automated system which consists of 3 parts: MySQL, bash, and launchd.
A bash script takes folders of work-related files, zips and archives them, and puts info about them into a database located on a local MAMP server.
Everything works as expected when I run the script from the terminal. But when I use launchd to run the script automatically, it finishes without errors, yet it does not put the values into the database.
I've tried logging the returned messages, but the logs end up empty, as if the command ran the way it was supposed to.
Any help would be appreciated!
.plist contents
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.adevo.ari.zip</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Volumes/Archive-Plus/B-ARCHIVE-PLUS/ZZ_UTILITY_FOLDER/Compress.sh</string>
    </array>
    <key>Nice</key>
    <integer>1</integer>
    <key>StartInterval</key>
    <integer>120</integer>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
I made this .plist file just by searching the web.
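My current suspicion is that the environment under launchd differs from my terminal session (a different PATH could mean the mysql client is never found), so I'm going to capture it with a throwaway snippet at the top of Compress.sh. The log path here is just an example:
# temporary debugging lines at the top of Compress.sh
env > /tmp/compress-env.log
echo "PATH=$PATH" >> /tmp/compress-env.log
command -v mysql >> /tmp/compress-env.log 2>&1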
The goal is to have a collection of dot files (.bashrc, .vimrc, etc.) in a central location. Once they are there, Puppet should push the files out to all managed servers.
I initially was thinking of giving users FTP access to upload their dot files, and then having an rsync cron job move them into place. However, that might not be the most elegant or robust solution, so I wanted to see if anyone else has recommendations.
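For concreteness, the naive version I had in mind is just a cron'd rsync from the upload area into the directory Puppet serves files from (both paths here are hypothetical):
# every 15 minutes, mirror uploaded dotfiles into Puppet's file store
*/15 * * * * rsync -a --delete /srv/ftp/dotfiles/ /etc/puppet/files/dotfiles/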
Possible Duplicate:
How can I handle emailed job control files for SQL Server?
How do I write a batch file / stored procedure for checking the mail in the inbox using the job manager?
I run a cron job that requests a snapshot from a remote webcam at a local address:
wget http://user:password@<camera-address>/snapshot.cgi
This creates the files snapshot.cgi, snapshot.cgi.1, snapshot.cgi.2, each time it's run.
My desired result would be files named something like file.1.jpg, file.2.jpg: basically, sequentially or date/time named files, with the correct .jpg extension instead of .cgi.
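Something like the following is what I'm after, if wget's -O is the right tool for it (the output directory is a placeholder, and % has to be escaped in a crontab):
*/5 * * * * wget -q -O "/home/me/snaps/snapshot-$(date +\%Y\%m\%d-\%H\%M\%S).jpg" "http://user:password@<camera-address>/snapshot.cgi"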
Any ideas?
My question is, is there a reason for this not to work?
Details: I have two 500 GB drives and my motherboard has RAID support, so I created a RAID1 array and booted from a Linux live medium. I then listed the disks and, apart from the obvious /dev/sda, /dev/sdb, etc., there was /dev/md126 which, I figured, was the mirrored "virtual" drive. Its size was 475 GB; I had seen that the size of the array would be smaller than 500 GB when I was creating it, so no surprise there. I did cfdisk /dev/md126, created the necessary partitions, and chose write. It's been about half an hour now, I think, and it doesn't seem like it's ever going to finish. The only thing about cfdisk in dmesg is that it's been "blocked for more than 120 seconds".
Doing fdisk -l /dev/md126 in another terminal, I see all three partitions I created and a note that "Partition 1 does not start on a physical sector boundary". The table is lost after a reboot, though.
I tried to partition /dev/sda individually, and it worked; the table was written in about a second. The "not on a physical sector boundary" message is there, too.
EDIT: I tried fdisk on /dev/sda, and then there were no messages about sector boundaries. After a reboot, I am able to use mkfs on /dev/md126p1, etc. fdisk shows that /dev/md126 has the same partitions as /dev/sda (but /dev/sdb doesn't have any).
But at some point ("writing superblock and filesystem accounting information") mkfs also blocks. Using it on sda1 results in a "partition is used by the system" error.
What could the problem be?
EDIT 2: I booted a freshly updated system from a pendrive and was able to create the partition table and filesystems on /dev/md126 without any apparent problems. Was it an issue with hardware support? My motherboard is an Asus P9X79.
Hi,
My mother has some serious issues using Windows (viruses, spyware, and so on) and I am seriously thinking about setting up Ubuntu as a replacement. (That would ease my "job" as well.)
The only concern I have is whether there is anything that can edit .docx (or .xlsx, .pptx, ...) documents on Linux. Last time I tried OpenOffice (3 years ago), it was only able to open "old" MS Office documents (.doc, .xls, ...).
Thank you very much for your answers!
I want to use highly secure encryption for zipped files on Linux/Ubuntu from the command line. What is the best command line tool to get this job done?
zip -e -P PASSWORD file1 file2 file3 file4
Or
7za a file.7z *.txt -pSECRET
What encryption is used and how secure is it?
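For comparison, I've also seen gpg suggested for this kind of job: symmetric AES-256 over a tarball instead of a zip. I don't know how it stacks up against the two commands above:
tar czf - file1 file2 file3 file4 | gpg --symmetric --cipher-algo AES256 -o files.tgz.gpg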
We have a Cisco hardware load balancer with two web servers behind it. We'd like to force some URLs to be served by only one of the machines.
Firstly, is this the job of the load balancer? Or would a better approach be to create a subdomain such as http://assets.example.com which would automatically be routed to one of the servers?
I don't even know where to begin to be honest.
Trying to use an external API that requires SSL connections, I discovered that SSL support is needed in cURL, but this (apparently) requires PHP to be recompiled and reinstalled with cURL/SSL support.
I'm not really experienced with compiling PHP, and I'm not sure our server even has make or build tools; the only luck I've had before is with RPMs.
This really isn't in my job description. Any help is most welcome!
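For reference, this is roughly how I've been checking what the current build supports (I'm assuming the curl section of php -i starts with a "curl" line):
# is the curl extension loaded, and with which SSL library?
php -m | grep -i curl
php -i | grep -i -A 3 '^curl'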
I need dual screens for my web developer job, but when I do illustrations I prefer to work on a single screen, to avoid the stretching of the workspace, which affects the tablet's precision.
Is there a way to make my tablet work only on my primary screen and, at the same time, use the mouse on both screens? I've looked through my tablet's preferences and haven't found such an option.
I use Windows XP, Bamboo Fun A5, ATI Radeon X 1050.
Thanks in advance.
"# /etc/modules: kernel modules to load at boot time."
My question is: when and where is this module loading job done?
My first guess was some init script in /etc/init.d/, but grep found nothing. Then I thought it might be the initial ramdisk, but after decompressing it, I found conf/modules, which is different from /etc/modules.
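Specifically, here is roughly what I tried (maybe I'm grepping the wrong places):
# search the init scripts for a reference to /etc/modules
grep -rl '/etc/modules' /etc/init.d/ /etc/init/ 2>/dev/null
# unpack the initramfs and look at its module list
mkdir /tmp/ird && cd /tmp/ird
zcat /boot/initrd.img-$(uname -r) | cpio -idm
cat conf/modules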
Any ideas? Thanks.
I have installed FreeNAS on a test server with 3x 1TB drives, set up in raidz. I tried to offline one of the disks (from the FreeNAS web-ui), and the array became degraded, as I think it should.
The problem is that the array becomes inaccessible after that. I thought a raid like this should be able to keep running fine with one of the disks missing. At least, very soon after I offlined and pulled out the disk, the iSCSI share disappeared from an ESXi host's datastores. I also ssh'd into the FreeNAS server and tried just executing ls /mnt/raid (/mnt/raid/ being the mount point). The whole terminal froze, not accepting ^C or anything.
# zpool status -v
pool: raid
state: DEGRADED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
see: http://www.sun.com/msg/ZFS-8000-HC
scrub: none requested
config:
NAME                                            STATE     READ WRITE CKSUM
raid                                            DEGRADED     1    30     0
  raidz1                                        DEGRADED     4    56     0
    gptid/c8c9e44c-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    60     0
    gptid/c96f32d5-08e1-11e2-9ba6-001b212a83ea  ONLINE       3    63     0
    gptid/ca208205-08e1-11e2-9ba6-001b212a83ea  OFFLINE      0     0     0
errors: Permanent errors have been detected in the following files:
/mnt/raid/
raid/iscsivol:<0x0>
raid/iscsivol:<0x1>
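For reference, I believe the offline action from the web-ui is equivalent to this (using the gptid shown as OFFLINE above):
zpool offline raid gptid/ca208205-08e1-11e2-9ba6-001b212a83ea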
Have I misunderstood how raidz works, or is there something else going on? It would not be nice to have the same thing happen on a production system...