Search Results

Search found 24505 results on 981 pages for 'bash script'.


  • Why don't I have write permission to my vmware virtual network device?

    - by Robert Martin
    I want to allow my VMware machine to force the virtual network it's on into promiscuous mode so I can play around with honeyd. I received an error message that told me to go to http://vmware.com/info?id=161 to allow this behavior. Based on their advice, I did:
      $ groupadd promiscuous
      $ cat /etc/group | grep promiscuous
      promiscuous:x:1002:robert
      $ usermod -a -G promiscuous robert
      $ id robert
      uid=1000(robert) gid=1000(robert) groups=1000(robert),....,1002(promiscuous)
      $ chgrp newgroup /dev/vmnet8
      $ chmod g+rw /dev/vmnet8
      $ ls -l /dev/vmnet8
      crw-rw---- 1 root promiscuous 119, 8 2012-03-29 10:29 /dev/vmnet8
    It looks like I gave read/write permission to the promiscuous group and added myself to that group. Except that VMware still gives me an error message saying I cannot enter promiscuous mode. To try out the group permissions, I tried:
      $ echo "1" > /dev/vmnet8
      bash: /dev/vmnet8: Permission denied
    That really surprised me: it makes me think that I still haven't properly given myself the correct permissions... What am I missing?
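    One possible explanation, not part of the original question: usermod -a -G only takes effect for new login sessions, so the shell running the echo test may not actually be in the promiscuous group yet. A minimal check, reusing the group name from the transcript above:
      # Show the groups of the *current* session (as opposed to what /etc/group says)
      id
      groups

      # Pick up the new group without logging out and back in, then retry the write test
      newgrp promiscuous
      echo "1" > /dev/vmnet8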

    Read the article

  • How to run some commands after booting from ArchLinux disk? Or how to change some settings in .iso before booting?

    - by Alexander Ovchinnikov
    How can I install Arch Linux with the traditional installer when I only have ssh access to the server? There is a nice guide: https://wiki.archlinux.org/index.php/Install_from_SSH I tried to test this on my home VPS:
      1. Start the VPS with any Linux bootable CD and log in to the remote server (VPS)
      2. wget http://mirrors.kernel.org/archlinux/iso/latest/archlinux-2010.05-netinstall-x86_64.iso
      3. dd if=archlinux-2010.05-netinstall-x86_64.iso of=/dev/sda
      4. reboot
    I can see that it works, but there is no ssh connection afterwards. I need to make a script which will run these commands after the reboot (a sketch of one approach follows below):
      1. aif -p partial-configure-network (and enter some information about my server, IP etc.)
      2. /etc/rc.d/sshd start (to start sshd)
      3. echo "sshd: ALL" >> /etc/hosts.allow (to allow me to log in to the server; by default everything is denied)
      4. passwd (by default it is empty, and I can't log in via ssh with an empty password)
    Can I edit the .iso, or maybe /dev/sda? Maybe I need to write a script which will start after the system boots and do these things, or maybe I can set these settings by default so the system starts with the correct settings (I think that is possible at least for items 2 and 3). Thank you!
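    One way to run those steps automatically after the reboot is a first-boot script baked into the written image and hooked from something like /etc/rc.local. This is a sketch under assumptions about the 2010-era Arch initscripts, and it replaces the interactive aif network step with static ip commands; the addresses, interface name and password are placeholders:
      #!/bin/bash
      # firstboot.sh - hypothetical; would need to be copied onto /dev/sda's
      # filesystem and invoked from /etc/rc.local (or an equivalent boot hook).

      # Bring the network up non-interactively instead of the interactive aif prompt
      ip addr add 192.0.2.10/24 dev eth0
      ip link set eth0 up
      ip route add default via 192.0.2.1

      # Allow remote logins and start sshd
      echo "sshd: ALL" >> /etc/hosts.allow
      /etc/rc.d/sshd start

      # Root's password is empty by default, which ssh rejects
      echo 'root:changeme' | chpasswd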

    Read the article

  • Easiest linux distro creation

    - by QAH
    Hello everyone! I have been trying to create my own customized distro of Linux (preferably some kind of Debian system like Debian, Ubuntu, Knoppix, etc.). I want to make it specifically for playing games. This is just a pet project for me, and while I know Linux pretty well (bash, gcc, g++, gdb, etc.), I'm not that great with the kernel, so making my own distro from scratch is out of the question. I went on to try creating remasters of Knoppix, and it worked for me, but it was a very long process, since Knoppix is meant to be a live CD and I was running it in VirtualBox. So what is the easiest and fastest method of making your own distro? Also, can I install Ubuntu or some other Linux to a computer, make changes to it, and then make a distro off of that installation? Please help me with this. Thanks

    Read the article

  • sudo: apache restarting a service on CentOS

    - by WaveyDavey
    I need my web app to restart the dansguardian service (on CentOS), so it needs to run '/sbin/service dansguardian restart'. I have a shell script in /home/topological called apacherestart.sh which does the following:
      #!/bin/sh
      id=`id`
      /sbin/service dansguardian restart
      r=$?
      return $r
    This runs ok (there is a logger statement in the script, for testing output to syslog, so I know it's running). To make it run, I put this in /etc/sudoers:
      User_Alias APACHE=www
      # Cmnd alias specification
      Cmnd_Alias HTTPRESTART=/home/topological/apacherestart.sh,/sbin/e-smith/db,/etc/rc7.d/S91dansguardian
      # Defaults specification
      # User privilege specification
      root ALL=(ALL) ALL
      APACHE ALL=(ALL) NOPASSWD: HTTPRESTART
    So far so good, but the service does not restart. To test this I created a user david and fudged the uid/gid in /etc/passwd to be the same as www:
      www:x:102:102:e-smith web server:/home/e-smith:/bin/false
      david:x:102:102:David:/home/e-smith/files/users/david:/bin/bash
    then logged in as david and tried to run apacherestart.sh. The problem I get is:
      /etc/rc7.d/S91dansguardian: line 51: /sbin/e-smith/db: Permission denied
    even though S91dansguardian and db are in the sudoers command list. Any ideas?
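    One observation, not from the original thread: listing a command in sudoers does not by itself make it run as root - it only lets "sudo <command>" run without a password. Running ./apacherestart.sh directly as david stays at david's privileges, so the children it spawns (service -> S91dansguardian -> /sbin/e-smith/db) hit Permission denied. A minimal check, assuming the sudoers entries above:
      # Fails: nothing here is elevated, sudoers never comes into play
      ./apacherestart.sh

      # Works (if sudoers is right): the whole process tree runs as root
      sudo /home/topological/apacherestart.sh
    The web app would likewise need to invoke it through sudo, e.g. shell_exec('sudo /home/topological/apacherestart.sh') from PHP.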

    Read the article

  • Weblogic 12 and the CLI

    - by Rig
    I am working with WebLogic on Fedora 19 and am attempting to use the CLI tools, to no avail. It appears these were deprecated as far back as WebLogic 9; however, I was assured they are still there and still functional. As it stands, I have a need to use them if they are in fact functional. What appears to be the case is that the weblogic jar file is not being loaded correctly onto the classpath by this script, since it also fails after trying to manually add the jars via java -classpath <path>. I've spent a lot of time so far trying to get this sorted out, but I'm wondering what I may be missing here. My Java runtime is version 7, Fedora is 19, and WebLogic is 12.1. When I run env after running the provided set-environment script, it appears to have had no impact from what I can see. (I'll add the output later when I get back to that machine.) I'm mostly a Windows developer, so some of this is a topic I'm not well versed in.
      [foo@localhost bin]$ ./setDomainEnv.sh
      [foo@localhost bin]$ java weblogic.Admin -url t3://localhost:7001 -username <username> -password <password> HELP
      Error: Could not find or load main class weblogic.Admin
      [foo@localhost bin]$ ls -ltar
      total 72
      drwxr-x---  2 foo foo  4096 Jun  2 07:36 service_migration
      drwxr-x---  2 foo foo  4096 Jun  2 07:36 server_migration
      drwxr-x---  2 foo foo  4096 Jun  2 07:36 nodemanager
      -rwxr-x---  1 foo foo  1267 Jun  2 07:36 setStartupEnv.sh
      -rwxr-x---  1 foo foo  1105 Jun  2 07:36 startNodeManager.sh
      -rwxr-x---  1 foo foo  5765 Jun  2 07:36 startWebLogic.sh
      -rwxr-x---  1 foo foo  2001 Jun  2 07:36 stopWebLogic.sh
      -rwxr-x---  1 foo foo  3170 Jun  2 07:36 startManagedWebLogic.sh
      -rwxr-x---  1 foo foo  2776 Jun  2 07:36 stopManagedWebLogic.sh
      -rwxrwxrwx  1 foo foo 14136 Jun  2 07:36 setDomainEnv.sh
      -rwxr-x---  1 foo foo  2060 Jun  2 07:36 startComponent.sh
      drwxr-x---  5 foo foo  4096 Jun  2 07:36 .
      -rwxr-x---  1 foo foo  1726 Jun  2 07:36 stopComponent.sh
      drwxr-x--- 12 foo foo  4096 Jun  2 07:45 ..
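    One likely culprit, offered as an assumption rather than from the thread: running ./setDomainEnv.sh executes it in a child shell, so the CLASSPATH it exports disappears when that shell exits - which would also explain why env shows no change. The script has to be sourced into the current shell. Whether weblogic.Admin still ships in 12.1 is a separate question; the credentials below are placeholders:
      # Source the environment script instead of executing it
      . ./setDomainEnv.sh        # or: source ./setDomainEnv.sh

      # The WebLogic jars should now be on the classpath of *this* shell
      echo $CLASSPATH | tr ':' '\n' | grep -i weblogic

      java weblogic.Admin -url t3://localhost:7001 -username weblogic -password welcome1 HELP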

    Read the article

  • python mysqldb - mysql server gone away - can't reconnect

    - by david.barkhuizen
    Hi Folks, When attempting to import a bunch of data into MySQL tables using Python and MySQLdb, I run into the error '2006 - MySQL server has gone away', and then I am unable to reconnect again within the script. I am initially re-using a connection object across transactions (delineated by conn.commit()); when I first encounter this exception, if I create a new connection by calling MySQLdb.connect(), this new connection also fails with the same exception. This error does not occur immediately - I can pump a fair amount of data into the db, but then it faithfully occurs after I have inserted a couple thousand records, so roughly once the db has committed a certain transaction volume, it always falls over like this. If I rerun the script, WITHOUT restarting the db server, it resumes where it left off, pumps in some data, then falls over again. Before recommending changes to time-out settings, does anyone know why I am not able to establish a new connection after the initial failure? - Even if I try a couple of times, waiting a couple of seconds between each. (btw, I'm running Windows 7, MySQL server 5.1.48, MySQLdb 1.2.3.gamma.1, Python 2.6)
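    A hedged aside, not from the original post: error 2006 during bulk inserts is often caused by a single packet exceeding max_allowed_packet rather than by a timeout, and an undersized value would keep killing fresh connections as soon as the same oversized statement is retried on them. A quick check from a shell (the 64M figure is just an example):
      # Show the current limit on the server
      mysql -u root -p -e "SHOW VARIABLES LIKE 'max_allowed_packet'"

      # If it is small (the old default is 1M), raise it under [mysqld] in my.ini /
      # my.cnf and restart the server, e.g.:
      #   max_allowed_packet = 64M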

    Read the article

  • Process runs slower as a scheduled task than it does interactively

    - by Charlie
    I have a scheduled task which is very CPU- and IO-intensive, and takes about four hours to run (building source code, if you're curious). The task is a PowerShell script which spawns various sub-processes to do its work. When I run the same process interactively from a PowerShell prompt, as the same user account, it runs in about two and a half hours. The task is running on Windows Server 2008 R2. What I want to know is why it takes so much longer to run as a scheduled task - more than an hour longer. One thing I noticed is that the task scheduler runs at Below-Normal priority, so when my task starts, it inherits the same lowered priority. However, I've updated the script to set the PowerShell process priority back to Normal, and it still takes just as long. Anybody have an idea what could be different between the two scenarios? I've ruled out differences in processor and IO load - this task is the only thing the system is used for, so there's nothing else running that could be competing for resources.

    Read the article

  • TeamViewer installed but not running on CentOS

    - by Root
    I followed these steps (http://www.tecmint.com/how-to-install-teamviewer-8-on-linux-distributions/) and installed TeamViewer on my CentOS 5 server over SSH without any errors, but when I try to start TeamViewer I just get the following output, and I can't find the TeamViewer username and password needed to connect to my server.
      root@vps [~]# teamviewer
      Init...
      Checking setup...
      Launching TeamViewer...
      root@vps [~]# teamviwer -info
      -bash: teamviwer: command not found
      root@vps [~]# /usr/bin/teamviewer -info
      Init...
      Checking setup...
      Launching TeamViewer...
      root@vps [~]# whereis teamviewer
      teamviewer: /usr/bin/teamviewer /etc/teamviewer
      root@vps [~]# /usr/bin/teamviewer -help
      Init...
      Checking setup...
      Launching TeamViewer...
      root@vps [~]#
    Can anyone help me find my TeamViewer ID and password so I can connect to my server? Thanks.

    Read the article

  • sudo -i not behaving as expected

    - by jpdoyle
    Why is the sudo -i command not setting TERM, PATH, HOME, SHELL, LOGNAME, USER and USERNAME on my fresh Ubuntu 12.04.1 LTS, as described in the manual?
      # sudo -u johnny -i echo $HOME && echo $USER
      /root
      root
    Using -H is not setting $HOME either. And my user does exist, with a home directory:
      # cat /etc/passwd
      [..]
      johnny:x:1000:1000::/home/johnny:/bin/bash
    Update: Why am I having this issue? Because I am trying to create an Ubuntu upstart job for multiple unicorn applications, and I am using a user installation of RVM + Bundler: without $HOME being properly evaluated, RVM does not find ~/.rvm.
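    A note on what is probably happening (an observation, not from the original post): $HOME and $USER on that command line are expanded by the calling root shell before sudo ever runs, so the output says nothing about the environment sudo -i sets up, and the part after && is not run through sudo at all. Deferring the expansion shows the difference:
      # Expanded by the calling shell - prints root's value regardless of sudo:
      sudo -u johnny -i echo $HOME

      # Expanded inside the shell that sudo starts for johnny:
      sudo -u johnny -i sh -c 'echo $HOME && echo $USER'

      # Or just dump what the login environment really contains:
      sudo -u johnny -i env | grep -E '^(HOME|USER|LOGNAME|SHELL)='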

    Read the article

  • Force Windows Local Subnet Traffic through a Gateway

    - by Beerey
    Hi all, We are attempting to route all traffic from a certain machine through a gateway. This works ok for traffic destined for subnets outside of the machine's subnet. However, traffic to machines in the same subnet as the source machine goes through an On-link gateway in Windows. This means that the default gateway is ignored, and traffic within the subnet (for example, 192.168.50.10 - 192.168.50.11) flows directly:
      Destination     Netmask          Gateway    Interface        Metric
      192.168.50.0    255.255.255.0    On-link    192.168.50.214   276
    This route can be deleted from Windows, but when the machine is rebooted it always comes back. Some constraints:
      - Adding a persistent static route to the gateway with a lower metric doesn't work, since Windows will still try the On-link gateway after the persistent route fails.
      - Putting each machine in a VLAN isn't an option due to the setup we have.
      - Adding a startup script to delete the gateway isn't a great option either, since users will have full admin access to the machine and might disable the script.
      - We cannot transparently intercept all network traffic on the subnet using gratuitous ARPs or transparent proxying, since there are other machines on the subnet which use a different gateway.
    The only way we have gotten it to work is by adding a persistent route to the gateway for the subnet traffic, and deleting the On-link route on reboot. The questions, then: Is there a way to permanently remove this On-link route? If not, is there a way to otherwise force even local subnet traffic to go through a gateway?

    Read the article

  • wget not working with domain on local machine

    - by user568829
    Basically, I have some PHP scripts that need to be run as cron jobs. Let's say the script needing to be run is:
      http://admin.somedomain.com/cron_jobs/get_stats
    If I run the script from the local machine it gives me a 404 Not Found error, so I entered the following into /etc/hosts:
      XX.XX.XX.45 admin.somedomain.com
    Now wget works fine from the local machine to that domain. However, when I restart Apache, that domain no longer works. Here is the config for that site in /etc/apache2/sites-available:
      NameVirtualHost XX.XX.XX.45:80
      <VirtualHost XX.XX.XX.45:80>
          ServerName admin.somedomain.com
          DocumentRoot /var/www/admin.somedomain.com/
          <Directory "/var/www/admin.somedomain.com">
              allowoverride all
              Options Indexes
              order deny,allow
              allow from all
          </Directory>
          ErrorLog /var/log/apache2/admin.somedomain.com-error_log
          CustomLog /var/log/apache2/admin.somedomain.com-access_log combined
      </VirtualHost>
    It just goes to the default site config, showing "It Works". If I take out that setting in /etc/hosts and restart Apache, the website at that domain works fine again. Can anyone point me in the right direction here? Thanks
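    An alternative worth noting (my suggestion, not from the original question): the cron job can skip /etc/hosts entirely by connecting to the server's IP and sending the Host header explicitly, so the right vhost is selected without touching name resolution or the Apache config:
      # wget supports injecting arbitrary headers:
      wget --header="Host: admin.somedomain.com" -O /dev/null \
           "http://XX.XX.XX.45/cron_jobs/get_stats"

      # curl equivalent:
      curl -H "Host: admin.somedomain.com" "http://XX.XX.XX.45/cron_jobs/get_stats"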

    Read the article

  • How to register an agent with launchd

    - by Konrad Rudolph
    I’m unable to schedule a periodic launch with launchctl/launchd on OS X (Leopard). Basically, I’m unable to find a step-by-step list of instructions on the web, and the intuitive approach doesn’t work. The sync.plist file:
      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
          <key>Label</key>
          <string>net.madrat.utils.sync</string>
          <key>Program</key>
          <string>rsync</string>
          <key>ProgramArguments</key>
          <array>
              <string>-ar</string>
              <string>/path/to/folder/</string>
              <string>/path/to/backup/</string>
          </array>
          <key>StartInterval</key>
          <integer>7200</integer>
      </dict>
      </plist>
    I’ve put this file inside the path ~/Library/LaunchAgents. Next, I registered it using
      launchctl load ~/Library/LaunchAgents/sync.plist
    Finally, to test that it works, I started the job:
      launchctl start net.madrat.utils.sync
    Nothing happened. Manually executing the rsync command in the terminal yields the expected result. I’m fairly sure that the job was registered correctly, because if I try to start a non-existent job I get an error message (which I didn’t get from the above command). What did I do wrong?
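    A hedged guess at the cause, not from the original post: launchd does not search the shell's PATH, so a bare "rsync" in the Program key is likely never found - it wants an absolute path (and when Program is present, ProgramArguments is treated as the full argv, so its first element conventionally repeats the program name). Some quick checks, reusing the label and plist path from above:
      # Find the absolute path to put in <Program> (typically /usr/bin/rsync on Leopard)
      which rsync

      # After editing the plist, reload and retrigger the job
      launchctl unload ~/Library/LaunchAgents/sync.plist
      launchctl load   ~/Library/LaunchAgents/sync.plist
      launchctl start  net.madrat.utils.sync

      # The second column here is the job's last exit status; non-zero means it ran and failed
      launchctl list | grep net.madrat.utils.sync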

    Read the article

  • how to bypass internal DNS?

    - by fabjoa
    This is about Ubuntu but should be pretty much the same on all Linux flavors. Let's say I add an entry to my /etc/hosts such as
      127.0.1.12 facebook.com
    and an Apache virtual host such as
      <VirtualHost 127.0.1.12>
          ServerName facebook.com
          DocumentRoot /var/www
      </VirtualHost>
    When I open my browser and send a GET request to facebook.com, Firefox will browse my /var/www folder. Question: How could I fetch (i.e., using wget in bash) the real facebook.com domain, without erasing the entry in /etc/hosts or my Apache VirtualHost? In other words, how could I bypass the internal DNS?
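    One way to do it (a sketch of my own, not from the thread): resolve the name against an external resolver, which ignores /etc/hosts, then fetch by IP while still sending the real name in the Host header. The 8.8.8.8 resolver is just an example:
      # Resolve the real address, bypassing /etc/hosts and the local resolver order
      REAL_IP=$(dig +short facebook.com @8.8.8.8 | head -n1)

      # Fetch by IP but present the proper Host header to the remote server
      wget --header="Host: facebook.com" "http://$REAL_IP/"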

    Read the article

  • Monitor Exchange Email Address and run scripts

    - by WernerCD
    Okay... not sure how "out there" this thought is... Right now, to send a pager message (aka text message), a user logs into our AS400, logs into the program, enters a user name and message, and hits F10 to send. With a little looking, it seems that you can run remote commands on the AS400 via FTP. So I'm working on building a script (batch or otherwise) that, given two parameters (user, message), will FTP into the AS400 and run a remote command:
      c:\>ftp server
      user: admin
      password: *****
      ftp> quote rcmd SNDPGRMSG TOPGR(JDOE) MSG('This is a Test')
      ftp> quit
    So... what I want to do is:
      1. set up an email account on our Exchange server
      2. monitor the account for incoming mail
      3. upon receipt of incoming mail, parse it... say, for example, the subject is defined as "Recipient" and the email text is defined as "Pager message"
      4. run a batch that uses the above-mentioned TOPGR and MSG as parameters... via FTP to the AS400
      5. mark the email as "read"
    The main thing I'm not sure about is monitoring an Exchange account and running a script on incoming emails. I'm sure what I want to do is possible... but where would I start?
    EDIT - Clarification: The main reasons for using this four-part system are logging (messages sent via this are logged and reported by the AS400 program) and the existing scheduler for redirecting pages (for example, the weekly on-call person = TOPGR(oncall) gets updated by the AS400 program). I'm also trying to remove duplicate work. If I can get this setup working, I can redirect pages from OTHER systems into this one. I then don't have to update 2, soon to be 3, systems with current phone numbers, carriers, on-call schedules, etc. Systems #2 and #3 can just "email" [email protected].
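    Since the question says "batch or otherwise", here is a shell sketch of the FTP leg only - the host name and credentials are placeholders, and quoting matters because the message ends up inside MSG('...'):
      #!/bin/bash
      # send_page.sh <as400-user-to-page> <message>
      TO="$1"
      MSG="$2"

      # -i: no per-file prompts, -n: no auto-login (USER/PASS are sent explicitly below)
      printf '%s\n' \
        "user admin secretpassword" \
        "quote rcmd SNDPGRMSG TOPGR($TO) MSG('$MSG')" \
        "bye" | ftp -in as400.example.com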

    Read the article

  • How does a vsftpd server work and how to configure it?

    - by ysap
    I was asked to configure an FTP server based on the vsftpd package. The server is running on a remote machine to which I have superuser-privileged access. Being unfamiliar with the mechanics of FTP servers, I tried to figure out how user ftp accounts are configured. The previous maintainer used a shell script, which works on a list that we maintain to track user accounts and passwords, to configure the ftp accounts. From reading the script, I see that he generates a list of usernames and passwords and actually creates a user account on the Linux machine. This means that for each user that we configure in the list, a new user account is added with the adduser command:
      adduser --home /home/ftp --no-create-home $user
    (but without a private /home/username directory - /home/ftp is used instead). Each of these users can log into his account using the ssh command. This fact seems a little strange to me, as I'd think that the ftp accounts should be decoupled from the Ubuntu user accounts. As another side effect, when a user connects using a web browser, he is connected to the /home/ftp directory. However, he can then use the "Up to a higher level directory" link to go up and effectively have access to all of our system. So, the questions are: Is this really how an FTP server is supposed to work in terms of configuring ftp accounts? If not, how do I configure the vsftpd server in a way that I have only the superuser Ubuntu account on that machine and all ftp accounts are... just FTP user accounts? Additionally, these ftp accounts should be configurable in terms of how and what they are allowed to access.
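    A sketch of the usual way to address both symptoms, offered as an assumption rather than a full recipe (the option and shell names are standard on Ubuntu): jail FTP users into their home directory via vsftpd's chroot option, and give the accounts a non-login shell so they cannot be used over ssh.
      # In /etc/vsftpd.conf - stops the "up to a higher level directory" escape:
      #   chroot_local_user=YES

      # When creating accounts, deny an interactive shell:
      adduser --home /home/ftp --no-create-home --shell /usr/sbin/nologin "$user"
      # (if vsftpd's PAM stack checks for a valid shell, /usr/sbin/nologin may also
      #  need to be listed in /etc/shells for FTP logins to keep working)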

    Read the article

  • PostgreSQL timezone does not match system timezone

    - by Martin C.
    I have several PostgreSQL 9.2 installations where the timezone used by PostgreSQL is GMT, despite the entire system being "Europe/Vienna". I double-checked that postgresql.conf does not contain a timezone setting, so according to the documentation it should fall back to the system's timezone. However:
      # su -s /bin/bash postgres -c "psql mydb"
      mydb=# show timezone;
       TimeZone
      ----------
       GMT
      (1 row)

      mydb=# select now();
                    now
      -------------------------------
       2013-11-12 08:14:21.697622+00
      (1 row)
    Any hints where the GMT timezone could come from? The system user does not have TZ set, and /etc/timezone and /etc/timeinfo seem to be configured correctly:
      # cat /etc/timezone
      Europe/Vienna
      # date
      Tue Nov 12 09:15:42 CET 2013
    Any hints are appreciated, thanks in advance!
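    A possible explanation, hedged because it is from memory rather than from the thread: starting with 9.2, PostgreSQL no longer probes the system time zone at server start - initdb is supposed to write the zone into postgresql.conf, and an absent setting now simply means GMT. So "no timezone line" may itself be the cause. Checking and fixing it (the data directory path is a placeholder):
      # See whether initdb wrote anything at all
      grep -E -i '^[[:space:]]*#?[[:space:]]*(log_)?timezone' /var/lib/pgsql/9.2/data/postgresql.conf

      # Set it explicitly, e.g.  timezone = 'Europe/Vienna'  and  log_timezone = 'Europe/Vienna',
      # then reload and re-check:
      su -s /bin/bash postgres -c "psql -c 'SELECT pg_reload_conf();'"
      su -s /bin/bash postgres -c "psql mydb -c 'SHOW timezone;'"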

    Read the article

  • FFMpeg installation problem

    - by Thomas
    Hi, I have a problem installing ffmpeg. I followed this url: https://www.crucialp.com/resources/tutorials/server-administration/how-to-install-ffmpeg-ffmpeg-php-mplayer-mencoder-flv2tool-LAME-MP3-Encoder-libog.php
      Setting up repositories
      core                 100% |=========================| 1.1 kB    00:00
      rpmforge             100% |=========================| 1.1 kB    00:00
      Error: Cannot find a valid baseurl for repo: updates
      [root@02e7709 src]# yum install subversion ruby ncurses-devel
      Loading "installonlyn" plugin
      Setting up Install Process
      Setting up repositories
      core                 100% |=========================| 1.1 kB    00:00
      rpmforge             100% |=========================| 1.1 kB    00:00
      Error: Cannot find a valid baseurl for repo: updates
      [root@02e7709 src]# svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg
      -bash: svn: command not found
    The svn command is not found, and yum throws the error "Cannot find a valid baseurl for repo: updates". I am installing on Fedora Core 6, 64-bit.
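    A hedged guess at the underlying problem, not from the original post: Fedora Core 6 has long been end-of-life, so the stock mirrorlist for the "updates" repo no longer resolves, which is why every yum run dies on the baseurl error before it can install subversion. Pointing the repo files at the Fedora archive usually revives yum - the archive URL below is an assumption and should be verified before relying on it:
      # Disable the dead mirrorlist lines and switch to explicit baseurl entries
      sed -i 's/^mirrorlist=/#mirrorlist=/' /etc/yum.repos.d/fedora*.repo
      # baseurl along the lines of:
      #   http://archives.fedoraproject.org/pub/archive/fedora/linux/core/6/$basearch/os/
      yum clean all
      yum install subversion   # provides the missing svn command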

    Read the article

  • ip namespace non-root shell

    - by user2730940
    I am trying to run an ssh command in another ip namespace. I can do it right now, but it runs as root; I want to run it as a normal user. I want to know if there is a way to enter a non-root shell in another network namespace. I know you can use this to enter a root shell in another namespace:
      sudo ip netns exec <namespace> bash
    Alternatively, is there a way to run single commands as a non-root user? I know you can run commands as root with this:
      sudo ip netns exec <namespace> <command>
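    One approach (my sketch, with "myns" as a placeholder namespace name): only the namespace switch needs root, so drop back to the normal user once inside it.
      # Root enters the namespace, then a second sudo drops to the calling user
      sudo ip netns exec myns sudo -u "$USER" bash

      # Same idea for a single command, e.g. ssh
      sudo ip netns exec myns sudo -u "$USER" ssh user@host

      # su works too if nesting sudo is not desirable
      sudo ip netns exec myns su - "$USER"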

    Read the article

  • memcached append() php ubuntu - bad protocol

    - by awongh
    I am running Ubuntu Gutsy (7.10) and PHP 5, and I am trying to get memcached running locally. I installed everything as per the docs: memcached daemon, PHP PECL extension, libevent, etc. But now I can only run half of the example script for memcached append():
      <?php
      $m = new Memcached();
      $m->addServer('localhost', 11211);
      $m->setOption(Memcached::OPT_COMPRESSION, false);
      $m->set('foo', 'abc');
      $m->append('foo', 'def');
      var_dump($m->get('foo'));
      ?>
    The script terminates at append() with an RES_BAD_PROTOCOL error message. It still runs the get(). I don't know why memcached would otherwise be working fine (connect, set, get - with the correct value of 'abc') and not work for append; it also doesn't work with prepend. I believe I have the setup correct, but I am not sure. Maybe there are compatibility problems between the versions of the dependencies? Thanks much
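    A hedged hunch, not from the thread: append/prepend need a reasonably recent memcached server (they arrived around 1.2.4, if memory serves), and the daemon packaged for an older Ubuntu release may simply predate them - which would explain set/get working while append returns a protocol error. Checking the running server's version from the shell:
      # Ask the daemon directly (Debian/Ubuntu netcat; -q1 closes the connection after a second)
      echo version | nc -q1 localhost 11211

      # If it is older than the append/prepend support, upgrade the daemon and restart it
      sudo /etc/init.d/memcached restart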

    Read the article

  • Default or fink python and lxml under 10.6.8

    - by songdogtech
    Ah, confusion. I'm trying to install a python library called lxml, as needed by a python script. I've been through numerous SU questions and answers but haven't been able to make much progress. I run easy_install lxml and get:
      Processing lxml-3.0.1-py2.6-macosx-10.6-universal.egg
      lxml 3.0.1 is already the active version in easy-install.pth
      Using /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg
      Processing dependencies for lxml
      Finished processing dependencies for lxml
    but when I run my script, I get:
      File "scraper.py", line 3, in <module>
        import lxml.html
      File "/Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/html/__init__.py", line 42, in <module>
        from lxml import etree
      ImportError: dlopen(/Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so, 2): Symbol not found: _htmlParseChunk
        Referenced from: /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so
        Expected in: flat namespace
        in /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so
    I think that maybe I'm not using the correct python install? I've installed python with fink, but should I use OS X's python? This is in my .profile:
      test -r /sw/bin/init.sh && . /sw/bin/init.sh
    which points to the fink install. echo $PATH gives me:
      /sw/bin:/sw/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin:/usr/X11R6/bin
    Should I change that to point to Snow Leopard's python (which is 2.6.1)? In Library/, there is: which are the lxml libraries I need, it appears, as well as requests. And whereis python gives me /usr/bin/python. What do I do? How do I get python to use these libraries? And which python?
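    A couple of diagnostic commands that may help untangle this (my suggestion; the missing _htmlParseChunk symbol is provided by libxml2, so the compiled etree.so is most likely being loaded against an older libxml2 than the one it was built for):
      # Which interpreters are on PATH, and which one actually runs?
      which -a python
      python -c "import sys; print sys.prefix; print sys.path"

      # Which libxml2 is the compiled extension linked against? (path copied from the traceback)
      otool -L /Library/Python/2.6/site-packages/lxml-3.0.1-py2.6-macosx-10.6-universal.egg/lxml/etree.so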

    Read the article

  • Why do weekly tasks created via PowerShell using a different user fail with error 0x41306

    - by Danny Tuppeny
    We have some scripts that create scheduled jobs using PowerShell as part of our application. When testing them recently, I noticed that some of them always failed immediately, and no output is ever produced (they don't even appear in the Get-Job list). After many days of tweaking, we've managed to isolate it to any jobs that are set to run weekly. Below is a script that creates two jobs that do exactly the same thing. When we run this on our domain and provide credentials of a domain user, then force both jobs to run in the Task Scheduler GUI (right-click - Run), the daily one runs fine (0x0 result) and the weekly one fails (0x41306). Note: if I don't provide the -Credential param, both jobs work fine. The jobs only fail if the task is both weekly and running as this domain user. I can't find information on why this is happening, nor think of any reason it would behave differently for weekly jobs. The "History" tab in the Task Scheduler has almost no useful information, just "Task stopping due to user request" and "Task terminated", both of which have no useful info:
      Task Scheduler terminated "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" instance of the "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" task.
      Task Scheduler stopped instance "{eabba479-f8fc-4f0e-bf5e-053dfbfe9f62}" of task "\Microsoft\Windows\PowerShell\ScheduledJobs\Test1" as request by user "MyDomain\SomeUser".
    What's up with this? Why do weekly tasks run differently, and how can I diagnose this issue? This is PowerShell v3 on Windows Server 2008 R2. I've been unable to reproduce this locally, but I don't have a user set up in the same way as the one in our production domain (I'm working on this, but I wanted to post this ASAP in the hope someone knows what's happening!).
      Import-Module PSScheduledJob
      $Action = { "Executing job!" }
      $cred = Get-Credential "MyDomain\SomeUser"

      # Remove previous versions (to allow re-running this script)
      Get-ScheduledJob Test1 | Unregister-ScheduledJob
      Get-ScheduledJob Test2 | Unregister-ScheduledJob

      # Create two identical jobs, with different triggers
      Register-ScheduledJob "Test1" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Weekly -At 1:25am -DaysOfWeek Sunday)
      Register-ScheduledJob "Test2" -ScriptBlock $Action -Credential $cred -Trigger (New-JobTrigger -Daily -At 1:25am)

    Read the article

  • Server Performance

    - by sb12
    I know very little about performance tuning of servers, so I thought I'd put this up here as I start some research on it, just to get some direction. I am in the process of migrating from my old server to a new one - both are 64-bit machines. One is a few years old, the other brand new (PowerEdge R410).
    The old server spec is: 2 CPUs, 3.4 GHz Pentiums, 8G of RAM, Fedora 11 currently installed.
    The new server spec is: 16 CPUs, 3.2 GHz Xeon, 16G of RAM, CentOS 6.2 installed. Also, RAID10 is on the new server - no RAID on the old one.
    Both servers currently have the same database (MySQL) with the same data migrated. I wrote a Perl script that simply steps through each row of a table in the database (about 18000 rows) and updates a value in that row. Every row in the table is updated. Out of curiosity I ran this Perl script on both machines, just to see how the new server would perform vs. the old one, and it produced interesting results: the old server was twice as fast as the new one to complete. Looking at the database, both are configured exactly the same (the new one being a dump of the old one...). Anyone have any ideas why this would be, given the hardware gap between the two? As I said, I'm about to start some digging, but thought I'd put this up here to maybe get some good direction... Many thanks in advance.
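    A hedged line of investigation, not from the original post: a row-at-a-time UPDATE loop is usually bound by the cost of flushing each commit to disk, not by CPU count, and a new RAID controller that honours cache flushes can easily be slower per commit than an old single disk that does not. Two quick comparisons on both boxes (standard MySQL variables), plus one script-side change worth trying:
      # Per-commit flush behaviour - the biggest factor for this workload
      mysql -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'"
      mysql -e "SHOW VARIABLES LIKE 'sync_binlog'"

      # Also check whether the R410's RAID controller has its battery-backed
      # write-back cache enabled - the vendor tool and flags vary, so no command here.

      # Script-side: wrapping all ~18000 updates in one transaction sidesteps
      # most of the per-commit cost regardless of hardware.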

    Read the article

  • "setpci: command not found" in CentOS

    - by spoon16
    I'm trying to configure my Mac Mini running CentOS 5.5 to start automatically when power is restored after a power loss. I understand the following command has to be executed:
      setpci -s 0:1f.0 0xa4.b=0
    When I run that command on my machine, though, I get bash: setpci: command not found. Is there a package I need to install via yum or something? I'm not seeing a clear answer via Google, and the man page for setpci doesn't mention anything. Also, does this command need to be run every time the machine starts, or just once?
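    Two concrete notes, hedged only in that they come from general CentOS knowledge rather than the thread: setpci ships in the pciutils package, and because the register value does not survive a power cycle, the command is typically re-applied at every boot (rc.local being the simplest place on CentOS 5):
      # Install the package that provides setpci
      sudo yum install pciutils
      which setpci                 # usually /sbin/setpci

      # Re-apply the setting on every boot
      echo 'setpci -s 0:1f.0 0xa4.b=0' | sudo tee -a /etc/rc.d/rc.local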

    Read the article

  • Selenium server causes crazy load on server - how to prevent?

    - by Eric
    I'm running this Linux:
      Linux host.themepark.com 2.6.32-220.4.1.el6.x86_64 #1 SMP Tue Jan 24 02:13:44 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux
    And I run the Selenium stand-alone server on my box with this command:
      java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar > /logs/selenium.log 2>&1 &
    Here's the problem: as soon as I do that, the server load starts skyrocketing. I even went back and downloaded older versions of the Selenium server, but got the same results with 2.23.1, 2.23.0, and 2.19.0. Note that the server load starts going nuts before I issue ANY commands to Selenium or do anything else. All I'm doing is firing up the server, per the command above. This used to work perfectly on my server without causing massive server load, so something has changed, but I'm not sure what. My server is a managed VPS, so I don't know if there is some kind of auto-update script that kicked in or what... but it's a problem. (Incidentally, even though the server load climbs like crazy, everything still works: after firing up Selenium, my server creates a screen with Xvfb so Firefox will be happy, then a PHP script talks to Selenium to do what it needs to do before shutting everything down. It takes a LONG time, and the load gets all the way up to 8 [!!!] before it is finished, which kills my web server and makes the main site horribly unresponsive... but it does get everything done.) Any suggestions for what is going on, why it's started doing this and/or, most importantly, how I can make Selenium not kill the server when it starts up... would be GREATLY appreciated!
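    A stopgap rather than a diagnosis (my suggestion, not from the thread): starting the JVM at reduced CPU and I/O priority keeps the web server responsive while the real cause is tracked down, and a thread-level view shows where the time is actually going:
      # nice/ionice are standard on CentOS 6 (-c2 -n7 = lowest best-effort I/O priority)
      nice -n 15 ionice -c2 -n7 \
        java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar \
        > /logs/selenium.log 2>&1 &

      # Watch the busiest threads of the newest matching java process
      top -H -p "$(pgrep -n -f selenium-server-standalone)"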

    Read the article
