Search Results

Search found 22829 results on 914 pages for 'nautilus script'.

  • What is wrong with those crontabs?

    - by Guillaume Boudreau
    I want my projectors to Power On before the mall opens, and Power Off when the mall closes. So I created crontab entries (that I placed in /etc/cron.d/mall), but today (Thu Nov 22 18:58:29 EST 2012 is the current date on that server), the power-off.sh script got executed at 17:20 (see the syslog excerpt below). Being Thu, Nov. 22, I would have expected power-off.sh to be executed at 21:20, per the 4th crontab line below. Why did power-off.sh execute at 17:20? What is wrong with my crontab entries? Content of /etc/cron.d/mall:

        40  9 22-30 Nov Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 22-30 Nov Sun     myuser /usr/local/projectors/power-on.sh
        20 18 22-30 Nov Mon-Wed myuser /usr/local/projectors/power-off.sh
        20 21 22-30 Nov Thu-Fri myuser /usr/local/projectors/power-off.sh
        20 17 22-30 Nov Sat-Sun myuser /usr/local/projectors/power-off.sh
        40  9 1-22  Dec Mon-Sat myuser /usr/local/projectors/power-on.sh
        40 10 1-22  Dec Sun     myuser /usr/local/projectors/power-on.sh
        20 21 1-22  Dec Mon-Fri myuser /usr/local/projectors/power-off.sh
        20 17 1-22  Dec Sat-Sun myuser /usr/local/projectors/power-off.sh

    syslog excerpt:

        $ grep power-off.sh /var/log/syslog
        Nov 22 17:20:01 lanner-ubu-c2d CRON[23007]: (myuser) CMD (/usr/local/projectors/power-off.sh)
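
    A likely explanation, per crontab(5): when both the day-of-month and day-of-week fields are restricted (neither is *), cron runs the job when either field matches. On Thu Nov 22 the day-of-month range 22-30 matched, so the 17:20 Sat-Sun line fired even though it wasn't a weekend. A minimal sketch of one fix, restricting only the weekday field and moving the date window into the script itself:

        # day-of-month left as *, so only the weekday restriction applies;
        # the Nov/Dec date window would then be checked inside power-off.sh
        20 17 * Nov,Dec Sat,Sun myuser /usr/local/projectors/power-off.sh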

    Read the article

  • Looking for a "light" compositing manager for GNOME

    - by detly
    I have an HP Pavilion DM3 (graphics is nVidia GeForce G105M), running Debian Squeeze with GNOME 2.30. My preference for a DE is GNOME + Metacity + Nautilus. I'd like to use Docky, but it requires compositing, so I'm looking for a relatively "light" compositing manager. I realise that "light" is ambiguous, but I basically want something that won't chew through my notebook's battery with CPU or GPU usage. I know that Metacity is capable of compositing, but as far as I'm aware that support is still in testing: some people report that it's smooth and lightweight, others claim that it eats up processor time. I've also seen references to a problem with nVidia, but no actual details. I'm not averse to Compiz, but I haven't used it before and I don't know what to expect in terms of "weight." And maybe there's something else I haven't heard of. So can anyone recommend anything? Or dispel my idea that Metacity is not the right tool for the job? (Originally posted on GNOME forums.)
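
    A hedged aside for anyone wanting to test Metacity's built-in compositor without the GUI: in GNOME 2 it can reportedly be toggled through GConf (key name from memory; verify it with gconf-editor before scripting against it):

        gconftool-2 --set /apps/metacity/general/compositing_manager --type bool true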

    Read the article

  • GlassFish v2.1 -- getting Application Client and Eclipselink to work together?

    - by Nick
    We are trying to use EclipseLink 1.1 with GlassFish v2.1, following the instructions on: http://wiki.glassfish.java.net/Wiki.jsp?page=FaqEclipseLinkGlassFishV2 I adapted the instructions for the appclient script on Linux by adding the lines:

        APPCPATH=$APPCPATH:$AS_INSTALL/lib/eclipselink-1.1.1.jar
        export APPCPATH

    to the appclient shell script. This, however, is not working. On running the application client (using GlassFish's webstart), I get the error:

        WARNING: "IOP00810257: (MARSHAL) Could not load class org.eclipse.persistence.indirection.IndirectList"

    Has anyone else succeeded in getting GlassFish v2.1 to work with EclipseLink? Or any ideas on what I might be doing wrong? I found this bug report: http s://glassfish.dev.java.net/issues/show_bug.cgi?id=8204 (New users can't post more than 1 link, so remove the space between 'http' and 's'.) There, Tim Quinn (tjquinn) said: "App client container support for persistence is not yet in place." I think this refers only to GlassFish v3, and it should be working in GlassFish v2. Is this correct? I'm working on the assumption that this will work once the ACC knows where to find the EclipseLink jar. Thanks in advance, Nick.

    Read the article

  • Weblogic WLST classpath

    - by lepricon28
    When I run the setWLSEnv.sh script to set the environment as follows, why can't I see the updated path when I do echo?

        [linbox2 bin]$ ./setWLSEnv.sh
        CLASSPATH=/directory/ols_wls/patch_wlss1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/directory/ols_wls/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/directory/ols_wls/patch_oepe1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/directory/ols_wls/patch_ocm1031/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/directory/ols_wls/jrockit_160_14_R27.6.5-32/lib/tools.jar:/directory/ols_wls/utils/config/10.3/config-launch.jar:/directory/ols_wls/wlserver_10.3/server/lib/weblogic_sp.jar:/directory/ols_wls/wlserver_10.3/server/lib/weblogic.jar:/directory/ols_wls/modules/features/weblogic.server.modules_10.3.2.0.jar:/directory/ols_wls/wlserver_10.3/server/lib/webservices.jar:/directory/ols_wls/modules/org.apache.ant_1.7.0/lib/ant-all.jar:/directory/ols_wls/modules/net.sf.antcontrib_1.0.0.0_1-0b2/lib/ant-contrib.jar:
        PATH=/directory/ols_wls/wlserver_10.3/server/bin:/directory/ols_wls/modules/org.apache.ant_1.7.0/bin:/directory/ols_wls/jrockit_160_14_R27.6.5-32/jre/bin:/directory/ols_wls/jrockit_160_14_R27.6.5-32/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/java/j2sdk1.4.2_11/bin/bin:/home/oracle/bin:/directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:/directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin
        Your environment has been set.
        [linbox2 bin]$ export CLASSPATH
        [linbox2 bin]$ export PATH
        [linbox2 bin]$ echo $PATH
        /usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/usr/java/j2sdk1.4.2_11/bin/bin:/home/oracle/bin:/directory/wls_olwcs/jdk160_14_R27.6.5-32/bin:/directory/ccanywhere81/bin:/directory/oracle/oracle/product/10.2.0/client_1/bin
        [linbox2 bin]$
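
    The symptom above has a common cause worth noting: running ./setWLSEnv.sh executes the script in a child shell, and variables set there vanish when that shell exits, so the parent shell never sees the new CLASSPATH or PATH (even though the script prints "Your environment has been set."). Sourcing the script runs it in the current shell instead; a minimal sketch:

        . ./setWLSEnv.sh        # or: source ./setWLSEnv.sh
        echo $CLASSPATH         # the WebLogic jars should now be present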

    Read the article

  • Should I, and how do I, back up my database for a web app hosted on an Amazon EC2 server?

    - by user8184
    I set up an Amazon EC2 instance using Ubuntu Server edition and installed a LAMP stack on it, then built a PHP web app running on MySQL. I have not officially launched, but I need to know this before launching: should I back up my database data? If so, how should I do it as cost-effectively as possible? Previously, for another web app, I wrote a Perl or Bash script (I cannot remember which) that was executed by cron on a daily basis. The script would back up the database into a single .sql file and send it as an email attachment to my Gmail account. That web app was on shared hosting, hence I was quite sure I needed to back up my database. My files are in a git repo, so I am not worried about those. Please advise; I am totally unfamiliar with AWS and only know as much as setting up an account. Thank you.
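
    One cost-effective pattern, sketched under assumptions (the s3cmd tool installed and configured; bucket, user, and paths are placeholders), run daily from cron on the instance itself:

        #!/bin/bash
        # dump the database, compress it, and push it to S3
        stamp=$(date +%F)
        mysqldump -u backupuser -pSECRET mydb | gzip > /var/backups/mydb-$stamp.sql.gz
        s3cmd put /var/backups/mydb-$stamp.sql.gz s3://my-backup-bucket/mysql/

    For a dump in the tens of MiB, S3 storage costs pennies per month.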

    Read the article

  • Files deleted. What could have happened?

    - by jjfine
    I'm having a weird issue today. I was writing and testing out some simple cgi scripts this morning when I realized that I couldn't run them from one of the other computers on the (Windows) network. So I had my network admin come in and take a look at what was going on. A few minutes later a co-worker came in and told me that a bunch of files he was working with, as well as a bunch of others (all *.c files) on the network drive, got deleted. He also noticed some strange apache_dump_500.log.txt files in the same directories where the files got deleted. The apache_dump_500.log.txt files all look like this:

        REDIRECT_HTTP_ACCEPT=*/*, image/gif, image/x-xbitmap, image/jpeg
        REDIRECT_HTTP_USER_AGENT=Mozilla/1.1b2 (X11; I; HP-UX A.09.05 9000/712)
        REDIRECT_PATH=.:/bin:/usr/local/bin:/etc
        REDIRECT_QUERY_STRING=
        REDIRECT_REMOTE_ADDR=<my computer's local ip>
        REDIRECT_REMOTE_HOST=
        REDIRECT_SERVER_NAME=<my computer's domain url>
        REDIRECT_SERVER_PORT=
        REDIRECT_SERVER_SOFTWARE=
        REDIRECT_URL=/cgi-bin/trojan.py

    I looked, and I don't have any trojan.py in my cgi-bin folder. All my Apache logs are clean, and the Windows event logger seems to have no trace of what happened either. My httpd.conf: http://pastebin.com/Yny2Yh8v I think we've got some kind of virus that added this trojan.py file to my cgi-bin, ran the script, and deleted the script and any traces from the logs. Is this a thing that happens? Any ideas whatsoever would be much appreciated!

    Read the article

  • How to use a common library of environment variables among different languages?

    - by JDS
    We have four main languages with which we perform system tasks: Bash, Ruby, PHP, and Perl. We use managed environment variables to provide authorization info that automated scripts need, for example a MySQL user account and password, and we'd like to use one single managed file to maintain these variables. In some instances, for example in cron, these environment variables are not available. They are available in CLI scripts because we source the env file in everyone's profile, but something like cron doesn't do that. On the CLI, when the env file is sourced, any given script can access those variables: Bash has them directly, PHP in $_ENV, Ruby in ENV, etc. We can't source the file into non-Bash scripts, because most languages implement shell commands by running them in a subshell. We considered parsing the Bash, converting it to the script's language, and running the equivalent of exec(parsed_output) on the resulting strings. What is a good solution for providing managed environment vars to scripts running in cron, or similar?
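
    One pattern that fits these constraints (a sketch, not a prescription): keep the managed file to plain KEY=value lines so Bash can source it and the other languages can parse it with a few lines of code, then have each crontab entry load it explicitly. File path, names, and values below are illustrative:

        # /etc/managed_env -- plain KEY=value lines only, no shell logic
        MYSQL_USER=appuser
        MYSQL_PASS=secret

        # crontab entry: set -a exports everything the sourced file defines,
        # so the child script inherits the variables
        0 * * * * set -a; . /etc/managed_env; /usr/local/bin/nightly_task.sh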

    Read the article

  • Scripting a permanent CTRL / CAPS swap in Gnome?

    - by Duncan Bayne
    I have a bash script that I use to configure a vanilla Ubuntu (10.10 Maverick Meerkat) installation to be exactly the way I want it. I make extensive use of gconftool-2 to configure the desktop, set up shortcut keys, etc. Now I'm trying to swap the CTRL and CAPS keys, and I have found two ways of doing this: In Gnome, go to System - Preferences - Keyboard - Layout - Options and make the change there. This works well, but I don't know how to script it; the setting doesn't seem to be stored in the usual place, as I can't find it with gconf-editor. Or, add the line setxkbmap -option "ctrl:swapcaps" to my .bashrc file. That works too, until I suspend the machine and then resume it. At that point the CTRL and CAPS behaviour return to normal, until I cause .bashrc to be run again by opening a new shell. This behaviour has been reported as a bug in Red Hat. Could someone please suggest a way of switching those keys that is both permanent and can be scripted? I'm sure I must be missing something obvious here...
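
    A hedged pointer on the first approach: in GNOME 2 the Layout Options panel stores its choices in GConf under the keyboard peripheral keys, so the swap should be scriptable along these lines (the exact key and the tab-separated list format are from memory; set the option once through the GUI and inspect the key with gconf-editor to confirm before relying on it):

        gconftool-2 --set /desktop/gnome/peripherals/keyboard/kbd/options \
            --type list --list-type string "[ctrl$(printf '\t')ctrl:swapcaps]"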

    Read the article

  • Cronjob not Running

    - by Pete Herbert Penito
    I have a bash script that looks like this:

        #!/bin/sh
        PID=`ps faux | grep libt | awk 'NR==2{print $2}'`
        STATUS=`ps faux | grep libt | awk 'NR==2{print $1}'`
        if [ "$STATUS" = "ec2-user" ]; then
            echo "libt already killed"
        else
            sudo kill $PID
            echo "libt was killed"
        fi
        sleep 5
        cd /home/ec2-user/libt
        sudo ./libt

    I have saved this file as restart.sh, and when I run it like ./restart.sh it does what it's supposed to (kills the libt process and restarts it). However, now I am trying to automate the process using cron, so I made a cron job that I want to run every 6 hours that looks like this:

        0 */6 * * * /home/ec2-user/restart.sh

    When I run "crontab -l" I can see this entry, so I know it's been added properly. I should mention that the service does not have the ability to be restarted (like "service ... restart"); the process ID needs to be found, killed, and then the start script needs to be run. I have found that this cron job is not working: I'll log onto the box, and I can tell by looking at the logs that no restart has occurred. What am I doing wrong? What can I do to troubleshoot? Any advice would help; this is my first cron job :) Thanks!
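
    Two usual suspects for a script that works interactively but not from cron, noted here since the job's output isn't captured: cron runs with a minimal environment and no terminal, so sudo can refuse to run when sudoers sets requiretty, and failures are silent unless redirected. Capturing the output is the first troubleshooting step:

        # redirect stdout and stderr so cron failures leave a trace
        0 */6 * * * /home/ec2-user/restart.sh >> /home/ec2-user/restart.log 2>&1

    If the log then shows "sudo: sorry, you must have a tty to run sudo", disabling requiretty for that user, or running the job from root's crontab without sudo, is the usual remedy.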

    Read the article

  • SQL Server Unattended Install through SSH

    - by Samuel
    I'm trying to install SQL Server from the command line through Cygwin OpenSSH. The install works when I log onto the server as Administrator and execute the script through a Cygwin shell, but it doesn't work when I SSH into the machine using Administrator's credentials and run the exact same command. I've already verified that the SSHD process is running as Administrator, and I've verified that the install script is indeed starting under Administrator. Is there something different about the terminal in SSH vs. the Cygwin terminal on the machine that would cause this problem? Specifically, what fails is this: the SQL Server install runs for a while, then hangs with MSI error 1622, "Error opening installation log file. Verify that the specified log file location exists and is writable." If I run both installs, I've noticed that they have different authentication IDs in ProcMon, but they have exactly the same command-line parameters. There has to be something in SSH that is causing permission issues... Any ideas?

    Read the article

  • Why is my server performance degrading to the point of stopping, periodically?

    - by Pascal Aschwanden
    So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes never does. Here is what I've ruled out: It's not the CPU, because every time I check the server load it's less than 6 for all 3 numbers. It's not the memory, because that's fairly low too, less than 50%. It's not the I/O anymore, because I've seen the graphs that Joyent sent back to me when I requested them, and they show less than 3 MB of I/O (mostly all read). It's not the SQL performance; I've profiled every last SQL command that runs, and they're all (99.9% of them, anyway) running in less than 30 ms, and most run in less than 5 ms. And I've been profiling all the script execution times: even when the problem occurs, the script always manages to finish in 50 ms or less (that's 1/20th of a second). Now, I do run a lot of AJAX calls, 1 every 2 seconds per user, and I have 300+ DAU. But even if all 300 are playing simultaneously, that's still only 150 calls per second at most. The only other thing I can think of is that one of my neighbors is funky. The problem is highly intermittent: 99% of the time it works perfectly and there's excellent performance, but 99%+ is not good enough. Eventually the performance gets so bad I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas? Note: this is on Joyent, VPS, intro package, 256 MB of RAM with bursting. Here is the MySQL status dump:

        Traffic                      ø per hour
        Received          18 MiB         29 MiB
        Sent             134 MiB        221 MiB
        Total            151 MiB        251 MiB

        Connections                  ø per hour          %
        max. concurrent        5            ---        ---
        Failed attempts        0           0.00      0.00%
        Aborted                0           0.00      0.00%
        Total              9,418        15.59 k    100.00%

    Read the article

  • Send command through PuTTY automatic login

    - by Arthur
    I am using the following to log in automatically to a remote server and then run the commands listed in a commands.txt, like this:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password -m C:\Path to\command.txt

    commands.txt contains the following:

        wakeonlan -i broadcast adress Macadress

    However, when I try this a new PuTTY window appears, but it closes and exits instantly after login. As a result, I cannot see the output of the command(s). After several tests, it appears that the command is not executed, because my computer doesn't wake on LAN. I don't understand what's going on here. I cannot use the plink.exe program because I cannot make the connection with a public key (too many distant sites to do all the key registration in PuTTY). Can someone help me with this? Or can I use another program to make an SSH connection and send a command from a script on Windows? Edit: I also tried making a bash file on the distant server with the same command and executing it from the session like this:

        C:\path to\putty.exe -ssh adreese.ip -l user -pw Password \home\user\script.sh

    I have the same problem... Help needed, please :/
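
    A hedged alternative worth testing: plink (PuTTY's command-line sibling) accepts the same -pw password switch as putty.exe, so no key setup is needed, and it writes the remote command's output to the console, so the result stays visible (the first connection will ask once to cache the host key). A sketch reusing the placeholders from the question:

        plink.exe -ssh user@adreese.ip -pw Password "wakeonlan -i broadcast adress Macadress"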

    Read the article

  • ApplicationPoolIdentity IIS 7.5 to SQL Server 2008 R2 not working.

    - by Jack
    I have a small ASP.NET test script that opens a connection to a SQL Server database on another machine in the domain. It isn't working in all cases. Setup: IIS 7.5 under W2K8R2, trying to connect to a remote SQL Server 2008 R2 instance; all machines are in the same domain. Using the ApplicationPoolIdentity for the web site, it fails to connect to the SQL Server with the following:

        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    However, if I switch the Process Model Identity to NETWORK SERVICE or my domain account, the database connection is successful. I've granted the web server's machine account (DOMAIN\MACHINENAME$) access in SQL Server. I am not doing any sort of authentication on the web site; it is just a simple script to open a connection to a database to make sure it works. I have Anonymous Authentication enabled and set to use the application pool identity. How do I make this work? Why is the ApplicationPoolIdentity trying to use ANONYMOUS LOGON? Better yet, how do I make it stop using the anonymous logon?
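
    A hedged avenue for the ANONYMOUS LOGON symptom: on a cross-machine hop the app pool's network identity should appear as the web server's machine account, and showing up as ANONYMOUS LOGON instead often points at a failed Kerberos negotiation, for instance no SPN registered for the SQL Server service. Registering one is a standard check; a sketch with hypothetical names, run by a domain admin:

        setspn -A MSSQLSvc/sqlbox.corp.example.com:1433 CORP\sqlserviceacct

    The SPN belongs on the domain account SQL Server runs as (or on the SQL machine account for built-in service accounts). Treat this as a direction to investigate, not a confirmed fix.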

    Read the article

  • Issue with kernel boot [OVH SERVER]

    - by Conner Stephen McCabe
    Trying to install the OpenVZ kernel on CentOS 6.3. Yes, my kernel is installed (I can see it in the /boot folder), yes it is RHEL6, and yes it is all up to date; I checked this with yum update. My issue comes when I reboot the server with that kernel set as the default: it doesn't load. Below I shall put a copy of my grub.conf file and my menu.lst file. Grub.conf:

        default=0
        timeout=5
        title vzkernel (2.6.32-042stab057.1)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.32-042stab057.1 ro root=/dev/sda1
            initrd /initramfs-2.6.32-042stab057.1.img
        title linux centos6_64
            kernel /boot/bzImage-3.2.13-xxxx-grs-ipv6-64 root=/dev/sda1 ro
            root (hd0,0)

    Now I shall paste in menu.lst:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE: You have a /boot partition. This means that
        #   all kernel and initrd paths are relative to /boot/, eg.
        #   root (hd0,0)
        #   kernel /vmlinuz-version ro root=/dev/mapper/vg_stock-lv_root
        #   initrd /initrd-[generic-]version.img
        #boot=/dev/sda
        default=0
        timeout=5
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title Linux OpenVZ (vmlinuz-2.6.32-042stab057.1)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.32-042stab057.1 ro root=/dev/mapper/vg_stock-lv_root rd_LVM_LV=vg_stock/lv_root rd_LVM_LV=vg_stock/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=l$
            initrd /initramfs-2.6.32-042stab057.1.img
        title CentOS (2.6.32-71.el6.x86_64)
            root (hd0,0)
            kernel /boot/bzImage-3.2.13-xxxx-grs-ipv6-64 ro root=/dev/mapper/vg_stock-lv_root rd_LVM_LV=vg_stock/lv_root rd_LVM_LV=vg_stock/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFO$
            initrd /initramfs-2.6.32-71.el6.x86_64.img
        # dummy text

    Somebody mentioned something about OVH having added a script which changes the kernel settings, and suggested that we either remove the script or reinstall using a VNC, but we don't know how to go about doing either of these. It would really be great if you guys could help. Thanks in advance.

    Read the article

  • How to extract attachments from Exchange 2003 database

    - by John
    I have an ancient Exchange 2003 server that I'm getting ready to retire. All user accounts have been migrated to Google Apps for Business, so no new mail is being sent or received on the server. There are less than 50 accounts on the server, but some are very large so that the whole Exchange database is between 10 and 20 GB. The largest account has over 100,000 messages. I believe that in the migration to Gmail, some attachments were not migrated. For peace of mind, I'd like to get the attachments out of the Exchange database. The only way I know of to do this is to set up a 2nd computer with Outlook on it, set up one of the accounts, and then sync the whole mail history and get the attachments out that way. Is there something simpler that I can do? Here are two possibilities: An Exchange attachment retrieval tool/script that pulls attachments for all accounts directly out of the Exchange database. An Exchange PST exporter tool/script that will export PST files for all accounts so that I can just load the PST files into Outlook at will.

    Read the article

  • Using sed to Download ComboFix automatically

    - by user901398
    I'm trying to write a shell script to grab the dynamic URL which ComboFix is located at, at BleepingComputer.com/download/combofix. However, for some reason I can't seem to get my regex to match the download link of the "click here" if the download doesn't work. I used a regex tester and it said I matched the link, but when I execute the script it turns up an empty result. Here's my entire script:

        #!/bin/bash
        # Download latest ComboFix from BleepingComputer
        wget -O Listing.html "http://www.bleepingcomputer.com/download/combofix/" -nv
        downloadpage=$(sed -ne 's@^.*<a href="\(http://www[.]bleepingcomputer[.]com/download/combofix/dl/[0-9]\+/\)" class="goodurl">.*$@\1@p' Listing.html)
        echo "DL Page: $downloadpage"
        secondpage="$downloadpage"
        wget -O Download.html $secondpage -nv
        file=$(sed -ne 's@^.*<a href="\(http://download[.]bleepingcomputer[.]com/dl/[0-9A-Fa-f]\+/[0-9A-Fa-f]\+/windows/security/anti[-]virus/c/combofix/ComboFix[.]exe\)">.*$@\1@p' Download.html)
        echo "File: $file"
        wget -O "ComboFix.exe" "$file" -nv
        rm Listing.html
        rm Download.html
        mkdir Tools
        mv "ComboFix.exe" "Tools/ComboFix.exe" -f

    The first two downloads work successfully, and I end up with:

        http://www.bleepingcomputer.com/download/combofix/dl/12/

    But the final sed, which should give me the download link, fails to match. The code it's supposed to match is:

        <a href="http://download.bleepingcomputer.com/dl/6c497ccbaff8226ec84c97dcdfc3ce9a/5058d931/windows/security/anti-virus/c/combofix/ComboFix.exe">click here</a>
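
    A hedged debugging note: sed applies that anchored pattern line by line, so it returns nothing if the target line differs even slightly from what the regex tester saw (extra attributes, or the whole page served as one long line). Extracting just the URL with grep -o sidesteps the anchoring entirely; a sketch that keeps the same URL shape but matches anywhere in a line:

        file=$(grep -o 'http://download[.]bleepingcomputer[.]com/dl/[0-9A-Fa-f]*/[0-9A-Fa-f]*/windows/security/anti-virus/c/combofix/ComboFix[.]exe' Download.html | head -n 1)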

    Read the article

  • Log connections to program

    - by Zac
    Besides using iptables to log incoming connections, is there a way to log established inbound connections to a service that you don't have the source to (suppose the service doesn't log this on its own)? What I want is to gather some information based on who's connecting, to be able to tell things like what times of the day the service is used the most, where in the world the main user base is, etc. I am aware I can use netstat and just hook it up to a cron script, but that might not be accurate, since the script could only run as frequently as once a minute. Here is what I am thinking right now: (1) write a program that constantly polls netstat, looking for established connections that didn't appear in the previous poll, though this seems like a waste of CPU time, since there may not be a new connection; or (2) write a wrapper program that accepts inbound connections on whatever port the service runs on, but then I wouldn't know how to pass those connections along to the real service. Edit: it just occurred to me that this question might be better for Stack Overflow, though I am not certain. Sorry if this is the wrong place.
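
    The wrapper idea in (2) is essentially a logging TCP proxy, and existing tools already do the connection hand-off; a hedged sketch with socat (port numbers hypothetical): move the real service to a private port, let socat own the public one, and keep its connection notices:

        # listen on 8080, relay to the real service on 8081;
        # -d -d prints a timestamped notice for every accepted connection
        socat -d -d TCP-LISTEN:8080,fork,reuseaddr TCP:127.0.0.1:8081 2>> /var/log/service-connections.log

    The log then records each client address and time, which covers the time-of-day and user-base analyses described above.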

    Read the article

  • Outlook 2007 panes keep moving when changing resolution

    - by SilverbackNet
    This problem is really bugging one of our users, ever since he got a larger monitor. Now that the monitor has a different resolution than his laptop, every time he unplugs it to go home, the three Outlook panes get all jumbled up: the navigation pane is huge, the list is shoved over, and the reading pane is almost squished out of existence; then the opposite happens when he comes back in, and the reading pane fills the screen. He's sick of adjusting it every day. He always runs Outlook maximized, for maximum reading area, so keeping the application within a 1024x768 window wouldn't really be an option for him. Is there any way built into Outlook to automatically adjust pane sizes when the resolution changes? If not, is there a third-party app that can help, or a way to script the changes into the registry somehow? (I can handle running the script whenever the screen state changes.) If this is fixed in Outlook 2010, I might be able to convince the other admin that this is a good enough reason to allow it (which will require a new beta version of our archiving software).

    Read the article

  • Nginx: Loopback connection via PHP's getimagesize() crashes server (Magento's CMS)

    - by Alex
    We were able to trace down a problem that is crashing our NGINX server running Magento until the following point: Background info: Magento Backend has a CMS function with a WYSIWYG editor. This editor loads some pictures via a controller in magento (cms/directive). When we set the NGINX error_log level to info, we get the following lines (line break inserted for better readability): 2012/10/22 18:05:40 [info] 14105#0: *1 client closed prematurely connection, so upstream connection is closed too while sending request to upstream, client: XXXXXXXXX, server: test.local, request: "GET index.php/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL,,/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9024", host: "test.local" When checking the code in the debugger, the following call does never return (in ´Varien_Image_Adapter_Abstract::getMimeType()` # $this->_fileName is http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif` # $_SERVER['REQUEST_URI'] = http://test.local/admin/cms_wysiwyg/directive/___directive/BASEENCODEDIMAGEURL list($this->_imageSrcWidth, $this->_imageSrcHeight, $this->_fileType, ) = getimagesize($this->_fileName); The filename requests is an URL to the same server which is requesting the script a link to a static .gif that is not existing. Sample URL: http://test.local/skin/adminhtml/base/default/images/demo-image-not-existing.gif When the above line executed, any subsequent request to the NGNIX server does not respond any more. After waiting for around 10 minutes, the NGINX server starts answering requests again. I tried to reproduce the error with a simple test script that only calls getimagesize() with the given URL - but this not crash. It simple leads to an exception saying that the URL could not be loaded (which is fine as the URL is wrong)

    Read the article

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04. etc/suphp/suphp.conf:

        [global]
        ;Path to logfile
        logfile=/var/log/suphp/suphp.log
        ;Loglevel
        loglevel=info
        ;User Apache is running as
        webserver_user=www-data
        ;Path all scripts have to be in
        docroot=/home
        ;Path to chroot() to before executing script
        ;chroot=/mychroot
        ; Security options
        allow_file_group_writeable=false
        allow_file_others_writeable=false
        allow_directory_group_writeable=false
        allow_directory_others_writeable=false
        ;Check wheter script is within DOCUMENT_ROOT
        check_vhost_docroot=true
        ;Send minor error messages to browser
        errors_to_browser=false
        ;PATH environment variable
        env_path=/bin:/usr/bin
        ;Umask to set, specify in octal notation
        umask=0077
        ; Minimum UID
        min_uid=100
        ; Minimum GID
        min_gid=100

        [handlers]
        ;Handler for php-scripts
        application/x-httpd-suphp="php:/usr/bin/php-cgi"
        ;Handler for CGI-scripts
        x-suphp-cgi="execute:!self"

    Some vhost in sites-enabled:

        NameVirtualHost *:8080
        <VirtualHost *:8080>
            ServerAdmin ...
            ServerName ...
            ServerAlias ...
            AddType application/x-httpd-php .php
            AddHandler application/x-httpd-php .php
            suPHP_Engine on
            suPHP_UserGroup user user
            suPHP_ConfigPath "/home/user/etc"
            suPHP_PHPPath /usr/bin
            DocumentRoot /home/user/web/site.com/
            ErrorLog /var/log/apache2/site.com-error_log
            CustomLog /var/log/apache2/site.com-access_log common
            <Directory /home/user/web/site.com/>
                Order Deny,Allow
                Allow from all
                Options +Indexes
            </Directory>
        </VirtualHost>

    But when I created /home/user/web/id.php containing <?php system('id'); ?>, the result I get is:

        uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thanks.
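
    A hedged observation on the config above: the vhost maps .php to the handler application/x-httpd-php, while suphp.conf only defines a handler for application/x-httpd-suphp, so mod_suphp may never be asked to run these scripts and they execute as www-data instead. Aligning the two names, e.g. AddHandler application/x-httpd-suphp .php in the vhost (plus a suPHP_AddHandler application/x-httpd-suphp directive where the module config expects it), is worth trying; directive placement varies by distribution, so treat this as a direction rather than a verified fix.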

    Read the article

  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a Postgres database containing about 200 MB of data. Currently I have a cron job set up on my home computer which SSHes into my server, runs a remote script that makes a backup of the database, and scps that dump over to my local hard drive for storage. Each dump is 20 MiB, and it does this every six hours (one month of backups is roughly 2 GiB). The problem with this setup is that if my local machine goes down for whatever reason, no backups are made. Also, I can't have the cron run from the server, because I can't scp from my server to my local machine (firewalls and all that crap). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 Server edition. I looked into Ubuntu One, but currently it's GUI-only. I also looked into Dropbox, but it's a pain in the ass to get set up on Linux without GUI support. Amazon S3 looks good, but it's not free (yet dirt cheap). Is there any other alternative I should look into? I'd prefer something where I can just have my script dump the database into a directory and have the backup service 'watch' that folder and sync accordingly. I can maybe also have my local machine sync to the cloud backup so I have even more redundancy, plus easy access to my backups for use in testing. Edit: My server is a VPS, so whatever solution I end up using has to be 100% software-based.
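
    A hedged sketch of the "dump to a watched folder and sync" idea, using pg_dump plus s3cmd's sync mode (bucket and paths hypothetical), run from cron on the server itself so the home machine drops out of the loop:

        #!/bin/bash
        # dump, compress, then mirror the backup directory to S3
        pg_dump mydb | gzip > /var/backups/db/mydb-$(date +%F-%H%M).sql.gz
        s3cmd sync /var/backups/db/ s3://my-backup-bucket/db/

    At S3 prices, a month of 20 MiB dumps (~2 GiB) costs on the order of cents.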

    Read the article

  • Which scripting language to use to asynchronously ssh into equipment, run several commands, parse the output, and save to a file on my computer?

    - by Fujin
    There are several points I'd like to stress in my question. I'd like to log in by asynchronously SSHing into our infrastructure equipment; meaning, I do not want to connect to only one device, do all the tasks I need, disconnect, and then connect to the next device. I want to connect to several devices at once, to make the process as fast as possible. By equipment I mean 'infrastructure equipment', not servers. I say this because I will not have the luxury of saving files to the device and then transferring them to myself with scp or another method; the output of the scripts that are run will have to be saved directly to my computer. The output of the commands that are run will need to be cleaned up and parsed, and I want the outputs of each device combined into one nice and neat file, not a separate file for each device. This will all be done from a Linux box, using SSH, into devices that all use Linux-ish proprietary OSes. My guess is the answer to my question will either be a Bash, Perl, or Python script, but I figured it wouldn't hurt to ask and to hear the reasons why one way is better than another. Thanks everyone. EXTRA CREDIT: with your answer, include links to resources that will help create the script I described in the language that you suggested.
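
    Since the question is language-agnostic, here is a hedged Bash sketch of the overall shape: parallel SSH sessions in the background, one capture file per device, then a cleanup-and-merge pass into a single report ('show version' and the host names stand in for whatever the real devices accept):

        #!/bin/bash
        hosts="switch1 switch2 router1"          # hypothetical device names
        outdir=$(mktemp -d)
        for h in $hosts; do
            # run the command batch on every device at once, in the background
            ssh -o BatchMode=yes "$h" 'show version' > "$outdir/$h.txt" 2>&1 &
        done
        wait                                     # block until every session finishes
        for h in $hosts; do
            echo "===== $h ====="
            sed '/^$/d' "$outdir/$h.txt"         # sample parse step: drop blank lines
        done > all-devices.txt

    Perl and Python offer the same pattern with richer parsing (and libraries such as Net::OpenSSH or paramiko), which is the usual argument for them once the parsing gets hairy.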

    Read the article

  • Reaching a constant video bitrate using ffmpeg

    - by sebastian
    We use ffmpeg and a transcoding script, and we want to make some batch files we can use for transcoding. For example, I use a parameter called video_kbit: if I pass in 30000, it should reach 30 Mbit/s, and if I pass 6000, it should reach 6 Mbit/s, so one script covers every video bitrate I want. As my settings are now, I only reach 18.1 Mbit/s. Only when I use 15000 as the parameter do I reach a constant video bitrate of 15 Mbit/s; if I use 8000, I get 10.1 Mbit/s. So below 15000 I get a higher bitrate than I ask for, and above 15000 a lower one. My presettings are:

        ffmpeg -threads "4" -i "$2" -f mp4 -c:v libx264 -crf 1 \
            -bufsize 30000k -maxrate ${FC_PARAM_video_kbit}k \
            -acodec libfaac -ac 2 -ab ${FC_PARAM_audio_kbit}k -ar 44100 \
            -pix_fmt yuv420p -vf scale=${FC_PARAM_width}:${FC_PARAM_height} -y "$3"

    And I am using these parameters:

        FC_PARAM_video_kbit = 30000
        FC_PARAM_audio_kbit = 192
        FC_PARAM_width = 1920
        FC_PARAM_height = 1080

    I have tried using a higher bufsize and using profile:v and level settings, but nothing got me near a constant video bitrate of 30 Mbit/s. Do you guys have any ideas or suggestions for a better way to reach my goal?
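
    A hedged observation: -crf 1 puts x264 in near-lossless constant-quality mode, so the encoder only touches the requested rate when the VBV cap (-maxrate/-bufsize) forces it down, which matches the "low inputs come out high, high inputs come out low" behavior described above. For a true target bitrate, dropping -crf and pinning the average (and, for CBR, the minimum) rate is the usual recipe; a sketch of the same command line reworked:

        ffmpeg -threads 4 -i "$2" -f mp4 -c:v libx264 \
            -b:v ${FC_PARAM_video_kbit}k -minrate ${FC_PARAM_video_kbit}k \
            -maxrate ${FC_PARAM_video_kbit}k -bufsize $((2 * FC_PARAM_video_kbit))k \
            -acodec libfaac -ac 2 -ab ${FC_PARAM_audio_kbit}k -ar 44100 \
            -pix_fmt yuv420p -vf scale=${FC_PARAM_width}:${FC_PARAM_height} -y "$3"

    Two-pass encoding (-pass 1 then -pass 2) would hit the average even more precisely; doubling bufsize relative to the target rate is a common rule of thumb, not a requirement.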

    Read the article

  • WordPress on Apache is redirecting all https to http

    - by Krist van Besien
    I have a problem with a WordPress site on a server I admin; I don't know anything about WordPress itself, however. My problem is that we want the site to be accessed over https, but somehow all requests to https:// URLs are answered by the server with a 302 redirecting to http. The WordPress site itself is configured to use https, and we see that the links in the generated pages are all https links. In the Apache config there are no rewrite rules and no redirects. However, any request to an https:// URL is answered with a redirect to the equivalent http URL, and I really would like to know where these redirects are coming from and what is generating them. I've increased the log level on the web server to DEBUG but did not get any info there. I tried to enable debug logging in WordPress per the recipe I found here: http://codex.wordpress.org/Debugging_in_WordPress but did not get a debug.log file in the directory where one should appear. I'm really at a loss here and need to fix this urgently. Any hints on where to start looking? Apache is 2.2.14 on Ubuntu. There are several other virtual hosts on this server using PHP and https without any problem... Edit: I created a small info.php script and dropped it in the web server's root. Calling it yields the output of the script; no redirect is generated. This suggests that it's not the web server but WordPress that is doing it. A second thing I noticed is that the redirect comes with several cookies, one of which has "httponly" set. Could that be it?
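
    A hedged place to start: WordPress issues 302s for any request whose scheme or host doesn't match its stored siteurl and home options, so comparing those two values against the https URL is a cheap first check (database name, user, and the default wp_ table prefix below are placeholders):

        mysql -u wpuser -p wordpress_db \
            -e "SELECT option_name, option_value FROM wp_options WHERE option_name IN ('siteurl','home');"

    If they point at http://..., updating them to https:// (via SQL or the dashboard's General Settings) is the usual cure.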

    Read the article
