Search Results

Search found 16665 results on 667 pages for 'nhibernate configuration'.


  • Apache2 Virtualhost practice config issue

    - by sisko
    I am practicing virtual host configuration. In my /var/www directory I have created three directories called test1, test2 and test3, each of which has a simple index.php script in it, i.e. test1/index.php etc. In /etc/apache2/sites-available/test1 I have the following configuration:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName test1
            DocumentRoot /var/www/test1
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/test1/>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All the other sites have a similar VirtualHost definition. I have enabled the site (the symlink appears in sites-enabled) and I have restarted Apache. However, when I visit localhost/test1, I get a 404 error. My error log shows the following message:

        [Wed Oct 23 06:22:52 2013] [error] [client 127.0.0.1] File does not exist: /var/www/test1/test1

    I don't know why I get the double test1/test1 in the error log. I'm trying to find the right VirtualHost setup which will allow all three test websites to be served from their own URLs, i.e. test1/index.php, test2/index.php and test3/index.php. Can anyone help me out, please? (See the sketch after this entry.)

    Read the article
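
    Editor's note: the doubled test1/test1 path suggests that "localhost" matches no ServerName, so Apache falls back to the first enabled vhost; that vhost's DocumentRoot /var/www/test1 then has the URL path /test1 appended, producing /var/www/test1/test1. A minimal sketch of name-based virtual hosts, assuming browser and server are the same machine:

        # /etc/hosts: make the three names resolve locally
        # (assumption: you browse from the same box that runs Apache)
        127.0.0.1   test1 test2 test3

        # /etc/apache2/sites-available/test1, repeated per site
        <VirtualHost *:80>
            ServerName test1
            DocumentRoot /var/www/test1
        </VirtualHost>

    The sites are then reached as http://test1/, http://test2/ and http://test3/ rather than as paths under localhost; a request to localhost/test1 will always be answered by whichever vhost wins the fallback.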

  • Restricting memory area for linux kernel

    - by user1066789
    I am running LTIB Linux on the P1022RDK (P1022 core) platform. I have 512 MB = 0x20000000 of memory. I want my Linux kernel to use the second half of the board memory (i.e. from 256 MB to 512 MB) and want the first half of memory to be reserved for some other purpose. For this I am building the Linux kernel using LTIB, and for that purpose I am setting the following kernel configuration. Please suggest if I am doing it the right way.

        CONFIG_LOWMEM_SIZE = 0x10000000     # 256 MB
        CONFIG_PHYSICAL_START = 0x10000000  # Starting from 256 MB (second half of memory)

    In U-Boot I am loading the kernel in the following way:

        setenv loadaddr 0x11000000  # Kernel base = 0x10000000 + 0x01000000 (offset)
        setenv fdtaddr 0x10c00000   # Kernel base = 0x10000000 + 0x00c00000 (offset)
        bootm $loadaddr - $fdtaddr

    My kernel load address is 0x10000000 and the kernel entry point is 0x10000000. With the above configuration / steps, my kernel gets stuck at the following point in U-Boot:

        ## Booting kernel from Legacy Image at 11000000 ...
           Image Name:   Linux-2.6.32.13
           Image Type:   PowerPC Linux Kernel Image (gzip compressed)
           Data Size:    3352851 Bytes = 3.2 MB
           Load Address: 10000000
           Entry Point:  10000000
           Verifying Checksum ... OK
        ## Flattened Device Tree blob at 10c00000
           Booting using the fdt blob at 0x10c00000
           Uncompressing Kernel Image ... OK
        (it should uncompress the FDT here and continue)

    Any thoughts? (See the sketch after this entry.)

    Read the article
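
    Editor's note: an alternative to relocating the kernel with CONFIG_PHYSICAL_START is to leave the kernel at its defaults and shrink what Linux is told about instead. A heavily hedged sketch of the device-tree approach; whether U-Boot's boot-time fixup rewrites this node on the P1022RDK is an assumption to verify:

        memory {
            device_type = "memory";
            /* expose only the second 256 MB to Linux */
            reg = <0x10000000 0x10000000>;
        };

    If U-Boot does overwrite the node, a mem=256M boot argument can cap the size instead, though that limits the total rather than choosing which half is used; worth testing before committing to a relocated kernel image.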

  • Viability of Apache (MPM Worker), FastCGI PHP 4/5.2/5.3, and MySQL 5

    - by Adrian
    My server will be hosting numerous PHP web applications: Joomla, Drupal, some legacy (read: PHP 4) code, and other custom-built code inherited from clients. This will be a development machine used by a dozen or so web developers, and issues like fluctuating loads or particularly high load expectations are not important. Now, my question: are there any concerns I should know about when using Apache with MPM Worker, PHP 4 / PHP 5.2 / PHP 5.3 (all via FastCGI), and MySQL 5 (with a query cache of 64 MB)?

    I have not tested the various applications extensively, and I have only recently learned how to install PHP and use it via FastCGI (rather than mod_php, which in this case seemed impossible, considering the multiple versions of PHP and the desire to use MPM Worker over MPM Prefork). I have come to understand that there could be concerns regarding XCache and APC, namely non-thread-safety issues where data becomes corrupted and the capability to use MPM Worker becomes null and void. Is this a valid concern?

    I have been using my personal testing server (running Ubuntu Server Edition 10.04 in VirtualBox), which has 2 GB of RAM available to it. Here is the configuration used (the actual server will likely use a configuration more tailored to suit its purposes):

    Apache:

        Server version: Apache/2.2.14 (Ubuntu)
        Server built:   Apr 13 2010 20:22:19
        Server's Module Magic Number: 20051115:23
        Server loaded:  APR 1.3.8, APR-Util 1.3.9
        Compiled using: APR 1.3.8, APR-Util 1.3.9
        Architecture:   64-bit
        Server MPM:     Worker
          threaded:     yes (fixed thread count)
          forked:       yes (variable process count)

    Worker:

        <IfModule mpm_worker_module>
            StartServers          2
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadLimit          64
            ThreadsPerChild      25
            MaxClients          400
            MaxRequestsPerChild 2000
        </IfModule>

    PHP ./configure (PHP 4.4.9, PHP 5.2.13, PHP 5.3.2):

        --enable-bcmath \
        --enable-calendar \
        --enable-exif \
        --enable-ftp \
        --enable-mbstring \
        --enable-pcntl \
        --enable-soap \
        --enable-sockets \
        --enable-sqlite-utf8 \
        --enable-wddx \
        --enable-zip \
        --enable-fastcgi \
        --with-zlib \
        --with-gettext \

    Apache php-fastcgi-setup.conf:

        FastCgiServer /var/www/cgi-bin/php-cgi-5.3.2
        FastCgiServer /var/www/cgi-bin/php-cgi-5.2.13
        FastCgiServer /var/www/cgi-bin/php-cgi-4.4.9
        ScriptAlias /cgi-bin-php/ /var/www/cgi-bin/

    (See the sketch after this entry.)

    Read the article
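
    Editor's note: the opcode-cache worry mostly applies to mod_php under a threaded MPM; with FastCGI, each PHP version runs in its own separate processes, so APC/XCache state lives outside Apache's worker threads and MPM Worker is generally safe. What each vhost still needs is a mapping from .php to the right wrapper. A minimal sketch using mod_actions; the handler names here are made up for illustration:

        # vhost for an app that needs PHP 5.3
        AddHandler php53-fcgi .php
        Action     php53-fcgi /cgi-bin-php/php-cgi-5.3.2

        # a legacy vhost would instead declare:
        #   AddHandler php44-fcgi .php
        #   Action     php44-fcgi /cgi-bin-php/php-cgi-4.4.9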

  • Fresh install of SQL Server 2008 doesn't install Management Studio. Help!

    - by Jordan S
    OK, I am running Windows 7, 64-bit. I cleaned SQL Server 2005 completely off my system, leaving only SQL Compact Edition. I went to http://www.microsoft.com/downloads/details.aspx?FamilyID=01af61e6-2f63-4291-bcad-fd500f6027ff&displaylang=en and installed SQL Server 2008 Express Edition Service Pack 1. After the install, under my Start menu, all I have for SQL configuration tools are the Configuration Manager, Error and Usage Reporting, and the Install Center. I don't have SQL Management Studio.

    So I went to http://www.microsoft.com/downloads/details.aspx?FamilyID=08e52ac2-1d62-45f6-9a4a-4b76a8564a2b&displaylang=en and downloaded SQL Server 2008 Management Studio Express, but when I try to install it I get a warning saying "This program has known compatibility issues" and that I need to install SQL Server 2008 Service Pack 1. I thought that is what I installed. So I tried to continue running the install, but I then get an error message that says "Invoke or BeginInvoke cannot be called on a Form before it is opened"... How can I check whether Service Pack 1 is installed or not? What should I do? (See the sketch after this entry.)

    Read the article
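
    Editor's note: the engine reports its own patch level, so this can be checked without any of the missing tools. A quick sketch, runnable from sqlcmd:

        -- ProductLevel returns RTM / SP1 / ..., ProductVersion the build number
        -- (SQL Server 2008 SP1 builds start around 10.0.25xx)
        SELECT SERVERPROPERTY('ProductVersion') AS version,
               SERVERPROPERTY('ProductLevel')   AS service_pack,
               SERVERPROPERTY('Edition')        AS edition;

    The "Invoke or BeginInvoke" failure is a known quirk of that generation of the SSMS installer UI rather than a service-pack problem; simply re-running the setup executable often gets past it.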

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which is testing whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path which is rooted at the DocumentRoot instead.

    Configuration: I have this inside a <VirtualHost> block in my Apache 2.2.9 configuration:

        RewriteEngine on
        RewriteLog /tmp/rewrite.log
        RewriteLogLevel 5

        #push virtually everything through our dispatcher script
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    Diagnostics attempted: that rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it's firing even if a file does exist. If I remove the rule, I can request normal files just fine. But with the rule in place, these requests get directed to dispatch.php.

    Rewrite log trace: here's what I see in the rewrite.log:

        init rewrite engine with requested uri /test.txt
        applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt'
        RewriteCond: input='/test.txt' pattern='!-f' => matched
        RewriteCond: input='/test.txt' pattern='!-d' => matched
        rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m='
        split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m=
        local path result: /dispatch.php
        prefixed with document_root to /path/to/my/public_html/dispatch.php
        go-ahead with /path/to/my/public_html/dispatch.php [OK]

    So it looks to me like REQUEST_FILENAME is being presented as a path from the document root, rather than from the filesystem root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received... (See the sketch after this entry.)

    Read the article
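
    Editor's note: in server/VirtualHost context, mod_rewrite runs before the URI has been mapped to the filesystem, so %{REQUEST_FILENAME} still holds the URL path, which is exactly what the log shows. The usual fix is either to move the rules into a <Directory> block or .htaccess (where the mapping has already happened), or to prefix the document root yourself. A sketch of the latter:

        # server-context-safe existence tests
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]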

  • Can't set screen brightness in Gentoo system

    - by Real Yang
    My system: Linux gentoo 3.10.7-gentoo-r1. VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 240M] (rev a2).

    Output of xbacklight:

        No outputs have backlight property

    Output of xrandr:

        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 640 x 480, current 1280 x 720, maximum 1280 x 768
        default connected 1280x720+0+0 0mm x 0mm
           1280x720       0.0*
           1024x768      61.0
           800x600       61.0
           640x480       60.0
           1280x768       0.0

    Output of ls /proc/acpi:

        button/  event

    When I'm on kernel 3.8.13, I can change my brightness using xbacklight. I compiled 3.10.7-r1 using genkernel all. Before the upgrade I did get a notice about "compatibility issues for Nvidia users" from emerge, but I still don't know the details. Is there any way to let me set the brightness?

    Then I found an ebuild, app-laptop/nvidiabl-0.81, and tried to emerge nvidiabl, but got this message:

        Your kernel does not support FB_BACKLIGHT. To enable it you can enable any
        frame buffer with backlight control or nouveau. Note that you cannot use
        FB_NVIDIA with nvidia's proprietary driver.
        Please check to make sure these options are set correctly. Failure to do so
        may cause unexpected problems. Once you have satisfied these options, please
        try merging this package again.
        ERROR: app-laptop/nvidiabl-0.81::gentoo failed (pretend phase):
          Incorrect kernel configuration options
        Call stack:
          ebuild.sh, line 93:  Called pkg_pretend
          nvidiabl-0.81.ebuild, line 31:  Called linux-mod_pkg_setup
          linux-mod.eclass, line 559:  Called linux-info_pkg_setup
          linux-info.eclass, line 911:  Called check_extra_config
          linux-info.eclass, line 805:  Called die
        The specific snippet of code:
          die "Incorrect kernel configuration options"

    [SOLVED] I entered menuconfig again and checked Device Drivers -> Graphics support -> Support for frame buffer devices, and found this:

        <*> nVidia Framebuffer Support
        [*] Support for backlight control (NEW)

    What can I say. Recompiling...

    Read the article

  • Ubuntu not detecting CD drives

    - by Mirage
    I have installed Ubuntu 10.04 on my computer, which has six CD drives. Initially I had Windows Server 2008; I had to install the Marvell RAID SATA controller driver, and then Windows detected all six drives. Now Ubuntu is detecting only three drives, and I have not found Marvell drivers for Ubuntu, though I have the drivers for Windows 2008.

    My question: if I have a virtual machine inside Ubuntu using VMware Workstation and I install that driver in the VM, can the VM detect those six drives, or does the host have to detect the drives first for the VMs to use them?

    Ubuntu shows this from the terminal:

        *-cdrom:0
             description: DVD-RAM writer
             product: DVDRAM GSA-H10N
             vendor: HL-DT-ST
             physical id: 0.0.0
             bus info: scsi@0:0.0.0
             logical name: /dev/cdrom2
             logical name: /dev/cdrw2
             logical name: /dev/dvd2
             logical name: /dev/dvdrw2
             logical name: /dev/scd0
             logical name: /dev/sr0
             version: JL10
             capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
             configuration: ansiversion=5 status=nodisc
        *-cdrom:1
             description: DVD writer
             product: DVDRRW GWA-4164B
             vendor: HL-DT-ST
             physical id: 0.1.0
             bus info: scsi@0:0.1.0
             logical name: /dev/cdrom
             logical name: /dev/cdrw
             logical name: /dev/dvd
             logical name: /dev/dvdrw
             logical name: /dev/scd1
             logical name: /dev/sr1
             version: 1.01
             serial: [HL-DT-STDVDRRW GWA-4164B1.0105/05/12
             capabilities: removable audio cd-r cd-rw dvd dvd-r
             configuration: ansiversion=5 status=nodisc

    Is it detecting all the drives, or are those logical names just aliases for the same drive? (See the note after this entry.)

    Read the article
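
    Editor's note: the many logical names under each *-cdrom entry are aliases for one physical device, so the pasted portion of the lshw output shows two detected drives (/dev/sr0 and /dev/sr1), not six. A quick sketch for cross-checking what the kernel registered:

        # one column per optical drive the kernel knows about
        head -n 4 /proc/sys/dev/cdrom/info

        # one srN node per detected drive
        ls -l /dev/sr*

    As for the VM question: VMware Workstation can only pass through devices the host kernel has enumerated, so the missing drives have to appear on the host first.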

  • Is Subversion (SVN) supported on Ubuntu 10.04 LTS 32-bit?

    - by Chad
    I've set up Subversion on Ubuntu 10.04 but can't get authentication to work. I believe all my config files are set up correctly; however, I keep getting prompted for credentials on an SVN checkout, as if there were an issue with apache2 talking to svnserve. If I allow anonymous access, checkout works fine. Does anybody know if there is a known issue with Subversion and 10.04, or see an error in my configuration? Below is my configuration:

        # fresh install of Ubuntu 10.04 LTS 32bit
        sudo apt-get install apache2 apache2-utils -y
        sudo apt-get install subversion libapache2-svn subversion-tools -y
        sudo mkdir /svn
        sudo svnadmin create /svn/DataTeam
        sudo svnadmin create /svn/ReportingTeam

        #Setup the svn config file
        sudo vi /etc/apache2/mods-available/dav_svn.conf
        #replace file with the following.
        <Location /svn>
          DAV svn
          SVNParentPath /svn/
          AuthType Basic
          AuthName "Subversion Server"
          AuthUserFile /etc/apache2/dav_svn.passwd
          Require valid-user
          AuthzSVNAccessFile /etc/apache2/svn_acl
        </Location>

        sudo touch /etc/apache2/svn_acl
        #replace file with the following.
        [groups]
        dba_group = tom, jerry
        report_group = tom
        [DataTeam:/]
        @dba_group = rw
        [ReportingTeam:/]
        @report_group = rw

        #Start/Stop subversion automatically
        sudo /etc/init.d/apache2 restart
        cd /etc/init.d/
        sudo touch subversion
        sudo cat 'svnserve -d -r /svn' > svnserve
        sudo cat '/etc/init.d/apache2 restart' >> svnserve
        sudo chmod +x svnserve
        sudo update-rc.d svnserve defaults

        #Add svn users
        sudo htpasswd -cpb /etc/apache2/dav_svn.passwd tom tom
        sudo htpasswd -pb /etc/apache2/dav_svn.passwd jerry jerry

        #Test by performing a checkout
        sudo svnserve -d -r /svn
        sudo /etc/init.d/apache2 restart
        svn checkout http://127.0.0.1/svn/DataTeam /tmp/DataTeam

    (See the note after this entry.)

    Read the article
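
    Editor's note: two things stand out. First, svnserve is irrelevant to an http:// checkout; mod_dav_svn inside Apache serves those URLs by itself, so apache2 never talks to svnserve. Second, repeated credential prompts with Basic auth usually mean the password check itself is failing, and the -p flag to htpasswd stores plaintext passwords, which Apache on Linux does not accept for Basic auth. A hedged sketch of the usual fixes:

        # recreate the users with hashed (default) passwords instead of -p
        sudo htpasswd -cb /etc/apache2/dav_svn.passwd tom tom
        sudo htpasswd -b  /etc/apache2/dav_svn.passwd jerry jerry

        # make sure Apache can read the file
        sudo chmod 644 /etc/apache2/dav_svn.passwd

        # watch the real failure reason while retrying the checkout
        sudo tail -f /var/log/apache2/error.log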

  • FTP Server on Centos 5.8 - Transfer fails randomly

    - by Diego
    I have ProFTPD running on a brand new CentOS 5.8 server with Plesk, and its behaviour is inconsistent at best. I tried to transfer a directory from my PC, and every time I get a transfer failure on a random file. It's never the same one that fails; it just fails. Sometimes it's a .gif, sometimes it's a .css, sometimes it's a JPG. Of several hundred files, a dozen always fail for no apparent reason. The error that I get is the following:

        COMMAND:> [27/11/2012 11:43:52] STOR main_border.gif
        [27/11/2012 11:43:53] 500 Invalid command: try being more creative
        ERROR:>   [27/11/2012 11:43:53] Syntax error: command unrecognized.

    The above is just an example; the "command unrecognized" occurs with LIST and other commands as well. Here's the ProFTPD configuration, just in case:

        ServerName "ProFTPD"
        #ServerType standalone
        ServerType inetd
        DefaultServer on
        <Global>
          DefaultRoot ~ psacln
          AllowOverwrite on
        </Global>
        DefaultTransferMode binary
        UseFtpUsers on
        TimesGMT off
        SetEnv TZ :/etc/localtime
        Port 21
        Umask 022
        MaxInstances 30
        ScoreboardFile /var/run/proftpd/scoreboard
        TransferLog /usr/local/psa/var/log/xferlog
        #Change default group for new files and directories in vhosts dir to psacln
        <Directory /var/www/vhosts>
          GroupOwner psacln
        </Directory>
        # Enable PAM authentication
        AuthPAM on
        AuthPAMConfig proftpd
        IdentLookups off
        UseReverseDNS off
        AuthGroupFile /etc/group
        Include /etc/proftpd.include

    Note: the file /etc/proftpd.include is blank. The above is the default configuration set by Plesk 11. I don't know much about why it is that way; my knowledge of Linux system administration is very basic, and of ProFTPD a complete zero. Thanks in advance for the help.

    Update: issue experienced with both CuteFTP and FileZilla.

    Update: replaced ProFTPD with PureFTPd; the issue persists. Sometimes I get "command unrecognized", sometimes "failed to establish data connection". I'm starting to think that it could be a network issue, but I have completely zero knowledge of networking. (See the note after this entry.)

    Read the article
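
    Editor's note: the same random failures surviving a switch to PureFTPd support the network theory; the usual culprit is a NAT router, firewall, or FTP-inspecting middlebox mangling the control channel. Two hedged mitigations are to try FTPS, or to pin the passive data range and open it explicitly. A sketch for ProFTPD:

        # /etc/proftpd.conf: keep passive data connections in a known range
        PassivePorts 49152 65534

        # open that range plus the control port on the server firewall
        iptables -I INPUT -p tcp --dport 21 -j ACCEPT
        iptables -I INPUT -p tcp --dport 49152:65534 -j ACCEPT

    If the client sits behind DSL gear with an "FTP ALG" or stateful-inspection feature, disabling that feature on the router is often the actual fix.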

  • CentOS tftp server is broken

    - by Mike Pennington
    I'm trying to run tftpd from xinetd on CentOS 6; however, I can only tftp from localhost. I have a file /opt/tftpboot/fw.test.conf that I can retrieve if I tftp to localhost:

        [mpenning@localhost ~]$ tftp localhost
        tftp> get fw.test.conf
        tftp> quit
        [mpenning@localhost ~]$ ls
        fw.test.conf
        [mpenning@localhost ~]$

    However, I cannot receive this file if I tftp to eth1 on this server (the address on eth1 is 172.16.1.4):

        [mpenning@localhost ~]$ sudo tshark -i eth1 udp and host 172.16.1.5
        Running as user "root" and group "root". This could be dangerous.
        Capturing on eth1
          0.000000 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
          5.000133 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
         10.000184 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
         15.000297 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
         20.000331 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
        ^C5 packets captured
        [mpenning@localhost ~]$

    I have the following xinetd configuration:

        [root@localhost mpenning]# cat /etc/xinetd.d/tftp
        # default: off
        # description: The tftp server serves files using the trivial file transfer \
        #       protocol. The tftp protocol is often used to boot diskless \
        #       workstations, download configuration files to network-aware printers, \
        #       and to start the installation process for some operating systems.
        service tftp
        {
                socket_type = dgram
                protocol    = udp
                wait        = yes
                user        = root
                server      = /usr/sbin/in.tftpd
                server_args = -s /opt/tftpboot
                disable     = no
                per_source  = 11
                cps         = 100 2
                flags       = IPv4
        }
        [root@localhost mpenning]#

    (See the note after this entry.)

    Read the article
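
    Editor's note: the capture shows read requests arriving on eth1 with no replies at all, which is the classic signature of a host firewall dropping UDP 69 (or dropping the replies, which tftp sends from a random high port). A sketch of the usual checks on CentOS 6:

        # is iptables filtering the tftp port?
        iptables -L -n

        # temporarily allow the tftp control port
        iptables -I INPUT -p udp --dport 69 -j ACCEPT

        # tftp replies come from an ephemeral port; the conntrack helper
        # lets a stateful ruleset track them as related traffic
        modprobe nf_conntrack_tftp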

  • Linux Tuning for High Traffic JBoss Server with LDAP Binds

    - by Levi Stanley
    I'm configuring a high-traffic Linux server (Red Hat) and running into a limit I haven't been able to track down. I need to handle a sustained 300 requests per second through Nginx and JBoss. The point of this server is to run checks on a user's account when that user signs in. Each request goes through Nginx to JBoss (specifically TorqueBox, with JBoss AS7 and a Sinatra app) and then makes an LDAP request to bind that user and retrieve several attributes. It is during the bind that these errors occur. I'm able to reproduce this going directly to JBoss, so that rules out Nginx at least.

    I get a variety of error messages, though oddly JBoss stopped writing to the log file recently. It used to report errors about creating native threads. Now I just see "java.net.SocketException: Connection reset" and "org.apache.http.conn.HttpHostConnectException: Connection to http://my.awesome.server:8080 refused" as responses in JMeter. To the best of my knowledge, I have plenty of available file handles, processes, sockets, and ports, yet the issue persists.

    Unfortunately, I have very little experience tuning servers. I've found a couple of useful documents (the Ipsysctl tutorial 1.0.4 and Linux Tuning), but those documents are a bit over my head, and just entering the configuration described in Linux Tuning doesn't fix my issue. Here are the configuration changes I've tried (webproxy is the user that runs Nginx and JBoss):

    /etc/security/limits.conf:

        webproxy soft nofile 65536
        webproxy hard nofile 65536
        webproxy soft nproc 65536
        webproxy hard nproc 65536
        root soft nofile 65536
        root hard nofile 65536
        root soft nproc 65536
        root hard nofile 65536

    First attempt, /etc/sysctl.conf:

        sysctl net.core.somaxconn = 8192
        sysctl net.ipv4.ip_local_port_range = 32768 65535
        sysctl net.ipv4.tcp_fin_timeout = 15
        sysctl net.ipv4.tcp_keepalive_time = 1800
        sysctl net.ipv4.tcp_keepalive_intvl = 35
        sysctl net.ipv4.tcp_tw_recycle = 1
        sysctl net.ipv4.tcp_tw_reuse = 1

    Second attempt, /etc/sysctl.conf:

        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216
        net.core.netdev_max_backlog = 30000
        net.ipv4.tcp_congestion_control=htcp
        net.ipv4.tcp_mtu_probing=1

    Any ideas what might be happening here? Or better yet, are there some good documentation resources designed for beginners? (See the note after this entry.)

    Read the article
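
    Editor's note: limits.conf is applied by PAM at login time, so a JBoss instance started from an init script can still be running with the default nofile/nproc, which would match the earlier "unable to create native thread" errors. Worth verifying against the live process before more sysctl work; a sketch (assumes a reasonably recent kernel with /proc/<pid>/limits):

        # effective limits of the running JVM, not of your shell
        cat /proc/$(pgrep -f jboss | head -1)/limits

        # socket pressure: summary counters and per-state connection counts
        ss -s
        netstat -ant | awk '{print $6}' | sort | uniq -c

    If the limits turn out to be low, raising them directly in the init script (ulimit -n 65536 -u 65536 before launching Java) sidesteps the PAM question entirely.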

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu Server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command-line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan.

    The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7,820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.)

    Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know:

      - Why the massive start/stop count? Do I have some sort of configuration issue?
      - Could there be a background service that is causing trouble?
      - Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this?

    Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    And the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME       FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate  0x002f 200   200   051    Pre-fail Always  -           0
          4 Start_Stop_Count     0x0032 001   001   000    Old_age  Always  -           185875
          9 Power_On_Hours       0x0032 090   090   000    Old_age  Always  -           7820
         12 Power_Cycle_Count    0x0032 100   100   000    Old_age  Always  -           109
        193 Load_Cycle_Count     0x0032 118   118   000    Old_age  Always  -           246833
        194 Temperature_Celsius  0x0022 107   098   000    Old_age  Always  -           36

    As I generally prefer my drives to last more than a year, any advice is appreciated. (See the sketch after this entry.)

    Read the article
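
    Editor's note: apm = 127 permits aggressive power management on many laptop drives, and the Load_Cycle_Count of 246,833 alongside the huge Start_Stop_Count fits the familiar pattern of constant head unloading and spin-downs. A hedged sketch of the usual mitigation:

        # /etc/hdparm.conf: 254 effectively disables aggressive power
        # management; spindown_time = 0 stops idle spin-downs
        /dev/sda {
            apm = 254
            spindown_time = 0
        }

        # apply immediately, without a reboot
        sudo hdparm -B 254 /dev/sda

    Whether the drive honors APM settings varies by model, so re-checking the counters with smartctl a day later is the way to confirm the change took.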

  • rsnapshot schedule overlapping, help with backup schedule

    - by Znarkus
    Hello, I have the following configuration.

    rsnapshot.conf:

        interval  halfhourly  4
        interval  hourly      6
        interval  twohourly   12
        interval  daily       7
        interval  weekly      4

    crontab:

        0,30 * * * * /usr/bin/rsnapshot halfhourly >> /var/log/rsnapshot.halfhourly.log 2>&1
        5 * * * * /usr/bin/rsnapshot hourly >> /var/log/rsnapshot.hourly.log 2>&1
        10 */2 * * * /usr/bin/rsnapshot twohourly >> /var/log/rsnapshot.twohourly.log 2>&1
        15 3 * * * /usr/bin/rsnapshot daily >> /var/log/rsnapshot.daily.log 2>&1
        20 6 * * MON /usr/bin/rsnapshot weekly >> /var/log/rsnapshot.weekly.log 2>&1

    Only halfhourly is running correctly now. hourly spits out this error:

        rsnapshot encountered an error! The program was invoked with these options:
        /usr/bin/rsnapshot hourly
        ----------------------------------------------------------------------------
        ERROR: Lockfile /var/run/rsnapshot.pid exists and so does its process,
        can not continue

    To me it seems like my 5-minute gap between halfhourly and hourly is too small. Is this configuration crazy? I like having backups every thirty minutes; that will probably save my ass some day. Please help me make a decent backup schedule that doesn't clog up the system but creates frequent enough backups. Thank you. (See the sketch after this entry.)

    Read the article
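
    Editor's note: only the lowest interval (halfhourly) actually copies data; the others just rotate directories, but they still need the lock, so they must land when no halfhourly sync is running. A hedged sketch that keeps the rotations well clear of the :00/:30 syncs (times are an assumption to tune against how long the sync really takes):

        # halfhourly syncs at :00 and :30; rotations go mid-slot
        0,30 * * * *  /usr/bin/rsnapshot halfhourly >> /var/log/rsnapshot.halfhourly.log 2>&1
        15   * * * *  /usr/bin/rsnapshot hourly     >> /var/log/rsnapshot.hourly.log 2>&1
        45  */2 * * * /usr/bin/rsnapshot twohourly  >> /var/log/rsnapshot.twohourly.log 2>&1
        50   3  * * * /usr/bin/rsnapshot daily      >> /var/log/rsnapshot.daily.log 2>&1
        55   6  * * 1 /usr/bin/rsnapshot weekly     >> /var/log/rsnapshot.weekly.log 2>&1

    If a halfhourly run can ever exceed 15 minutes, the intervals will collide regardless of the offsets; in that case reducing the sync scope (or the frequency) is the real fix.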

  • Windows 8 Internet Explorer proxy automation script

    - by Stefan Bollmann
    Similar to this post, I'd like to change my proxy settings using a script; however, it fails. When I am behind the proxy, IE does not connect to the internet. Here I try the first solution from craig:

        function FindProxyForURL(url, host) {
            if (isInNet(myIpAddress(), "myactualip", "myactualsubnetip"))
                return "PROXY proxyasshowninpicture:portihavetouseforthisproxy_see_picture";
            else
                return "DIRECT";
        }

    Also this test, where isInNet should surely return true, does not help:

        function FindProxyForURL(url, host) {
            if (isInNet("myactualip", "myactualip", "myactualsubnetip"))
                return "PROXY proxyasshowninpicture:portihavetouseforthisproxy_see_picture";
            else
                return "DIRECT";
        }

    This script is saved as proxy.pac in c:\windows, and my configuration in LAN settings is: no automatically detected settings; yes, use automatic config script: file://c:/windows/proxy.pac; no proxy server. (I am not allowed to post screenshots.) So, what am I doing wrong?

    Update: when I set up a proxy manually in my LAN configuration (IE -> Internet Options -> Connections -> LAN Settings, check "Use a proxy server for your LAN", Address: <a pingable proxy>, Port: <portnr>), everything is fine for this environment. Now I try a simpler script like:

        function FindProxyForURL(url, host) {
            return "PROXY <pingable proxy>:<portnr>; DIRECT";
        }

    With the configuration described above, I am still not able to get through the proxy. (See the sketch after this entry.)

    Read the article
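
    Editor's note: two PAC pitfalls fit these symptoms. Newer Windows versions of IE are unreliable about loading PAC files from file:// URLs, so even a correct script may simply be ignored; serving the file over http:// from any small local web server is the dependable route. Separately, myIpAddress() can return the address of an unexpected interface (VPN or virtual adapters), which silently breaks isInNet() checks. A hedged sketch that decides by destination instead:

        // assumptions: 10.0.0.0/8 stands in for the real internal range,
        // proxy.example.net:8080 for the real proxy address
        function FindProxyForURL(url, host) {
            if (isPlainHostName(host) ||
                isInNet(dnsResolve(host), "10.0.0.0", "255.0.0.0"))
                return "DIRECT";
            return "PROXY proxy.example.net:8080; DIRECT";
        }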

  • Combining URL mapping and Access-Control-Allow-Origin: *

    - by ksangers
    I am in the process of migrating an old banner system to a new one, and in doing so I want to rewrite the old banner system's URLs to the new one. I load my banners via an AJAX request, and therefore I require Access-Control-Allow-Origin to be set to *. I have the following VirtualHost configuration:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName banner.studenten.net

            # we want to allow XMLHTTPRequests
            Header set Access-Control-Allow-Origin "*"

            RewriteEngine on
            RewriteMap bannersOldToNew txt:/home/user/banner.studenten.net/banner-studenten-net-to-ads-all4students-nl-map
            # check whether a zoneid exists in the query string
            RewriteCond %{QUERY_STRING} ^(.*)zoneid=([1-9][0-9]*)(.*)
            # make sure the requested banner has been mapped
            RewriteCond ${bannersOldToNew:%2|NOTFOUND} !=NOTFOUND
            # rewrite to ads.all4students.nl
            RewriteRule ^/ads/.* http://ads.all4students.nl/delivery/ajs.php?%1zoneid=${bannersOldToNew:%2}%3 [R]
            # else 404 or something

            ErrorLog ${APACHE_LOG_DIR}/banner.studenten.net-error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/banner.studenten.net-access.log combined
        </VirtualHost>

    My map file, /home/user/banner.studenten.net/banner-studenten-net-to-ads-all4students-nl-map, contains something like:

        # oldId newId
        140 11
        141 12
        142 13

    Based on the above configuration I was expecting the following:

        GET /ads/ajs.php?zoneid=140 HTTP/1.1
        Host: banner.studenten.net

        HTTP/1.1 302 Found
        ...
        Access-Control-Allow-Origin: *
        Location: http://ads.all4students.nl/delivery/ajs.php?zoneid=11

    But instead I get the following:

        GET /ads/ajs.php?zoneid=140 HTTP/1.1
        Host: banner.studenten.net

        HTTP/1.1 302 Found
        ...
        Location: http://ads.all4students.nl/delivery/ajs.php?zoneid=11

    Note the missing Access-Control-Allow-Origin header; this means the XMLHttpRequest is denied and the banner is not displayed. Any suggestions on how to fix this in Apache? (See the note after this entry.)

    Read the article
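
    Editor's note: Header set writes to the default onsuccess table, which Apache does not apply to the redirect responses the rewrite generates; the always condition covers those. A one-line sketch:

        # applied to redirects and error responses as well as 200s
        Header always set Access-Control-Allow-Origin "*"

    Note also that the browser follows the 302 cross-origin and then enforces CORS against the final response, so ads.all4students.nl has to send the header as well for the AJAX request to succeed end to end.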

  • CPU / Affinity mask problem in SQL 2005

    - by Robert Moir
    Hi folks, I'm having a problem with a SQL Server which was virtualised. The CPU mask was set on the physical host for some reason, and now advanced options are not available. So I need to reconfigure the CPU affinity mask settings, which are advanced options, so this is blocked because of the affinity mask issue.

    I've tried doing this from the SQL Server in single-user command-line mode; I've googled and found lots of people with similar problems but no real solution. I'm stumped. Any ideas? Sample commands and output from Query Analyzer below:

        sp_configure 'show advanced options', 1
        GO
        RECONFIGURE WITH OVERRIDE
        GO
        sp_configure 'affinity mask', 0x00000000
        GO
        RECONFIGURE
        GO
        -----------------------------------------
        Configuration option 'show advanced options' changed from 0 to 1.
        Run the RECONFIGURE statement to install.
        Msg 5832, Level 16, State 1, Line 1
        The affinity mask specified does not match the CPU mask on this system.
        Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51
        The configuration option 'affinity mask' does not exist, or it may be an
        advanced option.

    (See the sketch after this entry.)

    Read the article
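
    Editor's note: a recovery path that has worked for this chicken-and-egg situation is to start the instance with minimal configuration (-f), under which SQL Server ignores the stored affinity settings, then clear the value and restart normally. A hedged sketch for a default instance:

        net stop MSSQLSERVER
        net start MSSQLSERVER /f /m

        sqlcmd -S . -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE WITH OVERRIDE; EXEC sp_configure 'affinity mask', 0; RECONFIGURE WITH OVERRIDE;"

        net stop MSSQLSERVER
        net start MSSQLSERVER

    The /m (single-user) flag keeps other connections from taking the one available session while the value is fixed; passing the mask as plain integer 0 rather than 0x00000000 also avoids any literal-type surprises with sp_configure.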

  • SQL 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. The instances are running on different nodes of the same cluster, although I have tried having them both on the same node with identical results. SQL Mail operates perfectly on both instances. Database Mail operates perfectly on the newly installed instance. On the upgraded instance, Database Mail does not send any mail. Of course, I am not positive that the fact this instance was upgraded has anything to do with the issue, but it might.

    The configuration of my Database Mail profile and account looks identical to the functioning instance. In the configuration of the Alerts tab in the SQL Agent properties I have tried selecting both Database Mail and SQL Mail, to no avail. Both instances use the same SMTP server with the same authentication (domain account for the database engine).

    All messages sent via sp_send_dbmail, and those sent via the "test email" option, are visible in the sysmail_allitems queue and remain there as "unsent"; the send_status eventually changes to "failed". The only messages in sysmail_event_log are "mail queue stopped by login domain\myuser", "mail queue started by login domain/myuser" and "activation successful". Selecting from the external mail queue returns the same number of rows as sysmail_allitems. I have tried bouncing the agent, bouncing the entire instance, and moving the other, functioning instance to the other node in the cluster. Any thoughts? Thanks. (See the sketch after this entry.)

    Read the article
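
    Editor's note: on upgraded instances, two prerequisites are frequently left off even when the profile itself looks fine: the Database Mail surface-area option, and Service Broker in msdb (which drives the mail queue). A sketch of the usual checks:

        -- is the Database Mail extended stored procedure surface enabled?
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'Database Mail XPs', 1;     RECONFIGURE;

        -- is Service Broker on in msdb?
        SELECT is_broker_enabled FROM sys.databases WHERE name = 'msdb';
        -- if it returns 0 (requires exclusive access to msdb):
        -- ALTER DATABASE msdb SET ENABLE_BROKER;

    Items stuck as "unsent" and then "failed", with a queue that starts and stops cleanly, fit either cause; sysmail_event_log usually names the real error once the external process actually attempts delivery.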

  • How to hide subfolder when using Web.config for subdomains?

    - by mc-kay
    I have FTP access to my ASP.NET webspace (IIS 7) and I route subdomains with a Web.config in the web root folder. It looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="route www and emtpy requests" stopProcessing="true">
                  <match url=".*" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{HTTP_HOST}" pattern="^(www.)?example.com" />
                    <add input="{PATH_INFO}" pattern="^/www/" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="\www\{R:0}" />
                </rule>
                <rule name="route to blog" stopProcessing="true">
                  <match url=".*" />
                  <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                    <add input="{HTTP_HOST}" pattern="^blog.example.com$" />
                    <add input="{PATH_INFO}" pattern="^/blog/" negate="true" />
                  </conditions>
                  <action type="Rewrite" url="\blog\{R:0}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    As you can see, I have two folders in my root directory: "www" and "blog". When I enter "blog.example.com" everything works fine, but when I click a link I end up at "blog.example.com/blog". What can I do to prevent this behavior? (See the sketch after this entry.)

    Read the article
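
    Editor's note: the rewrite is internal, so the application sees itself living under /blog and generates links accordingly. A common companion rule redirects any external request that exposes the folder back to the clean host, so the browser never keeps /blog in the address bar. A sketch (placed before the existing rules):

        <rule name="hide blog folder" stopProcessing="true">
          <match url="^blog/(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^blog.example.com$" />
          </conditions>
          <action type="Redirect" url="http://blog.example.com/{R:1}"
                  redirectType="Permanent" />
        </rule>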

  • Campus VLAN Segmentation - By OS?

    - by Moduspwnens
    We've been thinking through re-arranging our network and VLAN configuration. Here's the situation: we already have our servers, VoIP phones, and printers on their own VLANs, but our problem lies with end-user devices. There are just too many to lump on the same VLAN without being hammered with broadcasts! Our current segmentation strategy has them split into VLANs like this:

      - Student iPads
      - Staff iPads
      - Student MacBooks
      - Staff MacBooks
      - Gaming devices
      - Staff (other)
      - Student (other)

    (Note that our network has many more iPads and MacBooks than most.) Since the primary reason we're splitting them is just to put them in smaller groups, this has been working for us, for the most part. However, it requires our staff to maintain access control lists (MAC addresses) of all devices belonging in these groups. It also has the unfortunate side effect of illogically grouping broadcast traffic. For example, using this setup, students on opposite ends of campus using iPads will share broadcasts, but two devices belonging to the same user (in the same room) will likely be on completely separate VLANs.

    I feel like there must be a better way of doing this. I've done a lot of research and I'm having trouble finding instances of this kind of segmentation being recommended. The feedback on the most relevant SO question seems to point toward VLAN segmentation by building/physical location. I feel like that makes sense because, logically, at least among miscellaneous end users, broadcasts will typically be intended for nearby devices.

    Are there other campuses or large-scale networks out there segmenting VLANs based on end-system OS? Is this a typical configuration? Would VLAN segmentation based on physical location (or some other criteria) be more effective?

    EDIT: I've been told that we will soon be able to dynamically determine device OS without maintaining access lists, although I'm not sure how much that affects the answers to these questions.

    Read the article

  • All websites migrated from server running IIS6 to IIS7

    - by Leah
    Hi, I hope someone will be able to help me with this. We have recently migrated all of our clients' sites to a new server running IIS7; all the sites were originally running on a server running IIS6. Ever since the migration, lots of our clients are reporting error messages. There seem to be quite a number of issues related to sending emails, and also the following error message, reported by several different clients:

        Server Error in '/' Application.
        --------------------------------------------------------------------------------
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm
        or cluster, ensure that <machineKey> configuration specifies the same
        validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.
        Description: An unhandled exception occurred during the execution of the current
        web request. Please review the stack trace for more information about the error
        and where it originated in the code.
        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed.
        If this application is hosted by a Web Farm or cluster, ensure that <machineKey>
        configuration specifies the same validationKey and validation algorithm.
        AutoGenerate cannot be used in a cluster.

    I have read elsewhere that this error can appear if a button is clicked before the whole page has finished loading. But as this error has now appeared on multiple sites, and only since the server migration, it seems to me that it must be something else. I was wondering if someone could tell me whether there is something specific that needs to be changed for .NET sites when they are moved from a server running IIS6 to a server running IIS7? I don't deal with the actual servers very much, so I'm afraid this is very much a grey area for me. Any help would be very much appreciated. (See the sketch after this entry.)

    Read the article
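
    Editor's note: after a migration this error usually means postbacks are straddling worker processes with auto-generated keys (overlapped recycling, multiple workers per pool, or a farm). Pinning an explicit machineKey in each site's web.config is the standard cure; a sketch, with placeholder values that must be replaced by freshly generated site-specific keys:

        <system.web>
          <!-- placeholders: generate real random hex keys per site,
               do not reuse these values -->
          <machineKey
            validationKey="0123456789ABCDEF...128-hex-chars...0123456789ABCDEF"
            decryptionKey="0123456789ABCDEF...48-hex-chars...89ABCDEF"
            validation="SHA1" decryption="AES" />
        </system.web>

    The email breakage is likely a separate issue (IIS7 no longer ships the IIS6 SMTP pickup by default), so it is worth checking which relay the sites' code expects.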

  • How to set only specific nginx server block into maintenance mode programmatically

    - by Ville Mattila
    I am looking for a solution to automate part of our application's deployment process. At the beginning of deployment, I would like to programmatically set the specified server into maintenance mode, and after the deployment has completed, remove the maintenance-mode flag from the nginx server. By maintenance mode, I mean that nginx should respond with HTTP status code 503 to all requests (with a possible custom page).

    I know how to make a server block respond with a 503 (see http://www.cyberciti.biz/faq/custom-nginx-maintenance-page-with-http503/), but the question is how to do this programmatically and most efficiently. Two options have come to mind.

    Option 1: at the beginning of the deployment process, write a maintenance file into the document root, and conditionally check for its existence in the nginx server config:

        server {
            if (-f $document_root/in_maintenance_mode) {
                return 503;
            }
        }

    This method carries some overhead, as the file's existence is checked on every request. Is it possible to check for the file only when the nginx config is loaded?

    Option 2: the deployment script replaces the whole nginx server configuration file with a maintenance version, and swaps it back at the end of the deployment. With this method, I am concerned that other automation processes, such as Puppet, may overwrite the maintenance configuration file. (See the sketch after this entry.)

    Read the article
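
    Editor's note: the per-request stat() in option 1 is cheap and is the pattern most deploy scripts settle on, precisely because option 2 needs a reload and does fight with config management. A sketch of option 1 with a custom page:

        server {
            # flip maintenance on/off by creating/removing this file
            if (-f $document_root/in_maintenance_mode) {
                return 503;
            }
            error_page 503 /maintenance.html;
            location = /maintenance.html {
                internal;
            }
        }

    The deploy script then just runs touch (to enter) and rm -f (to leave) on in_maintenance_mode; no reload is needed, and Puppet never sees a modified config file.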

  • Directory directive: AuthType None but still need an AuthProvider?

    - by Steffen Winkler
    For now I just need the server to let me download files from one specific folder (in my case I chose /opt/myFolder for that task). The distribution is Debian 6.0.

    Edit: Apache version is 2.4; according to the official documentation, the Order/Allow clauses are deprecated and should not be used any more. I'm an idiot: Apache version is 2.2.

    My directory directives in apache2.conf look like this:

        <IfModule dir_module>
            DirectoryIndex index.html index.htm index.php
        </IfModule>

        ServerRoot "/etc/apache2"
        DocumentRoot "/opt/myFolder"

        <Directory />
            Options FollowSymLinks
            AuthType None
            AllowOverride None
            Require all denie
        </Directory>

        <Directory "/opt/myFolder/*">
            Options FollowSymLinks MultiViews
            AllowOverride None
            AuthType None
            Require all allow
        </Directory>

    When I try to access a file inside that folder (http://myserver.de/aTestFile.zip), I get an Internal Server Error, and Apache writes the following error to its log:

        configuration error: couldn't check user. Check your authn provider!: /aTestFile.zip

    Why would I need an authn provider if I don't want any authentication? Also, I hope someone can explain what kind of authentication provider I'd need for that. Every time I search for these things I get pointed at people asking how to protect files or directories with passwords, or how to restrict access to some IP addresses, which doesn't really help me.

    OK, since I have Apache version 2.2, here is the error I get when using the Order/Deny/Allow commands instead of AuthType/Require:

        Invalid command 'Order', perhaps misspelled or defined by a module not
        included in the server configuration.

    (See the sketch after this entry.)

    Read the article
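
    Editor's note: the two errors point at different versions, so it's worth confirming with apache2 -v which one is actually running. "Require all granted/denied" only exists on 2.4; on 2.2 a bare Require drags in the user-authentication machinery, which is exactly the "check your authn provider" complaint. Conversely, "Invalid command 'Order'" is what 2.4 reports when mod_access_compat isn't loaded. Note also the typo "denie" for "denied" in the posted config, which would matter on 2.4. A sketch of the open-access stanza for each version:

        # Apache 2.2
        <Directory "/opt/myFolder">
            Options FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>

        # Apache 2.4
        <Directory "/opt/myFolder">
            Options FollowSymLinks MultiViews
            AllowOverride None
            Require all granted
        </Directory>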

  • Error while installing boost_1_54

    - by Farhat
    On trying to install Boost, I get this error during the configuration checks. Googling did not give any pointers.

        [root@heracles boost_1_54_0]# ./b2 install
        Performing configuration checks
            - 32-bit                   : no  (cached)
            - 64-bit                   : yes (cached)
            - arm                      : no  (cached)
            - mips1                    : no  (cached)
            - power                    : no  (cached)
            - sparc                    : no  (cached)
            - x86                      : yes (cached)
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - has_icu builds           : no  (cached)
        warning: Graph library does not contain MPI-based parallel components.
        note: to enable them, add "using mpi ;" to your user-config.jam
            - zlib                     : yes (cached)
            - iconv (libc)             : yes (cached)
            - icu                      : no  (cached)
            - icu (lib64)              : no  (cached)
            - compiler-supports-ssse3  : yes (cached)
            - compiler-supports-avx2   : no  (cached)
            - gcc visibility           : yes (cached)
            - long double support      : yes (cached)
        warning: skipping optional Message Passing Interface (MPI) library.
        note: to enable MPI support, add "using mpi ;" to user-config.jam.
        note: to suppress this message, pass "--without-mpi" to bjam.
        note: otherwise, you can safely ignore this message.
        error: No best alternative for libs/coroutine/build/allocator_sources
            next alternative: required properties: <link>static <target-os>windows <threading>multi
                not matched
            next alternative: required properties: <link>static <segmented-stacks>on <threading>multi
                not matched
            next alternative: required properties: <link>static <threading>multi
                not matched
            - zlib                     : yes (cached)

    How can the alternative for allocator_sources be located? Thanks. (See the sketch after this entry.)

    Read the article
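
    Editor's note: the "No best alternative" error is Boost.Coroutine failing to pick a build variant; as the quoted alternatives show, every variant requires at least <link>static <threading>multi, which the current build request doesn't satisfy. Two hedged ways through:

        # if coroutine isn't needed, exclude it (and its context dependency)
        ./b2 --without-coroutine --without-context install

        # or build variants that coroutine's alternatives can match
        ./b2 link=static threading=multi install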

  • Can I use a single SSLCertificateFile for all my VirtualHosts instead of creating one of it for each VirtualHost?

    - by user65567
    I have many Apache VirtualHosts, for each of which I use a dedicated SSLCertificateFile. This is a configuration example of one VirtualHost:

        <VirtualHost *:443>
            ServerName subdomain.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/users/public"
            RackEnv development
            <Directory "/Users/<my_user_name>/Sites/users/public">
                Order allow,deny
                Allow from all
            </Directory>

            # SSL Configuration
            SSLEngine on
            # Self-signed certificates
            SSLCertificateFile /private/etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /private/etc/apache2/ssl/server.key
            SSLCertificateChainFile /private/etc/apache2/ssl/ca.crt
        </VirtualHost>

    Since I am maintaining several Ruby on Rails applications using the Passenger preference pane, this is part of the apache2 httpd.conf file:

        <IfModule passenger_module>
            NameVirtualHost *:80
            <VirtualHost *:80>
                ServerName _default_
            </VirtualHost>
            Include /private/etc/apache2/passenger_pane_vhosts/*.conf
        </IfModule>

    Can I use a single SSLCertificateFile for all my VirtualHosts (I have heard of wildcards) instead of creating one for each VirtualHost? If so, how can I change the files listed above? (See the sketch after this entry.)

    Read the article
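
    Editor's note: yes. A single certificate can cover every name under one domain if its common name is a wildcard, and every SSL vhost can then point at the same files. A sketch for a self-signed development wildcard; the file names and domain are assumptions following the poster's layout:

        # generate a self-signed wildcard key + certificate
        openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
            -keyout /private/etc/apache2/ssl/wildcard.key \
            -out    /private/etc/apache2/ssl/wildcard.crt \
            -subj   "/CN=*.domain.localhost"

        # every *:443 vhost then shares the same two lines
        SSLCertificateFile    /private/etc/apache2/ssl/wildcard.crt
        SSLCertificateKeyFile /private/etc/apache2/ssl/wildcard.key

    A wildcard matches one label level only (*.domain.localhost covers app.domain.localhost but not a.b.domain.localhost), and since it's self-signed, browsers still need to trust it manually; the SSLCertificateChainFile line can be dropped for this setup.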
