Search Results

Search found 17267 results on 691 pages for 'dynamic ip'.

  • Drobo FS vs Lime Technology unRAID vs FreeNAS

    - by elluca
    I had already decided to buy a Drobo FS until I found these two reviews: http://www.digitalversus.com/data-robotics-drobo-fs-p889_9543_487.html and http://www.digitalversus.com/lime-technology-unraid-p889_8992_473.html. The two cons against the Drobo for me are loudness and price. What disadvantages does unRAID have against the Drobo FS? Does it offer the same ease of use: swapping drives on the fly, extending capacity simply by plugging in new drives, notification of drive errors, protection against disk failure, dynamic sizing of "partitions", better or worse effective capacity, and so on? Which is more secure? Can I simply replace a failed drive with a new one on unRAID? What happens if my PC fails, say the CPU overheats? Since I have a complete PC that is going to be replaced, I would only have to pay for the unRAID software. I am going to use the NAS for a music library (how well does it integrate with iTunes?), a picture library, a movie library, and development (I need to be able to use Time Machine). I am going to use this NAS with a MacBook Pro. My current disks: 2x 500 GB, 1x 1.5 TB, 1x 2 TB. On a Drobo FS I would have 2.26 TB of space. What would it be on unRAID? Is FreeNAS also an alternative?
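
    A rough capacity estimate for the same four disks under unRAID, assuming its usual single-parity layout in which the largest disk is dedicated to parity (worth checking against the current unRAID documentation):

        parity: 2 TB (largest disk, not usable for data)
        data:   0.5 TB + 0.5 TB + 1.5 TB = 2.5 TB usable (roughly 2.3 TiB)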

  • Explorer.EXE ordinal 423 not found in urlmon.dll after updates/IE8 install

    - by Zoot
    Setting up a brand new Dell Optiplex 980 with Windows XP SP3, and everything started up fine on the first boot. My first task was to install system updates, including IE8 and WGA. After the required reboot after installing updates, I now get this error message: Explorer.EXE Ordinal not found. The ordinal 423 could not be located in the dynamic link library urlmon.dll Per my cursory Google search, this forum thread places the blame squarely on IE8. The solution provided is to enter safe mode and remove IE8. Unfortunately, when I press F8 to choose to boot safe mode, I only have the option of "Windows XP SP3 Professional" and no safe mode options. Any other ideas? Thanks in advance. FYI, I can get to the Windows Task Manager by holding down Control-Alt-Delete, but programs don't seem to run properly if you select them. I tried chatting with Dell Support, and we tried to initiate the system restore at c:\windows\system32\restore\rstrui.exe, but that had a similar "ordinal 423 not found in urlmon.dll" error.

  • Cisco IOS policy route for router originated VPN traffic

    - by Paul
    We have a Cisco IOS router with two DSL connections. One of them is intended for general traffic (ADSL), the other for VPN links (BDSL) and various other traffic. So the default route is the ADSL link, and we have a combination of static routes for the VPN traffic and policy routes for the other traffic types that should go out the BDSL link. For site-to-site traffic this is fine: we just static-route the public IPs and remote networks out of the BDSL line. The policy-based routing works fine for any internal traffic that matches an ACL. The problem is that there are now remote VPN sites originating from dynamic addresses, so we cannot use static routes. The replies to incoming ISAKMP requests are following the default route out of the ADSL (despite there being no crypto map on that interface). I want to route the outgoing VPN traffic out of the BDSL. I have tried adding udp/500 and ESP, to and from, to the route-map ACL that pushes traffic out of the BDSL line, but it doesn't match, presumably because the route-map happens earlier than the IPsec processing. Any ideas how I can do this? IOS version: 12.4.13T.
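
    A minimal sketch of the kind of configuration being described; the ACL number, route-map name and next-hop address below are made up for illustration, and the point to verify on 12.4 is that ordinary policy routing only affects transit traffic, while traffic the router generates itself (such as its ISAKMP replies) is only policy-routed once local policy is enabled:

        ! hypothetical ACL matching IKE and ESP
        access-list 150 permit udp any any eq isakmp
        access-list 150 permit esp any any
        !
        route-map VPN-OUT permit 10
         match ip address 150
         set ip next-hop 203.0.113.1
        !
        ! interface route-maps only see transit packets; router-originated
        ! traffic needs the local policy hook
        ip local policy route-map VPN-OUT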

  • Seeking web-based FTP client for very large file upload

    - by Paul M. Nguyen
    I have looked around for these for some time... the limits imposed by the web server and/or the dynamic programming environment (e.g. PHP) are far too restrictive for the application I'm working on. We need to be able to move large graphics and video files to and from clients (ranging from tens of MB to a few GB in a single file). Plain FTP with a proper desktop client will do the trick, and we're hosting this in Amazon EC2 with EBS. User management will be done from the office via Webmin, and users are chroot-jailed into their home dir by proftpd. net2ftp will work for many clients, but we often need to move single files that approach 1 GB or exceed 2-3 GB, which is way out of the range of any HTTP-based uploader. So we turn to Java or Flash: can they establish an FTP connection from within the web browser and grab a huge file? There are licensed applets and such out there, but none seem convincing. Again, I'm looking for some code that can speak FTP and read (and write?) the local disk, that is delivered in a web browser, and that can move single files of 2 GB+. The reason for having a web-based interface to FTP is to skip the software installation step for our clients. I will consider proper desktop client software as long as it's "portable", covers at least Windows and Mac, and can be easily configured by non-technical users in a hurry.

  • Apache2 - rewrite a bunch of specified pathname URLs to one URL

    - by James Nine
    I need to rewrite a bunch of URLs (about 100 or so) for SEO purposes, and there may be more added in the future (probably another 50-100 later on). I need a flexible way of doing this, and so far the only way I can think of is to edit the .htaccess file using the rewrite engine. For example, I have a bunch of URLs like this (please note that the query string is irrelevant and dynamic; it could be anything, and I am only using these as examples. I am only focusing on the pathname: the part between the hostname and the query string):

        http://example.com/seo_term1?utm_source=google&utm_medium=cpc&utm_campaign=seo_term
        http://example.com/another_seo_term2?utm_source=facebook&utm_medium=cpc&utm_campaign=seo_term
        http://example.com/yet_another_seo_term3?utm_source=example_ad_network&utm_medium=cpc&utm_campaign=seo_term
        http://example.com/foobar_seo_term4
        http://example.com/blah_seo_term5?test=1
        etc...

    And they are all being rewritten to (for now):

        http://example.com/

    What's the most efficient/effective way of doing this so that I can add more terms in the future? One solution I came across is to do this (in the .htaccess file):

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ / [NC,QSA]

    However, the problem with this solution is that even invalid URLs (such as http://example.com/blah) will be rewritten to http://example.com instead of returning a 404 (which is what they are supposed to do). I'm still trying to figure out how all this works, and the only way I can think of is to write 100 more RewriteCond statements (such as RewriteCond %{REQUEST_URI} =/seo_term1 [NC,OR]) before the RewriteRule directive. For example:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} =/seo_term1 [NC,OR]
        RewriteCond %{REQUEST_URI} =/another_seo_term2 [NC,OR]
        RewriteCond %{REQUEST_URI} =/yet_another_seo_term3 [NC,OR]
        RewriteCond %{REQUEST_URI} =/foobar_seo_term4 [NC,OR]
        RewriteCond %{REQUEST_URI} =/blah_seo_term5 [NC]
        RewriteRule ^(.*)$ / [NC,QSA]

    But that doesn't seem very efficient to me. Is there a better way?
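
    One possible way to keep the rule set manageable, shown here only as a sketch with the terms and flags as assumptions rather than a tested configuration, is to collapse the per-term conditions into a single alternation:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # one condition listing every SEO term, instead of one RewriteCond per term
        RewriteCond %{REQUEST_URI} ^/(seo_term1|another_seo_term2|yet_another_seo_term3|foobar_seo_term4|blah_seo_term5)$ [NC]
        RewriteRule ^ / [QSA,L]

    A RewriteMap keyed on the term list would scale further, but that directive can only be defined in the server or virtual-host configuration, not in .htaccess.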

  • asterisk extensions.conf & sip.conf

    - by Josh
    I'm trying to get my dialplan to work. When I call, the only thing I get is a dial tone to enter an extension; no Background(thanks-calling) is played. When extension 123 is dialed, a busy signal is triggered and the Asterisk CLI freezes. Any help will be appreciated. Conf files below.

        ; PSTN on sip.conf
        [pstn]
        type=friend
        host=dynamic
        context=pstn
        username=pstn
        secret=password
        nat=yes
        canreinvite=no
        dtmfmode=rfc2833
        qualify=yes
        insecure=port,invite
        disallow=all
        allow=ulaw

        ; PSTN on extensions.conf
        [pstn]
        exten => s,1,Answer
        exten => s,2,Wait,2
        exten => s,4,DigitTimeout,5
        exten => s,5,ResponseTimeout,10
        exten => s,6,Background(thanks-calling)
        exten => 0,1,Goto(incoming,123,1)

        ; (Member Services)
        [incoming]
        exten => 123,1,NoOP(${CALLERID})   ; show the caller ID info in the console
        exten => 123,n,Ringing()
        exten => 123,n,Answer()
        exten => 123,n,Playback(silence/1)
        exten => 123,n,Playback(connecting1)
        exten => 123,n,Wait(3)
        exten => 123,n,Dial(SIP/line1,60)
        exten => 123,n,Congestion
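
    One thing that may be worth checking, offered as a guess rather than a confirmed diagnosis: the s extension jumps from priority 2 to priority 4, and classic numbered dialplans stop executing when the next sequential priority is missing, which would explain why Background(thanks-calling) never plays. A sketch of the same extension using consecutive 'n' priorities, assuming an Asterisk version where the old DigitTimeout/ResponseTimeout applications are replaced by the TIMEOUT() dialplan function:

        [pstn]
        exten => s,1,Answer()
        exten => s,n,Wait(2)
        exten => s,n,Set(TIMEOUT(digit)=5)
        exten => s,n,Set(TIMEOUT(response)=10)
        exten => s,n,Background(thanks-calling)
        exten => 0,1,Goto(incoming,123,1)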

  • apache with php fastcgi keeps going down

    - by Josh Nankin
    I have an apache2 server configured with MPM worker and PHP FastCGI. Lately the Apache logs have been telling me that MaxClients is being reached frequently, even though it's already pretty high. My server is now constantly going down, and I see a bunch of lines like this in the log:

        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: comm with (dynamic) server "/var/local/fcgi/php-cgi-wrapper.fcgi" aborted: (first read) idle timeout (20 sec)
        [Sun Mar 06 04:25:40 2011] [error] [client 50.16.83.115] FastCGI: incomplete headers (0 bytes) received from server "/var/local/fcgi/php-cgi-wrapper.fcgi"

    I can see that my php-cgi processes are pretty large (about 70 MB on average). Here's my Apache configuration for MPM worker:

        KeepAlive On
        KeepAliveTimeout 2
        <IfModule mpm_worker_module>
            StartServers          5
            MinSpareThreads      10
            MaxSpareThreads      10
            ThreadLimit          64
            ThreadsPerChild      10
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>

    Here's my FastCGI Apache configuration:

        <IfModule mod_fastcgi.c>
            # One shared PHP-managed fastcgi for all sites
            Alias /fcgi /var/local/fcgi
            # IMPORTANT: without this we get more than one instance
            # of our wrapper, which itself spawns 20 PHP processes, so
            # that would be Bad (tm)
            FastCgiConfig -idle-timeout 20 -maxClassProcesses 1
            <Directory /var/local/fcgi>
                # Use the + so we don't clobber other options that
                # may be needed. You might want FollowSymLinks here
                Options +ExecCGI
            </Directory>
            AddType application/x-httpd-php5 .php
            AddHandler fastcgi-script .fcgi
            Action application/x-httpd-php5 /fcgi/php-cgi-wrapper.fcgi
        </IfModule>

    Here's my FastCGI wrapper:

        #!/bin/sh
        PHPRC="/etc/php5/apache2"
        export PHPRC
        PHP_FCGI_CHILDREN=8
        export PHP_FCGI_CHILDREN
        exec /usr/bin/php-cgi

    Any help would be very much appreciated!
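
    A rough back-of-the-envelope check based on the figures above, treating the quoted 70 MB as the resident size of each php-cgi process and ignoring shared pages (so this overstates the real footprint):

        PHP pool:  PHP_FCGI_CHILDREN (8) x ~70 MB  =  ~560 MB
        Apache:    MaxClients 20 worker threads; each request waiting on a stalled
                   PHP child holds a thread for up to the 20 s FastCGI idle timeout,
                   so the thread pool can fill up long before CPU or RAM is exhausted.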

  • Need help with MS Access 07 & Reports

    - by Moe
    Hey there, I'm finding it difficult to get MS Access reporting to show what I'd like. What I'm trying to do: a) my database stores a URL (an external HTTP link) to a .jpeg file, and I'd like to use that URL to display the image on the report sheet. I have tried using 'Control Source' on the data panel, but with no success. Is there any way I can get dynamic images to show up for each record? Also, I have a couple of related tables. One defines values, for example: DefinePets('petID', 'Name of Pet'). The other one links the main table with the 'DefinePets' table, e.g.: connect('petID', 'mainID', 'extraField'). I'd like my report to go into the "connect" table where the currently viewed record's value = mainID, then find petID and return the name of the pet. There is a many-to-many link between DefinePets and the main table (connect is the table joining them up). Or is that too much to ask from a simple package like Access? Thanks.

  • Postfix: a lot of 'Relay access denied' errors in maillog

    - by tester3
    I'm on CentOS 6.5 with Postfix/Dovecot and some virtual domains. Postfix works fine, but I've got a lot of messages like this in my maillog:

        NOQUEUE: reject: RCPT from 1-160-127-12.dynamic.hinet.net[1.160.127.12]: 454 4.7.1 : Relay access denied; from= to= proto=SMTP

    I've tried to close port 25 with iptables; when I do so I get no such messages, but my mail system stops working correctly and can't receive mail from other hosts. Please help! My postconf -n:

        alias_database = $alias_maps
        alias_maps = hash:/etc/postfix/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = all
        inet_protocols = ipv4
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        message_size_limit = 20971520
        mydestination = localhost.$mydomain, localhost
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains = *
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_tls_cert_file = /etc/pki/tls/certs/example.com.crt
        smtp_tls_key_file = /etc/pki/tls/private/example.com.key
        smtp_tls_loglevel = 1
        smtp_tls_session_cache_database = btree:/etc/postfix/smtp_tls_session_cache
        smtp_tls_session_cache_timeout = 3600s
        smtp_use_tls = yes
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain = example.com
        smtpd_sasl_path = /var/run/dovecot/auth-client
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_tls_security_options = $smtpd_sasl_security_options
        smtpd_sasl_type = dovecot
        smtpd_tls_cert_file = /etc/pki/tls/certs/example.com.crt
        smtpd_tls_key_file = /etc/pki/tls/private/example.com.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:/etc/postfix/smtpd_tls_session_cache
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        soft_bounce = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 550
        virtual_alias_maps = hash:/etc/postfix/vmail_aliases
        virtual_gid_maps = static:2222
        virtual_mailbox_base = /var/vmail
        virtual_mailbox_domains = hash:/etc/postfix/vmail_domains
        virtual_mailbox_maps = hash:/etc/postfix/vmail_mailbox
        virtual_minimum_uid = 2222
        virtual_transport = virtual
        virtual_uid_maps = static:2222

    Will attach master.cf or anything else if needed.

  • HAProxy appears to be "randomly" switching between the ACL-matched backend and the default backend

    - by Xoor
    I have HAProxy acting as a proxy in front of: an NGinx instance, and an in-house load balancer in front of multiple dynamic services exposed with socket.io (websockets). My problem is that from time to time my connections are proxied correctly to my socket.io cluster, and then randomly HAProxy falls back to routing to NGinx, which is obviously annoying and meaningless since NGinx isn't meant to handle the request. This happens when requesting URLs of the format http://mydomain.com/backends/*, and there's an ACL in the HAProxy config to match the '/backends/*' path. Here's a simplified version of my HAProxy config (extra unrelated entries removed and names changed):

        global
            daemon
            maxconn 4096
            user haproxy
            group haproxy
            nbproc 4

        defaults
            mode http
            timeout server 86400000
            timeout connect 5000
            log global

        # this frontend interface receives the incoming http requests
        frontend http-in
            mode http
            # process all requests made on port 80
            bind *:80
            # set a large timeout for websockets
            timeout client 86400000

            # Default Backend
            default_backend www_backend

            # Loadfire (socket cluster)
            acl is_loadfire_backends path_beg /backends
            use_backend loadfire_backend if is_loadfire_backends

        # NGinx backend
        backend www_backend
            server www_nginx localhost:12346 maxconn 1024

        # Loadfire backend
        backend loadfire_backend
            option forwardfor    # This sets X-Forwarded-For
            option httpclose
            server loadfire localhost:7101 maxconn 2048

    It's really quite confusing that the behaviour appears to be "random"; being hard to reproduce, it's hard to debug. I appreciate any insight on this.
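
    One guess worth ruling out, assuming an HAProxy 1.4-era build: with keep-alive connections in the default tunnel mode, only the first request on a connection is inspected, so later requests on the same connection reuse whatever backend was chosen first, which could look exactly like random switching. Forcing per-request processing would make the ACL apply to every request; a sketch follows (keep-alive and WebSocket handling differ between versions, so verify against the docs for the version in use):

        defaults
            mode http
            # close the server side after each request so every request
            # passes through the frontend ACLs again
            option http-server-close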

  • IIS7 - how to place an application in a folder inside another application's web site

    - by Nir
    I have a static web site with a blog (an ASP.NET application); the blog is in a subdirectory of the web site, so example.com/, example.com/Something.htm, example.com/folder/somefile.htm, etc. are all static files, while example.com/blog, example.com/blog/categories.aspx, example.com/blog/2011/11/09/post-name.aspx, etc. all go to the blog app. I'm upgrading the static part of the web site to a dynamic site (also an ASP.NET application), and the blog is incompatible with the new app (the app needs handlers and modules loaded in web.config that don't work with the blog). Also, I have to keep all the old URLs the same, so I can't move the blog to a subdomain or the new app to a folder, and the blog generates links based on its folder, so clever redirection tricks wouldn't work. Is there a way to place an ASP.NET application in a folder inside another application (either as a real or virtual folder) so that the root web.config settings don't apply to the application folder? Or some other trick I didn't think of? The system is running IIS7 on Windows Server 2008 64-bit, and I have full control over the server's configuration. I can't modify the blog's source code, but I can edit its web.config and other configuration. I can modify the source of the new application, but I can't make it compatible with the blog (most of its usefulness comes from a 3rd party library that is not compatible with the blog). The blog is an ASP.NET 3.5 WebForms application; the new root application is an ASP.NET 4.0 MVC application.
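
    A sketch of one common approach, assuming the blog stays a child IIS application under the new root site: wrap the root application's own configuration in a <location> element with inheritInChildApplications="false" so those sections are not inherited by the child application. The section contents below are placeholders; the exact sections to wrap depend on what the new app actually defines:

        <!-- root web.config of the new application -->
        <configuration>
          <location path="." inheritInChildApplications="false">
            <system.web>
              <!-- settings the blog must not inherit -->
            </system.web>
            <system.webServer>
              <!-- IIS7 integrated-pipeline handlers/modules for the new app only -->
            </system.webServer>
          </location>
        </configuration>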

  • Can't install MailParse on cpanel server

    - by Tom
    Hi, I've got a Linux VPS running CentOS 5.5 (cPanel/WHM). I installed MailParse via the Module Installers section in WHM, and it did install it. The end of the setup log:

        running: make INSTALL_ROOT="/root/tmp/pear-build-root/install-mailparse-2.1.5" install
        Installing shared extensions: /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/
        running: find "/root/tmp/pear-build-root/install-mailparse-2.1.5" | xargs ls -dils
        508718 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5
        508745 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr
        508746 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib
        508747 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php
        508748 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions
        508749 4 drwxr-xr-x 2 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626
        508744 196 -rwxr-xr-x 1 root root 193502 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so
        Build process completed successfully
        Installing '/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so'
        install ok: channel://pecl.php.net/mailparse-2.1.5
        Extension mailparse enabled in php.ini
        The mailparse.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626

    Now, when I try to use mailparse functions from PHP I get the following error:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so: cannot open shared object file: No such file or directory in Unknown on line 0

    What should I do?
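
    The log suggests the module landed under /usr/lib/php/extensions/... while PHP is trying to load it from /usr/local/lib/php/extensions/.... A sketch of how one might confirm the mismatch and work around it, using the paths taken from the log above (back up php.ini before editing anything):

        # which directory does the running PHP actually scan for extensions?
        php -i | grep extension_dir

        # is the freshly built module where the installer left it?
        ls -l /usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so

        # one workaround: copy the module into the directory PHP scans
        cp /usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so \
           /usr/local/lib/php/extensions/no-debug-non-zts-20090626/

        # alternatively, point the extension line in php.ini at the full path:
        #   extension=/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so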

  • Distributed File Systems.

    - by GruffTech
    So, I've been reading several articles around Server Fault as well as Google (for example, this link). My requirements are very similar to the link above, however I'd also like to have dynamic, or at least resizeable, file volumes, so that if necessary I can add 4-5 servers to the pool and then expand the volume. Are there any distributed file systems that support that, to save me some time? Thanks! LustreFS will be my next test cluster to build. GlusterFS: I've built a 3-machine test GlusterFS cluster, however I quickly became aware of several limitations that it doesn't seem to make clearly public. One: I can't seem to resize a volume. Once a volume is created, it's done, which seems pointless; why have a fully scalable file system if I can't scale a volume? So maybe I'm doing something wrong, I'm not sure. Amazon S3, while it has the cheapest startup cost, adds too much cost when broken down to per client per month, so it's out; building my own system, prorated over several years with no bandwidth costs, is significantly cheaper. MogileFS isn't an option, as we'd like this server to be a SAN replacement for storing tons of media from a multitude of systems, which for us means it needs to be POSIX compliant so it can be remotely mounted via NFS or CIFS.
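
    On the GlusterFS resize point specifically, a sketch of what growing a volume looks like with the gluster CLI; this assumes a release that ships that CLI (older releases managed volumes purely through config files), and the volume and brick names are purely illustrative:

        # add a brick from a new server to an existing volume
        gluster volume add-brick myvolume newserver:/export/brick1

        # spread existing data across the new brick
        gluster volume rebalance myvolume start
        gluster volume rebalance myvolume status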

  • Can I format a Veritas cluster shared volume from windows?

    - by spaghettidba
    We have a Microsoft failover cluster with dynamic disks managed by Veritas Storage Foundation. Today the sysadmins added a new disk for SQL Server, but the cluster (allocation unit) size on the volume was wrong, so I issued a quick format to change it. The disk volume failed, the SQL Server group failed as well, and the cluster became unresponsive. After some minutes I managed to fail over to a passive node. The SAN admins say it's my fault because I shouldn't have formatted the disk from the Windows format applet; I should have used Veritas Enterprise Administrator instead. Can a format operation bring a whole cluster group offline this way? Relevant error messages, from the event log:

        The cluster resource host subsystem (RHS) stopped unexpectedly. An attempt will be made to restart it. This is usually due to a problem in a resource DLL. Please determine which resource DLL is causing the issue and report the problem to the resource vendor.

    From the cluster.log:

        ERR [RCM] rcm::RcmResControl::DoResourceControl: ERROR_RESOURCE_CALL_TIMED_OUT(5910)' because of 'Control(STORAGE_GET_DISK_INFO_EX) to resource 'NameOfTheDiskGroup' timed out.'

    Excerpt from Symantec's Veritas documentation:

        Note: Before manually creating the resource, you must format the cluster-shared volume with NTFS using the VEA GUI and mount it on the node where you are trying to create the resource.

    Does this mean the disk cannot be formatted from Windows? I don't read it that way. For the record, I formatted many disks using the Windows applet in the past and nothing bad happened.

  • Why can my internal hard disk be ejected?

    - by Bear Bear
    I have 6 hard disks in my computer and one DVD drive. The disk that I connect to the 6th SATA port is a regular disk, but my Windows 7 64-bit shows it as a removable drive: I can eject my hard disk, just like a pen drive. If I connect another disk to the 6th port, I can eject that disk too, so it has nothing to do with the physical disk; it must be something related to Windows or the BIOS. I don't understand why Windows 7 is seeing a normal hard disk as removable. In Computer Management - Disk Management, the disk looks identical to the others; there is nothing there to suggest the drive is different from the rest. But in the tray I have the icon to eject it. The motherboard is an ASUS F2A85-V PRO (FM2). All the disks are formatted normally: no dynamic disks, no RAID, nothing special. How can I tell Windows 7 to treat the disk exactly like the others, so it can't be ejected?

  • Jenkins CI - Cannot allocate memory

    - by Programmieraffe
    I tested Jenkins CI successfully on Ubuntu 10.04 (with VMware Fusion) on my local computer. Now I want to install and use it on my virtual server at Hosteurope. The basic installation was no problem, but now I have problems with my build project. After pulling a Mercurial update from a repository, ant is invoked and throws the following error in my build project:

        Buildfile: /var/lib/jenkins/workspace/concrete5-seed-clean/build.xml
        [property] java.io.IOException: Cannot run program "/usr/bin/env": java.io.IOException: error=12, Cannot allocate memory

    There is a known problem with heap size on virtual servers at Hosteurope (http://faq.hosteurope.de/index.php?cpid=13918), so I tried to set the heap size manually:

        # for ant
        export ANT_OPTS="-Xms512m -Xmx512m"

        # for jenkins: edited /etc/default/jenkins, added the line
        JAVA_ARGS="-Xms512m -Xmx512m"

        # restarted jenkins via
        /etc/init.d/jenkins restart

    After setting this for ant, the command "ant -diagnostics" runs through and does not cause an error, but the error still occurs when I try to build the project. Server details (http://www.hosteurope.de/produkt/Virtual-Server-Linux-L): Ubuntu 10.04 LTS, RAM: 1 GB / dynamic 2 GB. My questions: Is 1 GB enough for Jenkins, or do I have to upgrade the server? Is this error caused by ant or by Jenkins? Update: I got it running with ant options -Xmx128m -Xms128m, but sometimes the error occurs again (this freaks me out, because I cannot reproduce it right now :/). Help much appreciated! Cheers, Matthias

  • How to get nginx to pass HTTP_AUTHORIZATION header to Apache

    - by codeinthehole
    I'm using Nginx as a reverse proxy to an Apache server that uses HTTP auth. For some reason, I can't get the HTTP_AUTHORIZATION header through to Apache; it seems to get filtered out by Nginx, and hence no requests can authenticate. Note that the Basic auth is dynamic, so I don't want to hard-code it in my nginx config. My nginx config is:

        server {
            listen 80;
            server_name example.co.uk;
            access_log /var/log/nginx/access.cdk-dev.tangentlabs.co.uk.log;
            gzip on;

            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_read_timeout 120;

            location / {
                proxy_pass http://localhost:81/;
            }

            location ~* \.(jpg|png|gif|jpeg|js|css|mp3|wav|swf|mov|doc|xls|ppt|docx|pptx|xlsx|swf)$ {
                if (!-f $request_filename) {
                    break;
                    proxy_pass http://localhost:81;
                }
                root /var/www/example;
            }
        }

    Anyone know why this is happening? Update: it turns out the problem was something I had overlooked in my original question: mod_wsgi. The site in question is a Django site, and Apache does get the auth variables passed through; however, mod_wsgi filters them out. The resolution is to use:

        WSGIPassAuthorization On

    See http://www.arnebrodowski.de/blog/508-Django,-mod_wsgi-and-HTTP-Authentication.html for more details.
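
    For context, a sketch of where that directive typically sits on the Apache side; the vhost layout and paths here are illustrative rather than taken from the original post:

        <VirtualHost *:81>
            ServerName example.co.uk

            WSGIScriptAlias / /var/www/example/django.wsgi
            # without this, mod_wsgi strips the Authorization header before
            # the Django application ever sees it
            WSGIPassAuthorization On
        </VirtualHost>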

  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as our fileserver. Since the datacenter is using HP kit and an EVA 4400 as SAN, we cannot have the Celerra there, as EMC support for Celerra does not cover non-EMC arrays. I have searched for possible options, and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me: we are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb would be a waste of resources. The other option would be to have a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks, for one thing) in the past, and this was one of the reasons we moved to the Celerra. What are the other options? We need something as reliable as the Celerra with as many options as possible. For example, the Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also, we would need to replicate the NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we only need a subset of folders to be replicated. We used CA XSoft for Windows servers in the past, and the Celerra has its own Celerra replication option. Thank you very much in advance; please ask me if I missed any details!

  • Apache Proxy and Rewrite, append directory to URL

    - by ionFish
    I have a backend server running on http://10.0.2.20/ on the local network, which serves a layout similar to this:

        / (root)
        |
        |_user1/
        |   |_www/
        |   |_private/
        |
        |_user2/
            |_www/
            |_private/
        (etc.)

    Accessing http://10.0.2.20/user1/ of course shows the 'www' and 'private' folders, and it is proxied through a public server using Apache's reverse proxy. I'd like the following to happen: http://public-proxy-server/user1/ should actually show the content from http://10.0.2.20/user1/www/ without indicating it in the URL (/private/ would not be accessible via the public proxy server). The key here is for it to be dynamic, so all requests to http://public-proxy-server/*/ should show content from http://10.0.2.20/*/www/. Again, the proxy currently works fine; below is the config.

    On the public server:

        <VirtualHost *:80>
            ServerName www.domain.com

            ProxyRequests Off
            ProxyPreserveHost On
            ProxyVia full

            ProxyPass / http://10.0.2.20/
            ProxyPassReverse / http://10.0.2.20/
        </VirtualHost>

    On the backend server:

        <VirtualHost *:80>
            ...
            # this directory contains folders 'user1' and 'user2'
            DocumentRoot /var/www/
            ...
        </VirtualHost>
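
    A sketch of one way the public proxy could inject the extra path component, assuming mod_proxy is loaded and that ProxyPassMatch is available in the Apache version in use; the pattern is untested and the interaction with ProxyPassReverse should be verified:

        <VirtualHost *:80>
            ServerName www.domain.com

            ProxyRequests Off
            ProxyPreserveHost On

            # send /user1/foo to http://10.0.2.20/user1/www/foo for any first
            # path component, without exposing /www/ in the public URL
            ProxyPassMatch ^/([^/]+)/(.*)$ http://10.0.2.20/$1/www/$2
            ProxyPassReverse / http://10.0.2.20/
        </VirtualHost>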

  • apache performance timing out

    - by Mike
    I'm running a webserver where I'm hosting about 6-7 websites. Most of these websites get their content from MySQL, which is hosted on the same server. Traffic averages about 500-600 unique visitors per day, about 150K hits per week. But for some reason, sometimes the websites time out, or sometimes they don't load all images. I know that I should perhaps separate static content from dynamic content, but for now that's not a possibility. I would appreciate any suggestions on how I could improve the performance of Apache so it doesn't keep timing out. The server is running a Sempron LE-1300 (2.3 GHz, 512K cache), 2 GB RAM, 10 Mbps/1 Mbps. Services: MySQL, ProFTPD, Apache.

        Private  +   Shared   =  RAM used    Program
        ----------------------------------------------------
        1.2 MiB  +  54.0 KiB  =   1.2 MiB    proftpd
        4.1 MiB  +  23.0 KiB  =   4.1 MiB    munin-node
        20.8 MiB + 120.5 KiB  =  20.9 MiB    mysqld
        47.3 MiB +   9.9 MiB  =  57.3 MiB    apache2 (22)

        top: Mem: 2075356k total, 1826196k used, 249160k free

        Timeout 35
        KeepAlive On
        MaxKeepAliveRequests 300
        KeepAliveTimeout 5

        <IfModule mpm_prefork_module>
            StartServers          10
            MinSpareServers       20
            MaxSpareServers       20
            MaxClients            60
            MaxRequestsPerChild 1000
        </IfModule>

        <IfModule mpm_worker_module>
            StartServers           2
            MaxClients           150
            MinSpareThreads       25
            MaxSpareThreads       75
            ThreadsPerChild       25
            MaxRequestsPerChild    0
        </IfModule>

  • MD RAID 1 with external bitmap doesn't fully resync

    - by user64744
    I have an interesting configuration: a dual-boot system with a RAID 1 that needs to be visible in both Windows and Linux. The Windows install is Win 7 Enterprise, and the Linux install is Kubuntu 10.04. To get the RAID to work, I set it up using Windows's "Dynamic Disks" RAID 1, and brought it up in Linux using MD with no persistent superblock and a write-intent bitmap on another partition. (Without this bitmap, MD had no way of knowing that the array was in sync, and would do a complete resync every time the array started.) The array is assembled like so:

        mdadm --build /dev/md1 -l 1 -n 2 -b /var/local/md1.bitmap /dev/sdb2 /dev/sdc2

    I expected that the first time I ran this command, it would resync the array, write out a bitmap with no dirty chunks, and all would be good. This wasn't the case: after completing the resync, the bitmap was mostly clean, but about 5% dirty blocks remained, as revealed by:

        mdadm -X /var/local/md1.bitmap

    I didn't mount the filesystem on /dev/md1 or touch it in any other way. I then found that stopping and restarting the array:

        mdadm --stop /dev/md1
        mdadm --build /dev/md1 -l 1 -n 2 -b /var/local/md1.bitmap /dev/sdb2 /dev/sdc2

    did indeed read in the bitmap, with an ensuing resync that went quickly because most of the blocks were marked clean. The confusing part is that this resync further reduced the number of dirty blocks, but still did not remove all of them. By repeatedly stopping and restarting, I could slowly bring the dirty block count down to around 0.6%, where it seemed to level out. Any ideas what could be causing this? It smells to me like a race condition somewhere that leads to blocks either being skipped over during synchronization or not properly cleared from the bitmap, but I really have no evidence to prove this. It doesn't look like a hardware issue, since both drives are new and have zero read errors and reallocated sectors reported by smartctl -a.

  • How many sites can IIS 6 handle?

    - by Sarah Nasir
    Is there a limit on the number of sites you can create in IIS? I have searched, and some forum discussions say there is no limit. Someone mentioned that he has created up to 100,000 sites in IIS 6, but I don't know his server specs. Personally, I feel that whatever the limit of IIS, the resources will run out well before the limit is reached. How do big sites like Blogger and WordPress handle a huge number of sites on their servers? Questions: 1) Is there an upper limit for IIS 6.0? If yes, what is it? 2) What would be a good number of requests for IIS to serve on a decent server? (I am not talking about dynamic requests on the server or logs.) 3) Is there a way I can do a test run on my cloud to test the capability of my server? What factors should I keep in view: db requests, page size, disk reads/writes, etc.? Responses shall be highly appreciated.

  • Disable CTRL+mouse wheel zooming in Chrome?

    - by Peter Nore
    I'm a normal-sighted person and I would like to view pages at 100% all the time. I use keyboard shortcuts that involve CTRL a lot, so about twenty times a day I accidentally hit CTRL at the same time that I'm scrolling, which results in the page being reflowed and repainted. This is annoying because it can take up to 30 seconds to fix, depending on how complex the site layout is. On sites with dynamic layout, such as Google Docs, the problem is more serious: accidentally hitting CTRL+mouse wheel corrupts the display and forces me to refresh the page entirely, sometimes causing me to lose information in the process. I would like to either decouple CTRL+mouse wheel from zoom, or disable zoom functionality altogether. This is possible in Firefox by using about:config; is there a similar way to edit detailed settings in Chrome? Would I have access to the detailed settings if I used Chromium instead of Chrome? I'll probably jump ship back to Firefox if I can't solve this problem. There is a Super User question that asks basically the same thing I'm asking, but for Firefox and Internet Explorer exclusively. Other people on the Chrome forum have had related issues, but none have the same problem: "I would really like it if I could deactivate the auto zoom in/out" turned out to be "something with laptops and Windows 7", not the feature built into Chrome. Other people have had PDF-specific issues, which don't concern me. I've also tried searching for extensions that allow you to disable the scroll zoom; I had hoped that "Zoom Lock" would be able to lock the zoom at 100% and prevent CTRL+scroll wheel from distorting the display, but it doesn't work for my use case. Google Chrome version 9.0.597.84 (Official Build 72991). Operating system: Ubuntu 10.10.
