Search Results

Search found 11197 results on 448 pages for 'handle leak'.

Page 323/448 | < Previous Page | 319 320 321 322 323 324 325 326 327 328 329 330  | Next Page >

  • AjaxControlToolkit JavaScript is not pointing correctly on IIS7 running behind Apache mod_proxy

    - by sohum
    So here's my setup. I've got a DynDNS account since I have a dynamic IP. I have Apache listening on port 80 and IIS7 on port 8080. I don't want users to have to enter mydyndns.dyndns.com:8080 to get to IIS7, so I've added the following code to my Apache httpd.conf file to enable a proxy/reverse proxy: <VirtualHost *:80> ProxyPass / http://localhost:8080/myASPSite/ ProxyPassReverse / http://localhost:8080/myASPSite/ ServerName myaspsite.mydomain.com </VirtualHost> I've got a CNAME record set up on my DNS so that myaspsite.mydomain.com redirects to mydyndns.dyndns.com. When I type myaspsite.mydomain.com into my browser, everything works beautifully... mostly. IIS7 serves up the ASPX pages and visitors to the site don't know any better. A problem arises, however, when I add Ajax Control Toolkit controls into my ASPX website, because these generate JavaScript and apparently mod_proxy_html isn't geared to handle the JS URIs properly. Sure enough, when I open up the source of my ASPX page, it has script elements as follows: <script src="/myASPSite/WebResource.axd?xyz" type="text/javascript"></script> <script src="/myASPSite/ScriptResource.axd?xyz" type="text/javascript"></script> These scripts are resolved against http://myaspsite.mydomain.com/myASPSite/WebResource..., which through the proxy translates to localhost:8080/myASPSite/myASPSite/.... How can I solve this problem? The couple of websites I found suggested turning on ProxyHTMLExtended, but when I tried doing that, the server did not start. I'm guessing I didn't know how to do it properly. Does anyone have a handy couple of config lines that I can add to my Apache conf file to get this working as I need? I'm using Apache 2.2.11. Thanks!
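    A possible fix, sketched on the assumption that Apache 2.2's mod_proxy evaluates ProxyPass directives in configuration order (longest, most specific prefix first): map the /myASPSite/ prefix explicitly ahead of the catch-all, so the URIs the toolkit emits pass through unchanged instead of being prefixed a second time.

        <VirtualHost *:80>
            ServerName myaspsite.mydomain.com
            # Specific prefix first: /myASPSite/WebResource.axd etc. pass straight through
            ProxyPass        /myASPSite/ http://localhost:8080/myASPSite/
            ProxyPassReverse /myASPSite/ http://localhost:8080/myASPSite/
            # Catch-all for everything else maps the site root onto the application
            ProxyPass        /           http://localhost:8080/myASPSite/
            ProxyPassReverse /           http://localhost:8080/myASPSite/
        </VirtualHost>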

    Read the article

  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc... But when I put it behind Nginx, git commands stop working, giving no errors except: "fatal: The remote end hung up unexpectedly" I know that it's Nginx that's causing the trouble because if I open the port that tornado is running on and try my git commands through that (i.e. "git pull \http://mysite.com:8000/myrepository master" instead of "git pull \http://mysite.com/myrepository master" [backslashes added because Server Fault says I have too many links]) everything works as expected. The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure that it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx. My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behavior I need to suppress, I'm just not sure which. Any thoughts on how to fix this? Since it works when Nginx is bypassed, I'm considering just redirecting git requests to the tornado port itself, but this feels like a hack rather than a clean solution...
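    A sketch of the kind of proxy block that often sorts out git Smart HTTP behind nginx; the backend port is assumed, and proxy_request_buffering needs nginx 1.7.11 or later, so treat this as a starting point rather than a drop-in answer.

        location / {
            proxy_pass              http://127.0.0.1:8000;   # assumed Gittornado port
            proxy_set_header        Host $host;
            proxy_set_header        X-Real-IP $remote_addr;
            proxy_buffering         off;   # stream pack data back instead of spooling it
            proxy_request_buffering off;   # nginx >= 1.7.11: stream large pushes through
            client_max_body_size    0;     # don't reject big pushes outright
            gzip                    off;   # git negotiates its own compression
        }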

    Read the article

  • Best option for storage clustering

    - by sam
    I'm working on an application that requires a large amount of storage space and I want to handle storage 'in-house' (much cheaper than, say, S3), so we will have multiple servers (initially 4) with large amounts of storage (6TB each). The storage will need to be very flexible and configurable; each piece of data should be replicated on at least 2 servers and must be easily readable/writable from either an API or a UNIX device/file/folder like a normal drive, I don't mind which. We must also be able to easily offload content to our HTTP CDN (Edgecast); it doesn't need to have built-in HTTP support, but if it doesn't I'm going to have to write something to get the files onto HTTP so they can be pulled by the CDN. I've looked at a lot of solutions, including Eucalyptus Walrus, OpenStack Object Storage, MogileFS, and some others which I can't remember. All the servers will be running RHEL 6; they have 4x1.5TB drives which will be RAID1'd into a single partition. All the servers have 1GB/s connections between them and 100MB/s connections to the internet with unlimited bandwidth. They have 2x2.66GHz processors. I understand there isn't a single, perfect answer, but it would be nice to get some pointers.

    Read the article

  • ghettoVCB issue

    - by romgo75
    Hi, I set up the ghettoVCB script in order to back up 3 VMs. I put it in a crontab but I have an issue. In my backup folder I have 3 different folders, one for each VM. For each folder I have the following files: -rw-r--r-- 1 root root 1263 Mar 17 01:51 vm1-2010-03-16--2.gz -rw-r--r-- 1 root root 1263 Mar 17 00:41 vm1-2010-03-16--3.gz -rw-r--r-- 1 root root 1261 Mar 18 01:22 vm1-2010-03-17--1.gz drwxr-xr-x 1 root root 980 Mar 19 23:39 vm1-2010-03-19 The problem is the last folder. It seems that a backup didn't finish. When I read the logs for this folder I get: 2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/ 2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3 2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick 2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic 2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0 2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0 2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3 2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5 2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15 2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info 2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout 2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0 2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0 2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all http://... 2010-03-19 23:39:35 -- info: Initiate backup for vm1 2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1 Destination disk format: VMFS zeroedthick Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'... ^MClone: 0% done.^MClone: 1% done.^MClone: 2% done.^MClone: 3% done.^MClone: 4% done.^MClone: 5% done.^MClone: 6% done.^MClone: 7% done.^MClone: 8% done.^MClone: 9% done.^MClone Failed to clone disk : The file already exists (39). Destination disk format: VMFS zeroedthick Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'... 2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ... one: 7% done.^MClone: 8% done.^MClone: 9% done.^MClone: 10% done.^MClone: 11% done.^MClone: 12% done.^MClone: 13% done.^MClone: 14% done.^MClone: 15% done.^MClone: 16% done.^MCl 2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ... I can't run ghettoVCB anymore because the VM has a snapshot which has not been deleted. I know how to delete the snapshot, but I don't know why the script was not able to handle the VM backup rotation. Any ideas? Thanks!
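    One possible cleanup before the next run, assuming ESXi's vim-cmd is available on the host (the VM ID and the stale directory path below are placeholders): the failed clone leaves both the snapshot and a half-written destination folder behind, and either one will trip the next run, as the "file already exists (39)" error in the log suggests.

        # Find the VM's numeric ID, then drop the leftover ghettoVCB snapshot
        vim-cmd vmsvc/getallvms
        vim-cmd vmsvc/snapshot.removeall <VMID>
        # Remove the half-written destination directory from the failed run
        rm -rf /vmfs/volumes/datastore1/backup/vm1/vm1-2010-03-19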

    Read the article

  • Error loading operating system: format Windows 7 to Windows XP Service Pack 3

    - by Blerta
    I saw that there are other questions like mine here. But I also saw that some problems were solved with fixmbr from a Windows 7 recovery console, and that didn't work for me. I bought my laptop with Vista installed and later reformatted and used Windows 7. During formatting with Windows 7 I had some problems with my hard drive and found out it was dead, so I bought a new one. I wanted to reformat with Windows XP, because Windows 7 was consuming more RAM than I could spare and I wanted to use it for other programs. So I formatted with Windows XP Service Pack 3, but after the first reboot a message appeared: "Error loading operating system" Reading here, I assumed that maybe I had installed it on the wrong partition and maybe had two OSes now, so I used fixmbr, but it is still not starting up. Anyway, I am sure it is not a case of two operating systems. Is there any chance that, with a computer designed to work with Vista, you would face problems with Windows XP? Like it not recognizing a hard drive?

    Read the article

  • Extract large zip file (50 GB) on Mac OS X

    - by chingjun
    I was trying to move the files to another hard drive. So I archived all my photos in one large ZIP file using the Mac OS X built-in compress function. But the file failed to extract. I've tried many programs, but none of the programs I tried were able to extract the file. I've tried Mac OS X's extract utility, StuffIt Expander, and 7-Zip (command line); all failed. Mac's archive utility and StuffIt don't seem to support large files, and 7-Zip's command line version gave an error stating unsupported archive. I had no luck on Windows either, as many of my files have Chinese filenames and wouldn't extract with the correct names there. Are there some programs that can support large files, can handle files compressed using Mac OS X's compress function, and can support UTF-8 filenames? With or without a GUI is fine. Update: Well, I had made the wrong decision to compress the files, and it's already too late. I thought I should be able to extract the file if I could compress it. It's too late; the original copies are gone, and only a large ZIP file is left here. I have tried using 'unzip', but it says End-of-central-directory signature not found. I guess it doesn't have large file support either. I would try the Windows Vista method as stated by SuperMagic, but I need to borrow a computer for that. Anyway, thank you everyone, but please provide more suggestions on what software could possibly extract that file.
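    One more thing worth trying on the Mac itself, on the assumption that the archive is simply a large ZIP64 file the older tools choke on rather than a corrupt one: ditto ships with OS X, copes with big archives, and preserves UTF-8 filenames (paths below are placeholders).

        # -x = extract, -k = treat the source as a PKZip archive; the destination directory is created if missing
        ditto -x -k /path/to/photos.zip /Volumes/OtherDrive/photos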

    Read the article

  • Using two ports on my ZyXel USG gateway to patch two devices together?

    - by Matthew Beza
    I don't know if this is possible, but it would save me many long drives! I have drawn, with my epic MS Paint skills, my current setup. I have a ZyXel USG300 Gateway with a built-in 5-port switch. It supports bridging, tunnels, VLANs, etc. I have a Cisco WLC2112 plugged into port 6 (P6). The Cisco is set to 192.168.6.2 and P6 is set to 192.168.6.1. This works, but I need to incorporate a Nomadix AG3000 to handle guests (Router (A) in the picture). So I need the Cisco WLC2112 to use the Nomadix as if it were plugged into its LAN port. Right now the Nomadix WAN port is plugged into (P2) on the ZyXel, and the Nomadix LAN port is plugged into (P3). Is it possible to set something up where (P3) and (P6) are somehow "patched" as if the Cisco was plugged directly into the Nomadix LAN port? Basically making the ZyXel a fancy Cat5e coupler?

    Read the article

  • Cannot print certain colours on Ubuntu with HP Laser Printer

    - by ILMV
    We have a load of machines running Ubuntu in our office; they are either on 8.04 or 9.10. We have a server which connects to an HP JetDirect that connects to an HP 3550 Colour Laser printer using CUPS. The problem we are having is we cannot print red, magenta or yellow at 100%; I've got a picture of the Ubuntu test page to demonstrate my problem. This is obviously a pretty big problem as we are constantly receiving documents with these colours and cannot successfully print them off. We cannot just switch to grayscale; our business depends on being able to print colour (seems trivial, but we handle lots of artwork). We're using the recommended driver HP Color LaserJet 3550 Foomatic/pxljr (recommended); there is another driver in the list labelled HP Color LaserJet 3550 Foomatic/hpijs. These are production printers, so we need to make sure any setting change won't kick us in the nuts. It would appear HPIJS is for HP inkjets, which makes sense I guess. The problem doesn't occur in Windows. RESOLVED: I've managed to solve the problem. I did indeed use the HPIJS driver (apparently for inkjets) but it seems to have worked; we're going to roll with it for now to see how we get on with it.

    Read the article

  • Async ignored on AJAX requests on Nginx server

    - by eComEvo
    Despite sending an async request to the server over AJAX, the server will not respond until the previous unrelated request has finished. The following code is only broken in this way on Nginx, but runs perfectly on Apache. This call will start a background process and it waits for it to complete so it can display the final result. $.ajax({ type: 'GET', async: true, url: $(this).data('route'), data: $('input[name=data]').val(), dataType: 'json', success: function (data) { /* do stuff */} error: function (data) { /* handle errors */} }); The below is called after the above, which on Apache requires 100ms to execute and repeats itself, showing progress for data being written in the background: checkStatusInterval = setInterval(function () { $.ajax({ type: 'GET', async: false, cache: false, url: '/process-status?process=' + currentElement.attr('id'), dataType: 'json', success: function (data) { /* update progress bar and status message */ } }); }, 1000); Unfortunately, when this script is run from nginx, the above progress request never even finishes a single request until the first AJAX request that sent the data is done. If I change the async to TRUE in the above, it executes one every interval, but none of them complete until that very first AJAX request finishes. Here is the main nginx conf file: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server_names_hash_bucket_size 64; # configure temporary paths # nginx is started with param -p, setting nginx path to serverpack installdir fastcgi_temp_path temp/fastcgi; uwsgi_temp_path temp/uwsgi; scgi_temp_path temp/scgi; client_body_temp_path temp/client-body 1 2; proxy_temp_path temp/proxy; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; # Sendfile copies data between one FD and other from within the kernel. # More efficient than read() + write(), since the requires transferring data to and from the user space. sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; # don't buffer data-sends (disable Nagle algorithm). Good for sending frequent small bursts of data in real time. tcp_nodelay on; types_hash_max_size 2048; # Timeout for keep-alive connections. Server will close connections after this time. keepalive_timeout 90; # Number of requests a client can make over the keep-alive connection. This is set high for testing. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_header_timeout 20; client_body_timeout 60; # If the client stops reading data, free up the stale client connection after this much time. Default 60. 
send_timeout 60; # Size Limits client_body_buffer_size 64k; client_header_buffer_size 4k; client_max_body_size 8M; # FastCGI fastcgi_connect_timeout 60; fastcgi_send_timeout 120; fastcgi_read_timeout 300; # default: 60 secs; when step debugging with XDEBUG, you need to increase this value fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; # Caches information about open FDs, freqently accessed files. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Turn on gzip output compression to save bandwidth. # http://wiki.nginx.org/HttpGzipModule gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_http_version 1.1; gzip_vary on; gzip_proxied any; #gzip_proxied expired no-cache no-store private auth; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # show all files and folders autoindex on; server { # access from localhost only listen 127.0.0.1:80; server_name localhost; root www; # the following default "catch-all" configuration, allows access to the server from outside. # please ensure your firewall allows access to tcp/port 80. check your "skype" config. # listen 80; # server_name _; log_not_found off; charset utf-8; access_log logs/access.log main; # handle files in the root path /www location / { index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 # location ~ \.php$ { try_files $uri =404; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # add expire headers location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt|js|css)$ { expires 30d; } # deny access to .htaccess files (if Apache's document root concurs with nginx's one) # deny access to git & svn repositories location ~ /(\.ht|\.git|\.svn) { deny all; } } # include config files of "enabled" domains include domains-enabled/*.conf; } Here is the enabled domain conf file: access_log off; access_log C:/server/www/test.dev/logs/access.log; error_log C:/server/www/test.dev/logs/error.log; # HTTP Server server { listen 127.0.0.1:80; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; } # HTTPS server server { listen 443 ssl; server_name test.dev; root C:/server/www/test.dev/public; index index.php; rewrite_log on; default_type application/octet-stream; #include /etc/nginx/mime.types; # Include common configurations. include domains-common/location.conf; include domains-common/ssl.conf; } Contents of ssl.conf: # OpenSSL for HTTPS connections. 
ssl on; ssl_certificate C:/server/bin/openssl/certs/cert.pem; ssl_certificate_key C:/server/bin/openssl/certs/cert.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri =404; fastcgi_param HTTPS on; fastcgi_pass 127.0.0.1:9100; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } Contents of location.conf: # Remove trailing slash to please Laravel routing system. if (!-d $request_filename) { rewrite ^/(.+)/$ /$1 permanent; } location / { try_files $uri $uri/ /index.php?$query_string; } # We don't need .ht files with nginx. location ~ /(\.ht|\.git|\.svn) { deny all; } # Added cache headers for images. location ~* \.(png|jpg|jpeg|gif)$ { expires 30d; log_not_found off; } # Only 3 hours on CSS/JS to allow me to roll out fixes during early weeks. location ~* \.(js|css)$ { expires 3h; log_not_found off; } # Add expire headers. location ~* ^.+.(gif|ico|jpg|jpeg|png|flv|swf|pdf|mp3|mp4|xml|txt)$ { expires 30d; } # Pass the PHP scripts to FastCGI server listening on 127.0.0.1:9100 location ~ \.php$ { try_files $uri /index.php =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9100; } Any ideas where this is going wrong?
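    One thing worth checking, purely as an assumption since the PHP side isn't shown: both config files hand every PHP request to a single FastCGI listener on 127.0.0.1:9100. If that is one php-cgi process, it can only serve one request at a time, so the status polls queue behind the long-running request no matter what the browser does asynchronously; Apache with mod_php doesn't share that bottleneck. A rough sketch of fanning requests out over several workers (the extra ports are placeholders, and each needs its own php-cgi process listening):

        # upstream goes in the http{} block
        upstream php_backend {
            server 127.0.0.1:9100;
            server 127.0.0.1:9101;
            server 127.0.0.1:9102;
        }
        # inside the server{} blocks, replace the single fastcgi_pass address
        location ~ \.php$ {
            try_files     $uri =404;
            fastcgi_pass  php_backend;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include       fastcgi_params;
        }

    If the two requests share a PHP session, that is another common culprit: PHP serializes requests that hold the same session file, so calling session_write_close() before the long-running work starts can also unblock the polling request.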

    Read the article

  • Quick, Linux-compatible unit-aware calculator

    - by endolith
    I want to be able to press a keyboard combination, start typing a mathematical expression that includes units and slightly advanced math (not just a four-function calculator), and get a result immediately, in units that I specify, that I can copy and paste. Currently I open Firefox and press Ctrl+K, type in the search box, and it usually gives me a result in the drop-down from Google Calculator. It doesn't always, though, so I press "=" at the end, wait for a result, remove the equals, wait for a result, realize it doesn't understand the way I typed a unit, open the result in a new tab, etc. it sucks. Wolfram Alpha is smarter, but very much slower, and the output is all images, not text, and I don't have a quick widget for it, if such a thing could even exist. GNU units has a ton of units, which is great, and I can define my own units, which is great, but they have to be written in specific, unintuitive ways, it doesn't handle much advanced math, and I'd need to open a terminal, start units, etc. I hate the command line. I wasted a lot of time trying to make front-ends for units in Deskbar and Launchy, but I'm not a real coder and I don't use either of those anymore. Any other solutions or enhancements of these?

    Read the article

  • DHCP Server on local machine

    - by EralpB
    Hello, I am trying to set up dhcp3-server on Ubuntu. But my question is more generic: if the DHCP server is in a black box and all clients are connected to it, I think I get what is going on, but when the DHCP server is installed on one of the "clients", that confuses me. When I send a DHCP packet from that client to the DHCP server, will my Ethernet card read and write at the same time? Or will it handle it internally without writing any data to the Ethernet cable? It's the first time I am encountering these network things, so I am a little bit confused. Also, I wonder: if I am in a big network with a LAN IP of, let's say, 192.168.0.100, and I install a DHCP server on my computer, can any other computers accidentally get an IP from my DHCP server? Every computer has one Ethernet card (if that matters?). And every computer is connected to one router. I guess the answer is no, because the broadcast message won't reach my computer: when the router receives a DHCP discovery packet it will answer, and it won't let other computers know about it because they don't need to. And without the router passing that packet on one by one, it cannot travel further. I'd be glad if someone enlightens me. Thank you very much.

    Read the article

  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion8 and IIS with a widely geographically distributed user base and have been receiving complaints of issues with some HTTP uploads. Most of the complaints are coming from geographically distant locations from our main datacenter on the US east coast. I've attempted to upload the same 70MB file from a US West coast test server to both our main site and a backup running the same code on a different network route and I saw the same issues fairly consistently in both places, so I've ruled out the code, route, and internal network errors. I've also tested uploads using both the native cf upload tag and a third party tool called SaFileUp. I saw the same issues with both upload tools, so I also don't think this is necessarily a ColdFusion problem. I don't have any problems uploading the test file from the East coast to other east coast servers, so I'm beginning to think that the distance between our users and our equipment is a factor. I've also found that smaller files are more likely to succeed than large ones (< 10MB) I tried the test upload with both IE and FF and did notice a difference in the way that the browsers seemed to handle packet errors. IE seemed to have a tough time continuing an upload after dropped / bad packets, whereas FF seemed to have the ability to gracefully resume an upload after experiencing packet problems. Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving to packet loss or resumable after an error? A different upload tool etc… Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think that switching uploads to SSL will help (no layer7 packet sniffing may lead to a smoother upload). Thanks.

    Read the article

  • Mac OS X printing to CUPS - More intuitive authentication failure?

    - by Moduspwnens
    We have a network-wide CUPS server that offers authenticated printer access to all our campus users. We've been pretty disappointed with the way Mac clients handle bad printing authentication, though. In any other authentication dialog, when a user types in a bad username or password, the window shakes briefly, allowing the user to re-enter. With printers, this isn't the case. It'll happily accept (and even save to the keychain, if specified) bad credentials. The authentication dialog is dismissed, and the user then has to deal with the print jobs showing up as "On hold (authentication required)". To get their job printed, they need to select it in the printer's queue, click "Resume", then re-enter appropriate credentials. Is there a way to get failed printing authentication to work more intuitively for Mac OS X clients? We're trying to support a BYOD environment, but our end users have been really confused by this. It's made even worse by the way it pre-populates the user's full login name (e.g. "Smith, John"), which tends to make them think to use their local machine passwords.

    Read the article

  • Obey server_name in Nginx

    - by pascal
    I want nginx/0.7.6 (on Debian, i.e. with config files in /etc/nginx/sites-enabled/) to serve a site on exactly one subdomain (indicated by the Host header) and nothing on all others. But it staunchly ignores my server_name settings?! In sites-enabled/sub.domain: server { listen 80; server_name sub.domain; location / { … } } Adding a sites-enabled/00-default with server { listen 80; return 444; } does nothing (I guess it just matches requests with no Host?). server { listen 80; server_name *.domain; return 444; } does prevent Host: domain requests from giving results for Host: sub.domain, but still treats Host: arbitrary as Host: sub.domain. The, to my eyes, obvious solution isn't accepted: server { listen 80; server_name *; return 444; } Neither is server { listen 80 default_server; return 444; } Since order seems to be important: renaming 00-default to zz-default, which, if sorted, places it last, doesn't change anything. But Debian's main config just includes *, so I guess they could be included in some arbitrary filesystem-defined order? This returns no content when Host: is not sub.domain, as expected, but still returns the content when Host is completely missing. I thought the first block should handle exactly that case!? Is it because it's the first block? server { listen 80; return 444; } server { listen 80; server_name ~^.*$; return 444; }
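    A sketch of an explicit catch-all that usually behaves as wanted here: on the 0.7.x line the listen flag is spelled default rather than default_server (which would explain the rejected attempt), and _ is just a conventional never-matching name, the catch-all behaviour coming from the default flag itself.

        server {
            listen      80 default;   # 0.7.x spelling; later versions also accept default_server
            server_name _;            # any unmatched Host, including a missing one, lands here
            return      444;
        }
        server {
            listen      80;
            server_name sub.domain;
            location / { … }
        }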

    Read the article

  • nginx reverse ssl proxy with multiple subdomains

    - by BrianM
    I'm trying to locate a high level configuration example for my current situation. We have a wildcard SSL certificate for multiple subdomains which are on several internal IIS servers. site1.example.com (X.X.X.194) -> IISServer01:8081 site2.example.com (X.X.X.194) -> IISServer01:8082 site3.example.com (X.X.X.194) -> IISServer02:8083 I am looking to handle the incoming SSL traffic through one server entry and then pass on the specific domain to the internal IIS application. It seems I have 2 options: Code a location section for each subdomain (seems messy from the examples I have found) Forward the unencrypted traffic back to the same nginx server configured with different server entries for each subdomain hostname. (At least this appears to be an option). My ultimate goal is to consolidate much of our SSL traffic to go through nginx so we can use HAProxy to load balance servers. Will approach #2 work within nginx if I properly setup the proxy_set_header entries? I envision something along the lines of this within my final config file (using approach #2): server { listen Y.Y.Y.174:443; #Internally routed IP address server_name *.example.com; proxy_pass http://Y.Y.Y.174:8081; } server { listen Y.Y.Y.174:8081; server_name site1.example.com; -- NORMAL CONFIG ENTRIES -- proxy_pass http://IISServer01:8081; } server { listen Y.Y.Y.174:8081; server_name site2.example.com; -- NORMAL CONFIG ENTRIES -- proxy_pass http://IISServer01:8082; } server { listen Y.Y.Y.174:8081; server_name site3.example.com; -- NORMAL CONFIG ENTRIES -- proxy_pass http://IISServer02:8083; } This seems like a way, but I'm not sure if it's the best way. Am I missing a simpler approach to this?
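    A sketch of the first option without looping back through port 8081, on the assumption that the wildcard certificate can simply be declared in each server block (certificate paths and proxy headers below are placeholders): nginx picks the block by Host/SNI, so one small server block per subdomain keeps everything to a single hop.

        server {
            listen              X.X.X.194:443 ssl;
            server_name         site1.example.com;
            ssl_certificate     /etc/nginx/ssl/wildcard.example.com.crt;
            ssl_certificate_key /etc/nginx/ssl/wildcard.example.com.key;
            location / {
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_pass       http://IISServer01:8081;
            }
        }
        # Repeat with server_name site2.example.com -> IISServer01:8082 and
        # site3.example.com -> IISServer02:8083, or pull the shared lines into an include.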

    Read the article

  • MySQL -- enable connection to remote server via local /tmp/mysql.sock

    - by Kevin
    Hey all, I run a shared hosting provider and we're looking to move to a High Availability (replicated across multiple datacenters) setup for our hosting. We have created a replicated MySQL setup with failover that works wonderfully, and we'd like to move all of our clients' databases to it. The only trouble is that we have many, many customers, all of whom have configured their WordPress, Drupal, etc. installations to connect to MySQL via a local socket, not to the address of the remote server. I would hate to have to go through and manually change the connection statement in all of our clients' sites. What I'd ideally love to see is a program that listens on /tmp/mysql.sock and forwards connections there to the remote server I specify. I've seen SQL Relay, but it seems to require that I hardcode all of the database names and usernames and passwords into its configuration file. This is not going to work for me because our users add new databases dynamically all of the time, and I'd rather not have to write code to update SQL Relay's config file every time. Does anyone have an idea on how to do this? Alternatively, I'd accept ideas on how to handle this at the PHP level (i.e. redirect any attempted calls to mysql_connect() to use that hostname rather than localhost). Thanks, Kevin
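    A minimal sketch of exactly that socket-to-TCP forwarding using socat, assuming socat is installed, that mysqld itself no longer owns /tmp/mysql.sock on the web boxes, and with the remote hostname as a placeholder:

        socat UNIX-LISTEN:/tmp/mysql.sock,fork,reuseaddr,unlink-early,mode=0777 \
              TCP:mysql-primary.example.net:3306

    Run it under a process supervisor so the listener survives reboots; HAProxy can do the same job (it can bind a UNIX socket frontend to a TCP backend) if something more production-hardened is preferred.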

    Read the article

  • How can I set Vim to obey accents of my spoken language?

    - by naxa
    When pressing w or e in sentences with accents (written in my native language), such as the first one (marked **) here: **Éj-mélybol fölzengo** - csing-ling-ling - száncsengo. Száncsengo - csing-ling-ling - tél csendjén halkan ring. [1] the characters o, ö, among others [2], make my gVim think they are word-ends so it stops on them (in Normal mode). gVim stops on the positions marked with _ where it shouldn't: Éj-mélyb_ol f_ölzeng_o. I would like to set gVim so it properly handle words even when containing accents and other local characters. But where do I set this? I use it on Win32, vim v 7.3.46. [1] - excerpt of a poem by Weöres Sándor [2] - "others", not mentioned here :) like í, u are also a problem. On the other hand, gVim seems to already work with é and á. gVim version info: VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:59:02) Included patches: 1-46 Compiled by Bram@KIBAALE Big version with GUI. Features included (+) or not (-): +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +gettext/dyn -hangul_input +iconv/dyn +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse +mouseshape +multi_byte_ime/dyn +multi_lang -mzscheme +netbeans_intg +ole -osfiletype +path_extra +perl/dyn +persistent_undo -postscript +printer -profile +python/dyn +python3/dyn +quickfix +reltime +rightleft +ruby/dyn +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white +tcl/dyn -tgetent -termresponse +textobjects +title +toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -xfontset -xim -xterm_save +xpm_w32
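    The option that governs this is Vim's 'iskeyword'. A sketch of the usual adjustment, with two caveats as assumptions: 'encoding' may need to be UTF-8 for the Hungarian ő/ű to be treated as single characters at all, and on many builds the 192-255 range is already in the default, in which case the encoding line is the part that matters.

        " Work in Unicode so accented characters are real characters, not byte pairs
        set encoding=utf-8
        " Count the accented Latin range as word characters for w, e, b and friends
        set iskeyword+=192-255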

    Read the article

  • PDF to HTML - batch converter - most reliable and accurate free AND paid for software?

    - by therobyouknow
    I'm looking for either a free or paid-for (about $50/£40) BATCH PDF to HTML converter to convert several PDF files at once. It needs to be able to handle vectored and bitmap images within the file, outputting both as JPEGs referenced by the HTML pages. I've tried iorigsoft's paid-for PDF to HTML - problems: it seems to hang or just go idle, and the stuff it actually converts has broken links - the wrong name is used for constituent chapters as HTML. Also tried an application from intrapdf.com, but this crashes near the beginning of the conversion, consistently. Update: intrapdf works on my Windows XP machine but not on my Windows 7 machine. The only glitch is with the framed index contents HTML - the graphics in the page do not display in the page in the frame - but if you open the frame only in a new tab then you can see them. That might be a browser glitch in Chrome only. This solution is good enough for me - given that I've already spent the money (I had spent it before I asked) - but I can't accept my own answer as this does not work on Windows 7. Looked at open-source tools but they look equally flaky or use old PDF versions. Need it on Windows 7 32-bit Home. Thoughts?
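    On the free end, one open-source route worth a look is Poppler's pdftohtml, which has Windows builds and can be batch-driven from a small script; treat this as a sketch (it assumes pdftohtml.exe is on the PATH, and its image handling is fairly basic, so test it on a representative file first).

        rem Convert every PDF in the current folder into its own HTML set (run from a .bat file)
        for %%f in (*.pdf) do pdftohtml -c "%%f" "%%~nf"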

    Read the article

  • Interpreting and using the Asterisk "timing test" command

    - by zigg
    Timing is very important for certain kinds of applications in Asterisk. If DAHDI is the timing source, the dahdi_test command can be used to check the timing provided by the DAHDI kernel module. If dahdi_test returns exclusively measurements above 99.975%, the DAHDI timing source is generally considered good. Since Asterisk 1.6, new timing sources have become available, such as pthread and timerfd. The accuracy of these timing sources seems to be measurable with the Asterisk CLI timing test command: localhost*CLI> timing test Attempting to test a timer with 50 ticks per second. Using the 'timerfd' timing module for this test. It has been 1000 milliseconds, and we got 50 timer ticks My concern is that timing 50 ticks seems to be a considerably less stressful test than dahdi_test's 8192 samples in 8000 ms, particularly since just about every system I've tried it on, virtual or otherwise, can handle it. I can ask timing test to ramp it up to what I think are dahdi_test's standards: localhost*CLI> timing test 1024 Attempting to test a timer with 1024 ticks per second. Using the 'timerfd' timing module for this test. It has been 1000 milliseconds, and we got 1024 timer ticks This will indeed break down a bit depending on the system I'm using, usually with a decrease in timer ticks. But I'm not sure whether it is useful to stress it to this level. Is there authoritative guidance on using and interpreting the timing test command to ensure that a given Asterisk system has a timing source that will work well?

    Read the article

  • howto configure proxy.conf for mod_proxy, apache2, jetty

    - by Kaustubh P
    Hello, this is how I have set up my environment at the moment: an apache2 instance on port 80, and a Jetty instance on the same server on port 8090. Use case: when I visit foo.com, I should see the webapp, which is hosted on Jetty, port 8090. If I go to foo.com/blog, I should see the WordPress blog, which is hosted on Apache. (I read howtos on the web, and installed it using AMP.) Below are my various configuration files: /etc/apache2/mods-enabled/proxy.conf: ProxyPass / http://foo.com:8090/ << this is the jetty server ProxyPass /blog http://foo.com/blog ProxyRequests On ProxyVia On <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPreserveHost On ProxyStatus On /etc/apache2/httpd.conf: LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so LoadModule proxy_balancer_module /usr/lib/apache2/modules/mod_proxy_balancer.so LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so LoadModule proxy_ajp_module /usr/lib/apache2/modules/mod_proxy_ajp.so I have not created any other files in sites-available or sites-enabled. Current situation: if I go to foo.com, I see the webapp. If I go to foo.com/blog, I see an HTTP ERROR 404 Problem accessing /errors/404.html. Reason: NOT_FOUND powered by jetty:// If I comment out the first ProxyPass line, then on foo.com I only see the homepage without CSS applied, i.e. only text... and going to foo.com/blog gives me this error: The proxy server received an invalid response from an upstream server. The proxy server could not handle the request GET /blog. Reason: Error reading from remote server I also cannot access /phpmyadmin, which gives the same 404 NOT_FOUND error as above. I am running Debian Squeeze on an Amazon EC2 instance. Question: where am I going wrong? What changes should I make in proxy.conf (or other conf files) to be able to visit the blog?
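    A sketch of a likely fix, assuming the blog and phpMyAdmin really are meant to be served by this same Apache: the bare ProxyPass / matches every request first (including /blog), so the locally-served paths need explicit exclusions placed before the catch-all, and the second ProxyPass pointing Apache back at its own /blog can simply go.

        # proxy.conf - most specific rules first; "!" means "do not proxy, serve locally"
        ProxyPass /blog !
        ProxyPass /phpmyadmin !
        ProxyPass        / http://localhost:8090/
        ProxyPassReverse / http://localhost:8090/
        ProxyPreserveHost On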

    Read the article

  • Windows 7 doesn't boot when second hard disk is connected

    - by kenshin9786
    I'm sorry in advance for my bad English. I have two hard disks, one SATA and one IDE. I have Windows XP and 7 on the SATA one, and Ubuntu on the IDE. Both of them boot and work; the BIOS recognizes them; everything just works. After I installed Windows 7 and connected the IDE drive, it freezes on "Starting Windows" (the black screen with the Windows logo). If I unplug the IDE drive, it starts normally. Windows XP starts normally in both situations (with or without the IDE one connected), and the same goes for Ubuntu (it works with both disks connected or just the IDE where it is). The IDE drive is in good status according to SMART. The IDE is first in the boot order. It goes to Ubuntu's GRUB first, then by default it goes to the Windows 7 bootloader, and then to XP. I think the problem is not about the bootloader or GRUB. I just read that it can be solved by formatting the "problematic" hard disk because Windows 7 cannot handle so many active partitions or something like that. But that's not an option for me; I don't want to lose my Ubuntu nor have it unbootable. How can I solve this without the consequences I mentioned? Any help would be appreciated.

    Read the article

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all, I wasn't really sure if this should go here or on Stack Overflow - admins, please move it if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of our web layer to accelerate the site and reduce calls to the backend. However, we have some concerns and I was wondering how most people approach them: A/B testing - how do you test two "versions" of each page and compare? I mean, how does Varnish know which page to serve up? If so, how do you save separate versions of each page? Feature rollout - how would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic... and then later increase that to 20%? How do you handle code deployments? Do you purge your entire Varnish cache on every deployment? (We have deployments on a daily basis.) Or do you just let it slowly expire (using TTL)? Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.
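    One common pattern covering both the A/B and rollout questions, sketched in Varnish 3-style VCL (the backend names, the cookie, and the percentage logic living in the application are all assumptions): pin each client to a variant with a cookie the backend sets, route on it, and fold it into the cache hash so the variants never share cached objects.

        sub vcl_recv {
            if (req.http.Cookie ~ "ab_bucket=B") {
                set req.backend = web_b;    # assumes backends web_a and web_b are declared
            } else {
                set req.backend = web_a;
            }
        }
        sub vcl_hash {
            if (req.http.Cookie ~ "ab_bucket=B") {
                hash_data("B");             # cache the B variant under its own key
            } else {
                hash_data("A");
            }
        }

    The rollout percentage then lives wherever the cookie is issued, so going from 10% to 20% is an application change rather than a cache flush; for daily deployments, banning by URL (or just using short TTLs) avoids purging the whole cache each time.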

    Read the article

  • Server configurations for hosting MySQL database

    - by shyam
    I have a web application which uses a MySQL database hosted on a virtual server. I've been using this server since I started the application, when the database was really small. Now it has grown and the server is not able to handle the db, causing frequent db errors. I'm planning to get a new server and I need suggestions for that. Like I said, the db is now 9 GB and is growing considerably fast. There are a number of tables with millions of rows, which are frequently updated and queried. The most frequent error the db shows is Lock wait timeout exceeded. Previously there used to be "The total number of locks exceeds the lock table size" errors too, but I could avoid those by increasing the InnoDB buffer pool size. Please suggest what configuration I should look for in the server I buy. I read somewhere that the db should ideally have a buffer pool size greater than the size of its data, so in my case I guess I'd need more than 9 GB of memory. What other things should I look for in the server? Just tell me if I should give you more info about the
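    As a starting point for sizing rather than a tuned recommendation, the relevant knobs live in my.cnf; the figures below are purely illustrative for a 9 GB and growing InnoDB data set on a machine with, say, 16 GB of RAM.

        [mysqld]
        innodb_buffer_pool_size        = 12G    # comfortably above the current data size
        innodb_log_file_size           = 512M   # larger redo logs help write-heavy tables
        innodb_flush_log_at_trx_commit = 2      # relax durability slightly, if the workload allows it
        innodb_lock_wait_timeout       = 120    # default is 50; raising it hides contention rather than fixing it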

    Read the article

  • How can I recover from SharePoint configuration errors after promoting the server to a Domain Controller?

    - by jjr2527
    I have a SharePoint 2010 VM set up in VirtualBox, and I was using local machine accounts to handle security on the server. While preparing for a demo it came time to have some meaningful users on my VM image, so I followed some docs on promoting my server to a Domain Controller in a new forest. Now I have [MachineName].SPDEMO.CONTOSO.com and I can add users as needed. However, when I try to connect to my SharePoint sites I am getting a white screen with the error: "Cannot connect to the configuration database" I changed the pool identity account of each of my IIS app pools to the new Administrator account and started the services successfully, but I can't get the SQL services to start up. When I try to start them I get the following error: Windows could not start the SQL Server (MSSQLSERVER) on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 17058. In the event log I see the following error: The SQL Server (MSSQLSERVER) service terminated with service-specific error %%17058. Can I recover from this, or should I roll back or just uninstall the Domain Controller role? I'd like to keep the server as a standalone DC so I can do some user profile creation/management, but I need the SharePoint bits to work as well.

    Read the article
