Search Results

Search found 90811 results on 3633 pages for 'hyper v server 2012 r2'.


  • Do I need an SSL certificate if just pointing my domain to CloudFront?

    - by hashpipe
    I have a website running on a domain (e.g. site.com). I have an additional domain (e.g. sitecdn.com) which basically points to Amazon CloudFront for delivery. Amazon CloudFront in turn fetches the data from the main domain (site.com). I use this setup primarily so that multiple subdomains of sitecdn.com point to assets via the CDN. The main website has an SSL certificate, and I intend to serve all assets from the CDN as HTTPS links only, something like:

      <img src="https://img.sitecdn.com/image.jpg" />

    I'm a little confused about whether I need an SSL certificate for my CDN domain. In CloudFront I can set the distribution to allow both HTTPS and HTTP traffic. Do I need an SSL certificate for this? If yes, where do I install it, since I don't have a server for sitecdn.com?

    Read the article

  • How to host a scalable social networking app

    - by christopher-mccann
    I am in the middle of developing a social networking application for a very select user niche which could scale to a few million users. Right now I host applications on the Rackspace Cloud and I have no issues with them at all - it has always been a really good service and I've never had any downtime. My question, though: does anyone think that cloud computing is not the way to host scalable web apps? Or can anyone with experience of this recommend a better solution? I have always shunned running big servers from my own facilities, as it seems silly to go to the expense of bringing in backup power supplies and all the other necessary precautions when other companies already do this. I looked at managed hosting, but it proved a bit too expensive for us at the start and its scalability wasn't good enough - it would take a day or two to get a new server provisioned. Therefore I ended up on a cloud platform. If anyone has any recommendations or advice it would be greatly appreciated.

    Read the article

  • Using both domain users and local users for Squid authentication?

    - by Massimo
    I'm working on a Squid proxy which needs to authenticate users against an Active Directory domain; this works fine, Samba was correctly set up and Squid authenticates users via ntlm_auth. Relevant lines in squid.conf:

      auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
      auth_param ntlm children 5
      auth_param ntlm keep_alive on

      acl Authenticated proxy_auth REQUIRED
      http_access allow Authenticated
      http_access deny all

    Now, I need a way to allow access to users who don't have a domain account. I know I could create an "internet user" account in the domain, but this would allow access, although limited, to domain resources (file shares, etc.); I need something that will allow only Internet access. The ideal solution would be a local account on the proxy server, either a Linux account or a Squid one; I know Squid supports this, but I'm unable to have it use both domain authentication and Squid/local authentication if domain auth is unsuccessful. Can this be done? How?
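
    An untested sketch of one possible direction: Squid can advertise more than one authentication scheme at once, so a basic-auth helper backed by a local password file can sit alongside ntlm_auth (the helper path and the htpasswd-style file below are assumptions; note that NTLM-capable browsers usually prefer NTLM and only fall back to Basic if the NTLM exchange fails):

      auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
      auth_param ntlm children 5
      auth_param ntlm keep_alive on

      # local, proxy-only accounts; grants no access to domain resources
      auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/local_users
      auth_param basic children 5
      auth_param basic realm Internet access

      acl Authenticated proxy_auth REQUIRED
      http_access allow Authenticated
      http_access deny all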

    Read the article

  • Setup LDAP In WAMP

    - by Cory Dee
    I'm having a really tough time getting the LDAP extension to work in PHP on a WAMP server. Here is what I've done:

    - Went to C:\Program Files\Apache Software Foundation\Apache2.2\modules and made sure that mod_ldap.so exists.
    - Made sure this line in C:\Program Files\Apache Software Foundation\Apache2.2\conf\httpd.conf is not commented out: LoadModule ldap_module modules/mod_ldap.so
    - Made sure this line in C:\Program Files\PHP\php.ini is not commented out: extension=php_ldap.dll
    - Made sure C:\Program Files\PHP is in the Path
    - Made sure C:\Program Files\PHP contains libeay32.dll and ssleay32.dll
    - Restarted Apache

    phpinfo() still doesn't show mod_ldap as being turned on. It shows util_ldap under Loaded Modules, but that's the only reference anywhere to LDAP. For a bit more background, I originally posted this on SO.
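
    A quick sanity check worth adding to the list: util_ldap in phpinfo() only covers the Apache side; the PHP side is easier to confirm from the command line, e.g. (a minimal sketch - assumes the PHP CLI is on the Path):

      php -m | findstr /i ldap
      php -r "var_dump(extension_loaded('ldap'));"

    If extension_loaded() returns false under the CLI too, the problem is in php.ini rather than Apache; in particular, make sure extension_dir points at the folder that actually contains php_ldap.dll.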

    Read the article

  • Importing .eml files into Exchange Discovery Folder

    - by Chad Gorshing
    I need to import over 18 million .eml files into Exchange (this is for a client, so I'm restricted in what I can do - flexibility is limited). They do not want these emails to go to the actual users' mailboxes, so they must not show up in a user's inbox, deleted items, etc. They want to be able to search these emails for litigation purposes - hence the discovery folder. I have looked into the Pickup folder, which does not do what I want. I have also been writing some C# code against the EWS (Exchange Web Services) Managed API, but so far I have not found anything that works for me. The Exchange server is 2010 SP1. I have looked through other questions/answers and they do not really match what I'm trying to accomplish. These are older emails that have already been removed from the users' mailboxes, so turning around and putting them into the users' inboxes would (of course) be very bad.
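
    For reference, a minimal sketch of the MIME-upload route through the EWS Managed API (the mailbox address, file path and credentials are placeholders; PR_MESSAGE_FLAGS is stamped so the item lands as a received message rather than a draft):

      // assumes a reference to Microsoft.Exchange.WebServices.dll (EWS Managed API)
      var service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
      service.Credentials = new WebCredentials("importuser", "password");
      service.AutodiscoverUrl("importuser@example.com");

      byte[] mime = File.ReadAllBytes(@"C:\import\message.eml");
      var message = new EmailMessage(service);
      message.MimeContent = new MimeContent("UTF-8", mime);
      // PR_MESSAGE_FLAGS (0x0E07) = MSGFLAG_READ, so the item is not saved as a draft
      message.SetExtendedProperty(
          new ExtendedPropertyDefinition(0x0E07, MapiPropertyType.Integer), 1);
      message.Save(new FolderId(WellKnownFolderName.Inbox,
          new Mailbox("discovery.search@example.com")));

    At 18 million items, batching (ExchangeService.CreateItems) and impersonating into the target mailbox would matter more than the per-item code.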

    Read the article

  • Can Apache 2 be configured to start sending gzipped data early?

    - by rikh
    We have Apache set up to gzip-compress HTML pages before they are sent to the client browser. However, some of our pages are slow to generate, and it seems that Apache holds on until it has the complete page, compresses it, then sends it to the browser. Big chunks of the page (the main important bits) are actually generated and output fairly quickly. Is it possible to configure Apache to start compressing and sending data for the page as soon as the script starts outputting something? If it is, can you offer any help on how to do this? If not, can you suggest any other way to get gzip compression working for the server? The scripts that generate the pages are written in PHP. We are using Apache 2.0 on Linux.
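
    A minimal sketch of the usual fix, assuming mod_deflate is doing the compression: the filter compresses in chunks (DeflateBufferSize, 8096 bytes by default), so early output mostly stalls in PHP's own buffering rather than in Apache. Flushing from the script pushes each chunk through:

      <?php
      // drop any PHP output buffers so echo reaches Apache immediately
      while (ob_get_level() > 0) {
          ob_end_flush();
      }
      echo $headerHtml;   // $headerHtml: hypothetical quick-to-generate top of the page
      flush();            // hand what we have to Apache/mod_deflate now
      // ...slow page generation continues below...

    Note that zlib.output_compression in php.ini buffers as well, so it should stay off if Apache is the one compressing.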

    Read the article

  • Trying to script rsync using pam_exec

    - by Ricky-Rose
    I'm trying to write a bash script that will execute rsync when called by pam_exec. I've tried a couple of different ways, and I'm not sure what I'm doing wrong. When I try to run the script at login by adding

      session optional pam_exec.so /usr/bin/local/sync.sh

    to my sshd file, it gives me an exit code of 12. If I log in and then manually run the script, it connects to the remote server and lists my files, but it doesn't actually sync anything. I have tried the code below using both $USER and $PAM_USER; $PAM_USER doesn't work at all.

      #!/bin/sh
      rsync -azv -e ssh $USER@remote_server:/home/html/$USER/ /home/html/$USER
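
    One hedged guess at the failure: pam_exec passes PAM_USER to the child through the environment, but only to the script it launches, and under sshd that script runs without a TTY or SSH agent, so any key that needs interaction will make ssh die (rsync's exit code 12 is "error in rsync protocol data stream", typically the remote shell failing). A sketch along those lines, with the key handling made explicit (the key path is an assumption):

      #!/bin/sh
      # pam_exec exports PAM_USER (plus PAM_SERVICE etc.) into our environment
      user="${PAM_USER:?must be run from pam_exec}"
      exec rsync -az \
          -e "ssh -i /etc/sync_key -o BatchMode=yes" \
          "$user@remote_server:/home/html/$user/" "/home/html/$user/"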

    Read the article

  • Which databases support parallel processing across multiple servers?

    - by David
    I need a database engine that can utilize multiple servers for processing a single SQL query in parallel. So far I know this is possible with some engines, though none of them are feasible for me, either because of pricing or missing features. The engines currently known to me are:

    - MS SQL (Enterprise)
    - DB2 (Enterprise)
    - Oracle (Enterprise)
    - GridSQL
    - Greenplum

    Which other engines have this feature? Do you have any experience with using it?

    Edit: I have now proposed a method for creating one myself. Any input is welcome.

    Edit: I have found another one: Informix Extended Parallel Server.

    Read the article

  • Oracle Hangs on Responses Intermittently

    - by Ryan Cook
    I want to preface this with the fact that I am a developer, not even close to a DBA, and I am new to Oracle. OK, here it goes: I have a Java application which uses Spring and Hibernate. It's a simple CRUD app and I will leave the details out, as I don't think they are the issue. I have noticed that my app runs fine when I use MySQL, but when I use an Oracle 10.2 server, every 7th-10th request hangs for 5-10 seconds. My Oracle installation was done by me using all defaults, same as the MySQL install. I don't even know where to start looking. Any ideas? Thanks in advance, and sorry that I lack the details that are most likely required for help.

    Read the article

  • Design a large-scale network for an organization

    - by Essam
    I want to design a large-scale network for an organization with an HQ and two branches, using a class A subnet. If I use the network address 30.0.0.0 for the whole organization, how can it be different from another organization or company which is using the same address in another country? Now, I have three locations for this organization, so I need 5 subnets: one for the HQ, two for branch A and branch B, one for connecting branch A to the HQ, and one for connecting branch B to the HQ, since I will use a central DHCP server at the HQ. Is that number of subnets right? Is it advisable to use class A or class B for this organization in terms of addresses that will be wasted (let's say it is a university with two branches in two different states)? See the sketch below for one possible layout.
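
    For illustration only, one way the five subnets could be carved out of a single class A, sized generously (the specific ranges are arbitrary):

      30.0.0.0/16     HQ LAN
      30.1.0.0/16     Branch A LAN
      30.2.0.0/16     Branch B LAN
      30.255.0.0/30   HQ <-> Branch A WAN link
      30.255.0.4/30   HQ <-> Branch B WAN link

    On the overlap question: 30.0.0.0/8 is publicly routable address space, so unless the organization actually owns it, a private range such as 10.0.0.0/8 would normally be used instead; two organizations can reuse the same private addresses precisely because those addresses never appear on the public Internet.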

    Read the article

  • Nginx Rewrite to Previous Directory

    - by ThinkBohemian
    I am trying to move my blog from blog.example.com to example.com/blog. To do this I would rather not move anything on disk, so instead I changed my nginx configuration file to the following:

      location /blog {
          if (!-e $request_filename) {
              rewrite ^.*$ /index.php last;
          }
          root /home/demo/public_html/blog.example.com/current/public/;
          passenger_enabled off;
          index index.html index.htm index.php;
          try_files $uri $uri/ @blog;
      }

    This works, but when I visit example.com/blog, nginx looks for /home/demo/public_html/blog.example.com/current/public/blog/index.php instead of /home/demo/public_html/blog.example.com/current/public/index.php. Is there a way to put in a rewrite rule so that the server automatically takes the /blog/ directory out? Something like:

      location /blog {
          rewrite \\blog\D \;
      }
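
    A minimal sketch of one way to drop the /blog prefix, assuming the files really do live directly under public/: nginx's alias directive maps the location prefix itself onto a directory, so no rewrite is needed:

      location /blog/ {
          alias /home/demo/public_html/blog.example.com/current/public/;
          index index.php index.html index.htm;
      }

    With alias, a request for /blog/index.php is looked up as .../current/public/index.php rather than .../current/public/blog/index.php.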

    Read the article

  • nginx www.domain.com vs domain.com virtualhost

    - by m33lky
    I have an http block where I include virtual hosts for the different domains hosted on the same server. For each virtual host I do:

      listen domain.com:80;

    Now, domain2.com works fine. However, when I go to www.domain2.com it shows the page for domain1.com! How do I properly configure nginx? Does this have something to do with whether www is a CNAME or an A record?

    Update: it looks like you can do the following:

      listen 80;
      server_name domain.com www.domain.com;

    Read the article

  • Adding a second IP address for IIS - static vs dynamic A records

    - by serialhobbyist
    I'm looking to add a second IP address to IIS so that I can run two sites with different SSL certificates. When I added one on my play box and ran ipconfig /registerdns, both addresses were registered in DNS under the server's name. So I deleted the A record for the new IP address and rebooted; that also registered both names. Then I went into the network configuration for the adapter and, on the DNS tab, unchecked "Register this connection's addresses in DNS". I deleted the A record for the new IP address again and re-ran ipconfig /registerdns. This time it deleted the A record for the old IP address and didn't create one for the new address. Neither of these is what I want: I want the main IP address to be registered and refreshed automatically as a dynamic DNS record, and the second IP address to be registered and managed as a static record. Is there any way to achieve this?
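
    A minimal sketch of the manual half, assuming a Windows DNS server reachable as dns1 and a zone named example.local (both names are placeholders): leave dynamic registration on for the adapter so the primary address keeps refreshing, and pin the second address yourself with dnscmd:

      dnscmd dns1 /RecordAdd example.local websrv-ssl A 10.0.0.42

    Records added this way are static and are not touched by scavenging or by the client's re-registration. On Server 2008 R2 and later, adding the extra address with netsh and skipassource=true is also worth testing; it keeps the address from being used as an outbound source and reportedly keeps it out of dynamic registration as well:

      netsh interface ipv4 add address "Local Area Connection" 10.0.0.42 255.255.255.0 skipassource=true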

    Read the article

  • Status code in nginx try_files directive

    - by Hamish
    Is it possible to use the current status code as a parameter in try_files? For example, we try to provide a host-specific static 503 response, with a server-wide fallback if it isn't found:

      error_page 503 @error503;

      location @error503 {
          root /path_to_static_root/;
          try_files /$host/503.html /503.html =503;
      }

    There are a number of these directives, so it would be convenient to do something like:

      error_page 404 @error;
      error_page 500 @error;
      error_page 503 @error;

      location @error {
          root /path_to_static_root/;
          try_files /$host/$status.html /$status.html =$status;
      }

    But the variables documentation doesn't list anything we could use to do this. Is it possible, or is there an alternative way to do it?
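
    For what it's worth, variables are allowed in the try_files lookup paths (the /$host/ usage above already relies on that); it is only the final =code fallback that must be a literal, and I'm not aware of a built-in variable usable in its place. So a hedged halfway measure is one short location per code, with the repetition squeezed into a variable:

      location @error404 { set $err 404; root /path_to_static_root/; try_files /$host/$err.html /$err.html =404; }
      location @error500 { set $err 500; root /path_to_static_root/; try_files /$host/$err.html /$err.html =500; }
      location @error503 { set $err 503; root /path_to_static_root/; try_files /$host/$err.html /$err.html =503; }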

    Read the article

  • Known Hosts ECDSA Host Key Multiple Domains on One IP

    - by Jonah
    Hello, world! I have a VPS set up with multiple domain names pointing to it. Arbitrarily, I like to access it via SSH through the domain name I'm dealing with. So, for example, if I'm doing something with example1.com, I'll log in with ssh [email protected], and if I'm working with example2.com, I'll log in with ssh [email protected]. They both point to the same user on the same machine. However, because SSH keeps track of the server's fingerprint, it tells me that there is an offending host key and makes me confirm access:

      $ ssh [email protected]
      Warning: the ECDSA host key for 'example2.com' differs from the key for the IP address '123.123.123.123'
      Offending key for IP in /home/me/.ssh/known_hosts:33
      Matching host key in /home/me/.ssh/known_hosts:38
      Are you sure you want to continue connecting (yes/no)?

    Is there a way to ignore this warning? Thanks!
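
    A minimal sketch of one workaround, via ~/.ssh/config: the CheckHostIP option controls exactly this cross-check of the host key against the IP address's known_hosts entry, and turning it off leaves per-hostname key checking intact:

      Host example1.com example2.com
          CheckHostIP no

    Each name then keeps its own known_hosts entry, and the IP-address comparison that triggers the warning is skipped.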

    Read the article

  • WordPress multisite and redirect

    - by Dr I
    I come to you because I'm facing a really strange effect on my hosting. I currently manage a server containing an NGINX/PHP-CGI stack and a WordPress multisite. My sites are created as subsite.domain.tld; for now, my three subsites are correctly accessible through the URL subsite.domain.tld. My goal is to allow each subsite on the host domain to be accessed through its own unique domain. For example, www.domainA.com would map to subsite1.host.domain.tld. I do that using the following record in domainA's public DNS:

      www 10800 IN CNAME subsite1.host.domain.tld.

    But when I try to access www.domainA.com I don't reach subsite1.host.domain.tld; instead I'm redirected to the WordPress root site where I created my network (host.domain.tld). Is there a trick to deal with this?
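
    A hedged observation: a DNS CNAME alone can't do this, because the HTTP request still arrives with Host: www.domainA.com, a name that neither the nginx config nor WordPress knows, so multisite falls back to the network's root site. A minimal sketch of the two pieces usually needed, with placeholder paths:

      server {
          listen 80;
          # accept the mapped domain in addition to the subsite name
          server_name subsite1.host.domain.tld www.domainA.com domainA.com;
          root /var/www/host.domain.tld;
          index index.php;
          location / {
              try_files $uri $uri/ /index.php?$args;
          }
      }

    plus something on the WordPress side (e.g. the WordPress MU Domain Mapping plugin) to map www.domainA.com to the right subsite.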

    Read the article

  • Installing Bugzilla on Ubuntu 9.04 and Plesk

    - by makeflo
    Hey guys. I'm trying to install the latest Bugzilla version on my Ubuntu server (I want to use a subdomain like bugs.domain.com). I have already installed all the necessary Perl modules, and check_modules.pl doesn't show any errors. But when I run the testserver.pl script I get the following:

      TEST-OK Webserver is running under group id in $webservergroup
      TEST-FAILED Fetch of images/padlock.png failed

    I'm also not able to visit ANY file within the bugzilla folder from the browser; I always get a 404 error. The bugzilla folder and all contained files are owned by apache. I tried adding the Apache configuration from the installation guide to the httpd.include file of the domain and to the vhosts.conf file of the subdomain as well. I don't know what to do... Playing with Plesk's suexecgroup doesn't bring any solution... I hope you can help me! Thanks in advance!
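
    The blanket 404s suggest Apache is never being told about the directory at all, i.e. the include isn't being picked up. For comparison, this is roughly the stanza the Bugzilla installation guide asks for (the path is a guess at a Plesk layout; in Plesk the usual place is the domain's conf/vhost.conf, followed by rebuilding the web server configuration):

      <Directory /var/www/vhosts/domain.com/subdomains/bugs/httpdocs>
          AddHandler cgi-script .cgi
          Options +ExecCGI +FollowSymLinks
          DirectoryIndex index.cgi
          AllowOverride Limit FileInfo Indexes Options
      </Directory>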

    Read the article

  • Apache mod_proxy parameters

    - by mike
    Hi! I have a machine running Apache with mod_proxy that I'm using to proxy a local Tomcat server running on another port. The problem is that Tomcat does not support wildcard subdomains (the whole reason for using Apache/mod_proxy), and our app uses the subdomain to figure out which account the data should come from. With that said, is there a way to pass the subdomain as a URL parameter via mod_proxy? For example, I have this in a virtual host block:

      ProxyPass / http://example.com:8080/

    and I can access the site from any subdomain. Would it be possible to do something like:

      ProxyPass / http://example.com:8080/?subdomain=the_sub_domain_requested

    Thanks for any and all help... Mike
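
    ProxyPass itself can't template the query string, but mod_rewrite can proxy (the [P] flag hands the rewritten URL to mod_proxy) and it can capture the Host header first. A minimal sketch, assuming mod_rewrite and mod_proxy are both loaded and example.com is the base domain:

      RewriteEngine On
      # %1 = the subdomain captured from the Host header
      RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
      RewriteRule ^/(.*)$ http://example.com:8080/$1?subdomain=%1 [P,QSA]

    [QSA] appends any query string the client already sent, so existing parameters survive.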

    Read the article

  • keepalived questions (requirements, abilities, limitations)

    - by Poni
    1) What are keepalived's (physical/network) requirements? Do the two (or more) keepalived nodes need to be connected to the same switch (something related to broadcasting, maybe)?
    2) Can keepalived nodes run on different, "internet" networks?
    3) Does keepalived depend on the router? (As far as I understand, the virtual IP should point to the real router/switch that connects both nodes.)
    4) Is keepalived "service-independent"?
       - What is keepalived's domain of involvement? IPs only, or is it service/protocol oriented?
       - Does it deal ONLY with IP, or is it designed for HTTP, for example?
       - In other words, can I use it for a custom (network-based) app?
    5) Can I have more than one failover server? If the answer to question #4 is "yes", i.e. it depends on the service type, then is there any general alternative? Preferably easy to install/configure :)
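
    To make question 1 concrete, a minimal sketch of a keepalived VRRP pair (MASTER side shown; the peer runs state BACKUP with a lower priority). VRRP advertisements travel as multicast on the local segment, which is the root of the same-switch/same-L2 expectation:

      vrrp_instance VI_1 {
          state MASTER
          interface eth0
          virtual_router_id 51
          priority 100
          advert_int 1
          virtual_ipaddress {
              192.168.1.100
          }
      }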

    Read the article

  • nginx status codes 200 and 304

    - by Chamnap
    I'm using nginx + Passenger. I'm trying to understand the nginx responses 200 and 304. What do they both mean? Sometimes it responds with 304, other times with 200. Reading the YUI blog, it seems the browser needs the "Last-Modified" header to validate against the server. I'm wondering why the browser needs to verify the last-modified date. Here is my nginx configuration:

      location / {
          root /var/www/placexpert/public;   # <--- be sure to point to 'public'!
          passenger_enabled on;
          rack_env development;
          passenger_use_global_queue on;

          if ($request_filename ~* ^.+\.(jpg|jpeg|gif|png|ico|css|js|swf)$) {
              expires max;
              break;
          }
      }

    How would I add the "Last-Modified" header to the static files? Which value should I set?
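
    On the configuration side, a minimal sketch of the more idiomatic shape (a separate location instead of if): for plain static files nginx fills in Last-Modified from the file's mtime by itself, and a later request carrying If-Modified-Since gets the 304. In short, 200 means a full body was sent; 304 means "your cached copy is still valid" and carries no body:

      location ~* \.(jpg|jpeg|gif|png|ico|css|js|swf)$ {
          root /var/www/placexpert/public;
          expires max;
      }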

    Read the article

  • How to perform diagnostics (stress test) on HP Smartarray Controller

    - by pepoluan
    At my office, we have a server whose RAID controller (HP Smart Array) we suspect is failing. A cold boot, however, does not indicate anything. Can anyone recommend a method to stress-test the controller? Symptoms that make me suspect a failing controller:

    - Disk access is getting slower, and the queue is getting longer
    - Running dmesg on the XenServer console, I see many messages similar to this one (the sector number is never the same):

      end_request: I/O error, dev tda, sector 253655584

    - When we move the VM to another physical host, we no longer see the above message
    - Running idle (without any running VM), dmesg no longer emits the above message

    A search on Google indicated that the above message is most commonly associated with a failing Smart Array controller. How can I be sure that the controller is failing?
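
    A hedged starting point for poking at the controller directly, assuming HP's hpacucli utility is installed (package availability varies by platform) - first the controller's own view of array health, then sustained read load while watching dmesg:

      hpacucli ctrl all show status
      hpacucli ctrl slot=0 pd all show status

      # non-destructive, read-only pass over the whole device
      # (replace /dev/sda with the Smart Array logical volume)
      badblocks -sv -b 4096 /dev/sda

    If I/O errors reappear under load on this host but not after moving the same workload elsewhere, that points harder at the controller than at the drives.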

    Read the article

  • Apache 2.4 and PHP 5.4 getting connection reset errors in the browser

    - by zuallauz
    Over the weekend I upgraded my development web server to Apache 2.4 and PHP 5.4. My web application, which previously worked great on Apache 2.2 and PHP 5.3, now gets "connection was reset" messages in Firefox (see screenshot). I am connecting to the Linux machine over the local LAN. I assume it might be something to do with the new version of Apache or PHP, or the new LAMP stack I downloaded from BitNami. It seems to happen every 5-10 requests, and sending a POST request from a page is more likely to trigger it. Is it timing out the script or something? These are just basic dynamic pages that worked perfectly on Apache 2.2 and PHP 5.3. Here are my httpd.conf and php.ini if they hold any clues. Any ideas? Any help much appreciated.
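
    One hedged suspect specific to the 2.4 jump: Apache 2.4 stacks typically enable mod_reqtimeout, and a tight RequestReadTimeout shows up exactly as sporadic browser-side resets, especially on POSTs. Loosening it is a cheap test (the values here are arbitrary):

      <IfModule reqtimeout_module>
          RequestReadTimeout header=60,MinRate=500 body=60,MinRate=500
      </IfModule>

    If the resets persist, the Apache error_log at LogLevel debug is the next place to look.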

    Read the article

  • Multi-user Windows login?

    - by DennyHalim.com
    I need a way to log multiple users into Windows at startup (XP/Vista/7). Each time Windows starts, all registered users should automatically log in, and all the apps in their Startup folders should start under their own credentials. Any ideas how I could achieve this? I need a 'cheap' alternative to a Windows VPS. Lots of people need a VPS just to run certain apps and leave them running, and it's unlikely they need full admin access beyond installing those apps. Instead of each user having their own VPS running simple apps, it might be cheaper to have one multi-user server to accomplish this.

    Read the article

  • Fine-tuning a LNMP stack

    - by Norman
    I'm in the process of setting up a server with 4 GB RAM and 2 CPUs. The stack will be CentOS + nginx + MySQL + PHP (with APC) and spawn-fcgi. It will serve 10 WordPress blogs, 3 of which receive about 20,000 hits per day. Each WordPress instance is equipped with W3 Total Cache. I have a few variables to play with:

    - nginx (how many worker_processes, worker_connections, etc.)
    - PHP (what parameters in php.ini should I change? What about APC?)
    - spawn-fcgi (right now I have 6 php-cgi processes spawned; how many should I have?)

    I realize it's hard to tell without testing, but if you could provide me with some ballpark numbers, that would be helpful too.
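
    Purely as ballpark starting values for a 2-CPU / 4 GB box (assumptions to measure against, not a formula):

      worker_processes 2;            # one per CPU core
      events {
          worker_connections 1024;   # per worker
      }

    On the PHP side, an apc.shm_size on the order of 128M should comfortably hold ~10 WordPress codebases, and keeping the existing 6 php-cgi children until they are visibly saturated is a reasonable spawn-fcgi starting point - then watch memory and load and adjust.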

    Read the article

  • postfix/postdrop Issue with Solaris 10 (sparc) - permissions

    - by Zayne
    I am trying to get Postfix (installed from Blastwave) working on a Solaris 10 server, but only root is allowed to send mail. The problem appears to be permission-related, in postdrop:

      postdrop: warning: mail_queue_enter: create file maildrop/905318.27416: Permission denied

    I've checked that /var/opt/csw/spool/postfix/maildrop and /var/opt/csw/spool/postfix/public are both in the 'postdrop' group. main.cf contains setgid_group = postdrop. ppriv on postdrop as a non-root user reports:

      postdrop[27336]: missing privilege "file_dac_write" (euid = 103, syscall = 5) needed at ufs_iaccess+0x110

    I'm at a loss as to what to do next. I don't have much experience with Solaris; I use Linux daily. Any suggestions?
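
    A minimal sketch of the usual first check, since this smells like a missing setgid bit on the binary rather than a queue-directory problem (paths assume the Blastwave/CSW layout): postdrop must run setgid postdrop for unprivileged users to write into maildrop:

      ls -l /opt/csw/sbin/postdrop   # want: -r-xr-sr-x ... root postdrop
      chgrp postdrop /opt/csw/sbin/postdrop
      chmod 2755 /opt/csw/sbin/postdrop
      postfix check                  # re-scans queue ownership/permissions and reports leftovers

    (Postfix's own "postfix set-permissions", where the CSW build provides it, resets all of these in one go.)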

    Read the article
