Search Results

Search found 41598 results on 1664 pages for 'segmentation fault'.

Page 342/1664

  • Passenger does not start puppet master under nginx

    - by Anadi Misra
    On the server:

        [root@bangvmpllDA02 logs]# ruby -v
        ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux]
        [root@bangvmpllDA02 logs]# puppet --version
        3.0.1

    and

        [root@bangvmpllDA02 logs]# service nginx configtest
        nginx: the configuration file /apps/nginx/nginx.conf syntax is ok
        nginx: configuration file /apps/nginx/nginx.conf test is successful
        [root@bangvmpllDA02 logs]# service nginx status
        nginx (pid 25923 25921 25920 25917 25908) is running...
        [root@bangvmpllDA02 logs]#

    However, none of my agents are able to connect to the master; they all fail with errors like so:

        [amisr1@blramisr195602 ~]$ puppet agent --test --verbose --server bangvmpllda02.XXX.com
        Info: Creating a new SSL certificate request for blramisr195602.XXX.com
        Info: Certificate Request fingerprint (SHA256): 26:EB:08:1F:82:32:E4:03:7A:64:8E:30:A3:99:93:26:E6:66:B9:B0:49:B6:08:F9:67:CA:1B:0C:00:B9:1D:41
        Error: Could not request certificate: Error 405 on SERVER: <html>
        <head><title>405 Not Allowed</title></head>
        <body bgcolor="white">
        <center><h1>405 Not Allowed</h1></center>
        <hr><center>nginx</center>
        </body>
        </html>
        Exiting; failed to retrieve certificate and waitforcert is disabled

    When I check the logs on the puppet master:

        [root@bangvmpllDA02 logs]# tail puppet_access.log
        [05/Dec/2012:17:45:18 +0530] "GET /production/certificate/ca? HTTP/1.1" 404 162 "-" "Ruby"
        [05/Dec/2012:18:32:23 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-"
        [05/Dec/2012:18:33:33 +0530] "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-"
        [05/Dec/2012:18:33:33 +0530] "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1" 404 162 "-" "-"
        [05/Dec/2012:18:33:33 +0530] "PUT /production/certificate_request/sl63anadi.XXX.com HTTP/1.1" 405 166 "-" "-"

    and the error logs show that nginx is not really able to process the request well:

        2012/12/05 18:33:33 [error] 25920#0: *23 open() "/etc/puppet/rack/public/production/certificate/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"
        2012/12/05 18:33:33 [error] 25920#0: *24 open() "/etc/puppet/rack/public/production/certificate_request/sl63anadi.XXX.com" failed (2: No such file or directory), client: 10.209.47.26, server: , request: "GET /production/certificate_request/sl63anadi.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"
        2012/12/05 18:47:56 [error] 25923#0: *27 open() "/etc/puppet/rack/public/production/certificate/ca" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate/ca? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"
        2012/12/05 18:47:56 [error] 25923#0: *28 open() "/etc/puppet/rack/public/production/certificate_request/blramisr195602.XXX.com" failed (2: No such file or directory), client: 10.209.47.31, server: , request: "GET /production/certificate_request/blramisr195602.XXX.com? HTTP/1.1", host: "bangvmpllda02.XXX.com:8140"

    Passenger does not show any application groups either:

        [root@bangvmpllDA02 nginx]# passenger-status
        ----------- General information -----------
        max      = 15
        count    = 0
        active   = 0
        inactive = 0
        Waiting on global queue: 0

        ----------- Application groups -----------
        [root@bangvmpllDA02 nginx]#

    Here's my nginx configuration:

        [root@bangvmpllDA02 logs]# cat ../nginx.conf
        user puppet;
        worker_processes 4;

        #error_log logs/error.log;
        #error_log logs/error.log notice;
        error_log logs/error.log info;

        #pid logs/nginx.pid;

        events {
            use epoll;
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log logs/access.log main;

            sendfile on;
            #tcp_nopush on;
            server_tokens off;

            #keepalive_timeout 0;
            keepalive_timeout 120;

            gzip on;
            gzip_http_version 1.1;
            gzip_disable "msie6";
            gzip_vary on;
            gzip_min_length 1100;
            gzip_buffers 64 8k;
            gzip_comp_level 3;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml;

            server {
                listen 80;
                server_name bangvmpllda02.XXXX.com;
                charset utf-8;
                #access_log logs/http.access.log main;

                location / {
                    root html;
                    index index.html index.htm index.php;
                }

                #error_page 404 /404.html;

                # redirect server error pages to the static page /50x.html
                #
                error_page 500 502 503 504 /50x.html;
                location = /50x.html {
                    root html;
                }

                # proxy the PHP scripts to Apache listening on 127.0.0.1:80
                #
                #location ~ \.php$ {
                #    proxy_pass http://127.0.0.1;
                #}

                # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
                #
                location ~ \.php$ {
                    root html;
                    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
                    include fastcgi_params;
                }

                # deny access to .htaccess files, if Apache's document root
                # concurs with nginx's one
                #
                location ~ /\.ht {
                    access_log off;
                    log_not_found off;
                    deny all;
                }

                location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
                    access_log off;
                    log_not_found off;
                    expires 2d;
                }
            }

            # Passenger needed for puppet
            passenger_root /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18;
            passenger_ruby /usr/bin/ruby;
            passenger_max_pool_size 15;

            server {
                ssl on;
                listen 8140 default ssl;
                server_name bangvmpllda02.XXXX.com;

                passenger_enabled on;
                passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
                passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
                passenger_min_instances 5;

                access_log logs/puppet_access.log;
                error_log logs/puppet_error.log;
                root /etc/puppet/rack/public;

                ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXX.com.pem;
                ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXX.com.pem;
                ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
                ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
                ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
                ssl_prefer_server_ciphers on;
                ssl_verify_client optional;
                ssl_verify_depth 1;
                ssl_session_cache shared:SSL:128m;
                ssl_session_timeout 5m;
            }
        }

    and the puppet.conf:

        [main]
            # The Puppet log directory.
            # The default value is '$vardir/log'.
            logdir = /var/log/puppet

            # Where Puppet PID files are kept.
            # The default value is '$vardir/run'.
            rundir = /var/run/puppet

            dns_alt_names = devops.XXXX.com,devops
            confdir = /etc/puppet
            vardir = /var/lib/puppet
            storeconfigs = true
            storeconfigs_backend = puppetdb
            thin_storeconfigs = false
            async_storeconfigs = false
            ssl_client_header = SSL_CLIENT_S_D
            ssl_client_verify_header = SSL_CLIENT_VERIFY

            # Where SSL certificates are kept.
            # The default value is '$confdir/ssl'.
            ssldir = $vardir/ssl

    Any ideas where I am going wrong? I checked the directory permissions; /usr/share/puppet, /etc/puppet and /var/lib/puppet (and the files inside them) are owned by the puppet user.
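    One hedged way to confirm whether requests on port 8140 are reaching the Passenger-hosted Puppet application at all, rather than being answered by nginx itself, is to hit one of the endpoints from the access log above with plain curl (hostname and path are taken from the logs; nothing else is assumed):

        curl -k -v https://bangvmpllda02.XXX.com:8140/production/certificate/ca
        # a Puppet-generated reply (even an error) means Passenger picked the request up;
        # an nginx HTML error page like the 405 above means the request never reached the app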

  • KVM CLI install for CentOS 6.3 defaults to Minimal Install

    - by i.h4d35
    So now I've installed KVM (and its associated tools and packages: libvirt, VMM, etc.). On the GUI (i.e. using VMM), installation works as it's supposed to. However, when I try to create a VM using the command-line interface, the OS (I am working with CentOS 6.3) defaults to a Minimal Install instead of giving me options to choose from at installation time. I am trying to install using the following command:

        virt-install \
          --connect qemu:///system \
          --virt-type kvm --name testVM2 \
          --ram 512 --disk path=/var/lib/libvirt/images/testVM2.img,size=8 --vnc \
          --cdrom /media/db18de8e-0853-49fb-80de-5c794d58a46f/CentOS-6.3-x86_64-bin-DVD1.iso \
          --network network=default

    Specifying the --os-type or --os-variant parameters doesn't make a difference. Is there something I am missing, or some other parameter that I must specify? Thanks in advance.
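    As a hedged aside rather than a confirmed fix: --os-type/--os-variant mostly tune the virtual hardware defaults, so on their own they would not change which package set anaconda installs. A variant sometimes used to get the full interactive installer is a tree-based install via --location instead of --cdrom; the tree path and VM name below are placeholders for an extracted or mounted DVD:

        virt-install \
          --connect qemu:///system \
          --virt-type kvm --name testVM3 \
          --ram 512 --disk path=/var/lib/libvirt/images/testVM3.img,size=8 --vnc \
          --location /var/lib/libvirt/trees/centos-6.3-x86_64 \
          --network network=default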

  • haproxy: Is there a way to group acls for greater efficiency?

    - by user41356
    I have some logic in a frontend that routes to different backends based on both the host and the url. Logically it looks like this:

        if hdr(host) ends with 'a.domain.com':
            if url starts with '/dir1/':
                use backend domain.com/dir1/
            elif url starts with '/dir2/':
                use backend domain.com/dir2/
            # ... else if ladder repeats on different dirs
        elif hdr(host) ends with 'b.domain.com':
            # another else if ladder exactly the same as above
            # ...
        # ... else if ladder repeats like this on different domains

    Is there a way to group acls to avoid having to repeatedly check the domain acl? Obviously there needs to be a use backend statement for each possibility, but I don't want to have to check the domain over and over because it's very inefficient. In other words, I want to avoid this:

        use backend domain.com/url1/ if acl-domain.com and acl-url1
        use backend domain.com/url2/ if acl-domain.com and acl-url2
        use backend domain.com/url3/ if acl-domain.com and acl-url3
        # tons more possibilities below

    because it has to keep checking acl-domain.com. This is particularly an issue because I have specific rules for subdomains such as a.domain.com and b.domain.com, but I want to fall back on the most common case of *.domain.com. That means every single rule that uses a specific subdomain must be checked prior to *.domain.com which makes it even more inefficient for the common case.
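    As a concrete illustration of that pattern (ACL and backend names here are invented for the example, not taken from the question), the repetitive form in actual HAProxy syntax would look roughly like this:

        acl host_a   hdr_end(host) -i a.domain.com
        acl url_dir1 path_beg      -i /dir1/
        acl url_dir2 path_beg      -i /dir2/

        use_backend bk_a_dir1 if host_a url_dir1
        use_backend bk_a_dir2 if host_a url_dir2
        # ... one use_backend line per host/path combination, each re-evaluating
        # the host ACL, which is exactly the repetition being asked about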

  • Restarting nginx with Capistrano results in 502 Bad Gateway

    - by blee
    Here's what cap deploy does:

        sudo -p 'sudo password: ' -u root /var/rails_apps/fooapp/current/script/process/reaper

    reaper simply contains /etc/init.d/nginx restart. When I run the same command from the shell, I do not get a 502--everything is fine. The nginx error.log is empty. Any thoughts on how to troubleshoot? Thanks in advance for your thoughts.

  • How do I debug this FS error on a flash device?

    - by abc
    I have console access to an embedded Linux device. This device has flash memory, part of which is partitioned as a FAT filesystem. It's running linux-2.6.31. However, I have been seeing these errors on the console lately, and the FAT filesystem becomes read-only:

        111109:154925 FAT: Filesystem error (dev loop0)
        111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0)
        111109:154925 FAT: Filesystem error (dev loop0)
        111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0)

    I cannot understand why this happened. What is the root cause, and what is the fix? I would appreciate answers that point me toward how to investigate the possible root cause of this issue on the device.
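    Since the errors name dev loop0, a hedged starting point (assuming the FAT image is attached through a loop device and that dosfstools and the usual util-linux utilities are available on the device, or on a host the image can be copied to):

        # show which backing file the loop device in the error maps to
        losetup -a
        # read-only consistency check of the FAT filesystem; -n makes no changes
        fsck.vfat -n /dev/loop0
        # kernel messages from around the time the filesystem went read-only
        dmesg | grep -i fat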

  • Turning a log file into a sort of circular buffer

    - by pachanga
    Folks, is there a *nix solution which would make a log file act as a circular buffer? For example, I'd like log files to store a maximum of 1 GB of data and discard the oldest entries once the limit is reached. Is it possible at all? I believe that in order to achieve this, the log file would have to be turned into some sort of special device... P.S. I'm aware of the various log-rotation tools, but that is not what I need. Log rotation requires lots of I/O and usually happens once a day, while I need a "runtime" solution.

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention of stopping using them, but we would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently, we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service; this should be enough.

    Requirements: Each server runs Linux CentOS 6.3. Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ). Some of the servers are capable of serving static files and Python-driven REST APIs. Some of the servers host a Cassandra database cluster. One or more of the servers are Redis database servers. One or more of the servers are PostgreSQL servers.

    Questions: What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with servers hosting Redis that need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together? Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked; I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the Wi-Fi cards in the desktops or any other unexpected hardware limitation? What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux CentOS 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this? What other things are we missing that we should be concerned about? Thanks so much!
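    On the imaging question above, a hedged sketch of what the base system already provides (dd ships with CentOS 6.3; the device name, mount point, and file name are placeholders):

        # raw image of the system disk onto a USB drive mounted at /mnt/usb
        dd if=/dev/sda of=/mnt/usb/redis-node.img bs=4M conv=sync,noerror
        # restoring later just reverses if= and of=

    Dedicated imaging tools such as Clonezilla are not part of the base install, so anything beyond a raw dd copy means adding software.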

  • Unable to map to web folder using WebDAV client on Windows Server 2008 R2

    - by user74989
    I have a client running Windows Server 2008 R2 on several servers. One of the servers is also running SharePoint 3.0, and my client has created a web folder to map to. I can map to the web folder from all Server 2008 R2 boxes that have the WebDAV client (part of the Desktop Experience feature) installed, except for the server the folder resides on. When I attempt to map to the web folder on the server on which the folder resides, I am repeatedly prompted to enter my credentials. I am using the same account that I used to map the web folder on the other servers. I have also tried mapping from the command line and receive 'Access Denied'. What may be causing the problem? I would think that if I can map to the drive from one server, I should be able to map the drive from the rest as long as the WebDAV client is installed, especially on the server where the folder is located. Jesse
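    For reference, the command-line mapping mentioned above is typically done with net use against the WebDAV URL; the server name, site path, and account below are placeholders, not values from the question:

        net use Z: https://sharepointserver/sites/team * /user:DOMAIN\jesse
        rem the * makes net use prompt for the password instead of taking it on the command line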

  • How does virtual machine port opening work

    - by Xianlin
    I have a question regarding VM ports. Say I have a virtual machine and a host machine. The open ports on the host are 80, 22, and 443 only. If I opened ports 80, 22, and 443 on the VM, it should be working. However, if I opened port 21 on the VM, will it work? If it works, does it mean that port 21 on the host is opened also? My understanding is that the network traffic goes from the VM's virtual network adapter to the host's physical network adapter, so the ports on these two network adapters should match. Am I correct to say this?
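    One hedged way to test this directly from a third machine (host names and the port are placeholders; the outcome depends on whether the VM uses bridged or NAT networking, which the question does not say):

        # does the guest answer on port 21 at its own address? (bridged case)
        nc -zv guest.example.com 21
        # does the host answer on port 21 on the guest's behalf? (NAT/port-forwarding case)
        nc -zv host.example.com 21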

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to back up using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to back up to an appliance located in a secure office far away from the server room. Offsite backups to an external hard drive could be managed from the appliance. The benefits of such a setup would be: 1) very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance; 2) offsite backups could be managed by multiple trusted people without granting access to the server room; 3) Windows Imaging provides poor man's deduplication, so each backup volume can contain a decent backup history. I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low- to medium-cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

  • OpenVPN, Great on Windows, VERY slow on Mac...

    - by Phsion
    Hello, I'm not really an IT pro, but this seemed like the best place to ask this question... I have set up VPN networks in the past, for fun, and everything was great, but now I've set one up for my boss, and while my computers all work great, his Mac machines are almost too slow to work with. It's pretty much vanilla configs all around; anyone have any ideas? It's a TUN routing setup over UDP.

    Back Story: My boss travels a lot and wants to be able to access all his files from the road, and he is also pretty paranoid about security (even though he knows almost nothing about computers). So I figured a VPN would be the answer. I went with OpenVPN, but there are some other issues. The only ISP we can get in our area besides dial-up is a crappy satellite provider that doesn't offer public IPs unless you're willing to pay, so while the computers and VPN setup are pretty vanilla, the routing and structure are strange to get around this limitation.

    Specs: It's OpenVPN 2, and there are six machines using it (only three actually use it, the rest are my test machines): one Windows 7 laptop, two XP desktops, one OS X 10.5 desktop, one 10.6 desktop, and one 10.6 laptop. One XP desktop sits at my house and acts as the server (6Mbs/2Mbs FIOS connection). One XP desktop sits at the office and hosts a webpage that will wake up the main Mac desktop from sleep, and also ping all the machines on the VPN and show their status. The main office Mac (10.6) stays in sleep mode until it gets the Wake-on-LAN packet from the office XP, and then it auto-connects to the VPN and opens itself up. The reason for all this is that the satellite private-IP crap means I can't directly access the office machines outside of the LAN, so everyone connects to my house first, and then they talk to each other from there. The Wake-on-LAN weirdness is because my boss doesn't want to leave the main Mac on all the time, and making a quick and dirty webpage was the easiest way to send a magic packet from inside the LAN without confusing my boss. The VPN uses client config files to give the clients static IPs. The only thing I found in Google was some changes to the VPN MTU settings (down to 1400), but no real help. Oh, and I forgot... all the Windows machines just have OpenVPN start as a service. The Mac laptop uses Tunnelblick (an OpenVPN GUI) and the Mac desktops use OpenVPN in normal command-line mode.

    Server Config:

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        management localhost ####
        port ####
        proto udp
        dev tun
        ca #######
        cert #######
        key ######
        dh ######
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-config-dir ccd
        route 10.8.0.0 255.255.255.252
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status log

    Client Configs (all are simple variations on this):

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        client
        dev tun
        proto udp
        remote ######## ####
        resolv-retry infinite
        nobind
        persist-key
        presist-tun
        ca #####
        cert #####
        key #####
        ns-cert-type server
        comp-lzo
        verb 3
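    Since the MTU/fragment tuning is the one lead mentioned above, a hedged way to pick those numbers is to probe, from one of the Macs, the largest packet that crosses the satellite link without fragmentation (the server address is a placeholder; -D and -s are the OS X ping flags for "don't fragment" and packet size):

        # 1472 = 1500 minus 28 bytes of IP + ICMP headers; lower the size until the ping succeeds
        ping -D -s 1472 vpn.server.example.com
        ping -D -s 1372 vpn.server.example.com

    Whatever size gets through, plus the 28-byte header overhead, is an upper bound to aim tun-mtu/fragment/mssfix at on both ends.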

  • Small domain names

    - by Daniel Moura
    I want to register a short domain name. I want it to be easy to use from a mobile phone. The '.com' extension is hard to use because you have to press 'o' then 'm'. Does anyone have any suggestions for short domain extensions, and where to buy them?

  • GPO refresh error - Policy Refresh has not completed in the expected time. Exiting...

    - by Albert Widjaja
    Hi all, I'm having a problem with some GPO changes that I'd like to force onto my terminal server users. Here's what I've done: I made the necessary changes on one of the domain controllers to disable the GPO which applies to my Terminal Server user OU, and then I went to the Terminal Server (mstsc /admin console) to perform the GPO refresh using the /force parameter. However, I got this error instead:

        C:\Documents and Settings\Administrator>gpupdate /force
        Refreshing Policy...

        User Policy Refresh has not completed in the expected time. Exiting...
        User Policy Refresh has completed.
        Computer Policy Refresh has not completed in the expected time. Exiting...
        Computer Policy Refresh has completed.

    But the changes still have no effect when I log in to the terminal server. Is there any way to make them take effect immediately? Thanks
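    A hedged follow-up using only standard tools (nothing specific to this environment is assumed): gpresult shows whether the changed GPO is actually in the user's resulting set of policy, and the /sync switch makes the next foreground refresh run synchronously instead of timing out in the background:

        gpresult
        gpupdate /force /sync

    /sync still needs a logoff (for user policy) or a reboot (for computer policy) to complete, so it is not instant, but it separates "the refresh failed" from "the setting only applies at logon".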

  • I need to create a volume/symbolic link from a UNC Path

    - by Sebas
    I have a workstation with Windows XP, and I need to make a symbolic link or mount a UNC path like a local drive. I need the same behavior that Daemon Tools produces when you mount an .iso file, but with a remote directory. This is because I have a software client that performs several tasks, but only with local drives and directories. The remote UNC path is on a NAS server; that's why I need to perform all the tasks from a workstation. Thanks a lot!
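    For illustration, the two built-in XP commands closest to what is described (drive letters, share, and folder names are placeholders):

        rem map the UNC path to a drive letter
        net use N: \\nas\share
        rem or expose an already-reachable folder under a second drive letter
        subst X: N:\some\folder

    Note that both may still look like network storage to programs that check the drive type, which may or may not satisfy the client software in question.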

  • Changing the mac address in a libvirt xml config file breaks network connectivity for the guest

    - by foob
    I'm using Xen with libvirt and trying to set it up on a bridged interface. I am able to install an OS and everything works as I would expect. If I save the XML output from "virsh dumpxml guest", edit the MAC address for the interface, and then define the domU with this new XML file, I find that traffic is no longer forwarded from the vif0.0 interface to br0. The ifcfg-eth0 file on the guest was automatically updated to reflect the new MAC address, and the ifconfig output looks the same. Does anyone know why this is happening, or how to properly change the MAC address for a libvirt configuration?
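    For context, the element being edited in the dumped XML looks roughly like this (the MAC, using the Xen 00:16:3e prefix, and the bridge name are illustrative, since the question does not show the actual values):

        <interface type='bridge'>
          <mac address='00:16:3e:12:34:56'/>
          <source bridge='br0'/>
        </interface>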

  • Using ClearOS as a gateway/firewall/mailserver

    - by Elzenissimo
    Just installed ClearOS on a PC to act as our firewall first, and then to act as an internal mail server. My question is: can I create a mail server that routes mail through to our ISP's mail server, without having to contact the ISP and set up MX records, etc.? We are a small business (5 PCs + data server), and the reason this is interesting is that we need to keep a record of outgoing mail from certain users, as well as do spam and virus filtering.
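    What is being described is a smarthost (relay) setup. As a hedged illustration only, since ClearOS is normally configured through its web interface and the ISP hostname here is a placeholder, the underlying idea on a Postfix-style MTA is a single directive:

        # /etc/postfix/main.cf
        relayhost = [smtp.isp.example.com]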

  • Can you upgrade OEM Office with an OEM Upgrade

    - by LuckyLindy
    We have a bunch of computers at work that have OEM Office 2000. We have all the material, CDs, etc., and amazingly the computers still work well (they were top of the line when purchased in 2002). However, we'd like to upgrade to Office 2003, our corporate standard. We've found OEM Office 2003 upgrade software online for ~$60 apiece, which would save us thousands over installing retail upgrades or volume licenses. But can we do this? I haven't been able to get a clear answer from Microsoft or anyone else as to whether OEM upgrades can be applied by non-System Builders to OEM Office.

  • Securing a Windows Server 2008 R2 Public Web Server

    - by Denny Ferrassoli
    I'm setting up a public web server: Windows Server 2008 R2, IIS 7.5. Does anyone have a tutorial / walkthrough / tips on properly securing a public web server? I've seen a few tutorials, but mostly focused on Windows Server 2003. What I've done so far:

        - Created a specific user account for the website / app pool
        - Renamed the Admin account
        - Installed FTPS
        - Configured the firewall to block any non-public service (web / https)
        - Configured the firewall to allow access to management interfaces (RDP, IIS management, FTP) only from specific IP addresses

    Maybe a few other things, but I can't remember at the moment... ICMP is allowed... should I disable all of it except ping? A port scan reveals only the web and https ports. Any other suggestions? Thanks
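    As a hedged example of the IP-scoped management rules mentioned above (the rule name and admin address are placeholders), using the netsh advfirewall syntax built into Server 2008 R2:

        netsh advfirewall firewall add rule name="RDP from admin IP" dir=in action=allow protocol=TCP localport=3389 remoteip=203.0.113.10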

  • SQL Server 2008 Express - "Best" backup solution?

    - by alexn
    What backup solutions would you recommend when using SQL Server 2008 Express? I'm pretty new to SQL Server, but coming from a MySQL background I thought of setting up replication on another computer and just taking xcopy backups of that server. Unfortunately, replication is not available in the Express edition. The site is heavily accessed, so there can be no delays or downtime. I'm also thinking of doing a backup twice a day or something. What would you recommend? I have multiple computers I can use, but I don't know if that helps me, since I'm using the Express version.
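    For reference, a hedged sketch of the native backup that Express does support (database name and paths are placeholders; Express has no SQL Server Agent, so this kind of command is usually scheduled with Windows Task Scheduler):

        sqlcmd -S .\SQLEXPRESS -E -Q "BACKUP DATABASE [MyDb] TO DISK = N'D:\Backups\MyDb.bak' WITH INIT, CHECKSUM"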

  • Event ID 8021 The browser was unable to retrieve a list of servers from the browser master

    - by Ash
    We have a LAN where workstations are randomly losing network connectivity for brief moments of time. The workstations can also take a long time to log in to the domain. During our troubleshooting we have found an error logged on a few Windows 7 workstations:

        Warning BROWSER 8021
        The browser was unable to retrieve a list of servers from the browser master \\random-pc on the network \Device\NetBT_Tcpip_{BBABCDE9-D8A0-4399-93F2-492FE0848B12}. The data is the error code.

    What do these errors mean? Which computers should have the Computer Browser service enabled: workstations and/or servers? The environment is a mix of Windows 7 and Windows XP workstations on a Windows Server SBS 2011 SP1 domain.
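    A hedged way to see how a given machine is participating in browser elections (standard built-in tools; "Browser" is the short service name of the Computer Browser service):

        sc query browser
        reg query HKLM\SYSTEM\CurrentControlSet\Services\Browser\Parameters /v MaintainServerList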

  • High CPU usage when running a CentOS guest in VirtualBox

    - by sagi
    I am running CentOS 5.3 as a VirtualBox 3.0.0 guest on a Windows XP host. The Windows host CPU usage is constantly at 50%, although the CentOS guest is completely idle (i.e. 0.00 load average). I know this is a common problem related to the 1000 Hz frequency that the CentOS kernel runs at, and previously special kernel-vm packages were released to resolve the issue. However, these packages are out of date, and the README says that they are not necessary as of CentOS 5.3. I found out that there is supposedly a kernel parameter divider=10 that reduces the frequency to 100 Hz with the standard kernel, but it doesn't seem to have any effect when running inside VirtualBox. Is there any way to resolve the issue without resorting to a custom kernel?
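    For context, the divider=10 parameter mentioned above goes on the kernel command line. A hedged illustration of where it sits in /boot/grub/grub.conf on a CentOS 5 guest (kernel version and root label are illustrative, not taken from the question):

        title CentOS (2.6.18-128.el5)
                root (hd0,0)
                kernel /vmlinuz-2.6.18-128.el5 ro root=LABEL=/ divider=10
                initrd /initrd-2.6.18-128.el5.img

    Whether the parameter was actually picked up can be verified after boot with cat /proc/cmdline.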

  • How can I keep track of SQL Server updates?

    - by Adrian Grigore
    Hi, if I am not mistaken, SQL Server cannot be updated automatically via the regular Windows update routine. Instead, there are cumulative updates that need to be installed by hand. I assume this is done for security and stability reasons. Is this correct? If so, how can I keep track of new updates without regularly reading SQL Server-related blogs? Is there any low-volume newsletter I can subscribe to (ideally one announcing only critical updates)?

  • IDN and HTTP_HOST

    - by Sandman
    So, when I want to link my users to a specific page I always use (in PHP) "http://" . $_SERVER["HTTP_HOST"] . "/page.php", to be sure that the link points to the page they're currently surfing (and not one of the server aliases). But with IDN names, HTTP_HOST is set to "xn--hemmabst-5za.net" (for example), which of course works but doesn't look very nice. Is there a way to have HTTP_HOST set to the correct IDN name in these cases (in this case, "hemmabäst.net")? I'd rather do it in Apache before it gets to PHP, because otherwise I'd have to replace all my uses of $_SERVER["HTTP_HOST"]. Any ideas?
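    For illustration of the conversion itself on the PHP side (this assumes the intl extension is available; doing it in Apache, as preferred above, would need a different mechanism):

        <?php
        // "xn--hemmabst-5za.net" is the example from the question; idn_to_utf8() decodes the Punycode form
        $host = idn_to_utf8($_SERVER["HTTP_HOST"]); // e.g. "hemmabäst.net"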
