Search Results

Search found 15129 results on 606 pages for 'orientation changes'.

Page 457/606 | < Previous Page | 453 454 455 456 457 458 459 460 461 462 463 464  | Next Page >

  • Trouble installing SSL Certificate on Apache

    - by jahufar
    We have a dedicated server with GoDaddy running Plesk that requires SSL. I've generated the certificate files and created a vhost_ssl.conf (since I can't edit Plesk's default Apache configuration, httpd.include, my vhost_ssl.conf gets Included into httpd.include) that tells Apache where to find the certificate files:

        SSLCertificateFile /usr/local/psa/var/certificates/domain.com.crt
        SSLCertificateKeyFile /usr/local/psa/var/certificates/domain.com.key
        SSLCertificateChainFile /usr/local/psa/var/certificates/sub.class1.server.ca.pem

    When I stop/start Apache, it refuses to start up, and the error_log contains nothing about it either (which is strange). Then I opened httpd.include and found this bit:

        <VirtualHost 208.xxx.xxx.xxx:443>
        ServerName domain.com:443
        ServerAlias www.domain.com
        UseCanonicalName Off
        SSLEngine on
        SSLVerifyClient none
        SSLCertificateFile /usr/local/psa/var/certificates/certagC9054
        Include /var/www/vhosts/domain.com/conf/vhost_ssl.conf

    When I commented out SSLCertificateFile /usr/local/psa/var/certificates/certagC9054 (which is Plesk's own SSL certificate) and restarted Apache, it worked perfectly fine. It seems that Apache does not like multiple SSLCertificateFile directives within the same VirtualHost? As anyone who has worked with Plesk knows, I can't simply remove the SSLCertificateFile directive from httpd.include, since Plesk will overwrite my changes the next time someone uses it - which is why it's in vhost_ssl.conf. So I'm stuck, and this is beyond my meager admin skills. I'd appreciate someone who knows what (s)he's doing telling me what's going on. Thanks in advance.
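    When Apache refuses to start silently like this, two quick checks are worth running (a sketch; the paths match the ones quoted above): validate the merged configuration, including Plesk's generated includes, and confirm the certificate and key actually belong together.

        # syntax-check the full merged config
        apachectl configtest

        # the two hashes should match if the cert and key are a pair
        openssl x509 -noout -modulus -in /usr/local/psa/var/certificates/domain.com.crt | openssl md5
        openssl rsa  -noout -modulus -in /usr/local/psa/var/certificates/domain.com.key | openssl md5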

    Read the article

  • Excel not properly recalculating values

    - by gms8994
    I have an Excel sheet with values in it (the sheet is generated by a custom Perl script, but I don't think that's where the problem lies). In it, I have a formula:

        =SUM(INDIRECT(CONCATENATE(ADDRESS(6,COLUMN()),":",ADDRESS(17,COLUMN()))))

    The purpose of this formula is to give me the SUM() of the cells in the current column, between rows 6 and 17. In the Gnumeric spreadsheet application this works as soon as I open the file. But in Excel (both 2003 and 2007), opening the file gives #VALUE! errors in the fields with this formula, stating that the INDIRECT call with the value $B$6:$B$17 will result in an error. Here's the kink in the issue: if I edit the field (via F2), make no changes, and hit Enter, the values update. Also, if I save the file as .xlsx (Excel 2007 format), the values update upon opening. Unfortunately, I'm not sure that creating an .xlsx is a possibility with the modules I'm using, and many of our clients probably wouldn't be able to use it anyway. Any suggestions? Editing 200+ files every month for each client isn't going to be feasible, so if there's something I'm missing, please let me know.
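    One possible workaround sketch (not tested against the poster's generated files): since INDIRECT is only being used to build "rows 6-17 of the current column", the same range can be expressed without INDIRECT by using INDEX in its reference form, which avoids the volatile text-to-reference step entirely:

        =SUM(INDEX(6:6,COLUMN()):INDEX(17:17,COLUMN()))

    Here each INDEX call returns a cell reference (row 6 or row 17 of the current column), and the two references are joined with ":" into an ordinary range.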

    Read the article

  • SCCM 2012 Clients no longer detecting

    - by user3685428
    Here is the scenario: I had a fully functioning SCCM 2012 site server with the DP, MP, SUP, Application Catalog, etc. roles configured and working. There is only one server on this site. Everything was great, but I was not happy with SUP, so I decided to create a separate WSUS server and configure Windows Updates through GPOs. That setup worked great as well, so I went ahead and removed the SUP role from SCCM and removed the WSUS feature from my SCCM server (they were configured on the same SCCM server). I did not notice any problems right away. A couple of days later I noticed that the OSD deployments were giving errors, and after a couple of hours of trying suggestions from Google I was able to uninstall PXE, make a few changes, and reinstall with WDS to get it working again. Again, I thought everything was fine and continued on. The last couple of days I have noticed that any newly deployed machine, or any machine installing the client, shows in the SCCM console as "No" client. The client machines show as connected to a site, but Software Center shows "IT Organization" instead of our site name like the previous clients. The existing clients all seem to be functioning normally; they still receive application distributions, configuration baselines, etc. Uninstalling, reinstalling, and repairing the client does not fix the problem, and this happens on all new clients. ClientLocation.log shows it connecting to the correct MP. Nothing odd in any of the logs except for ClientMessaging.log, which continuously repeats this line:

        <![LOG[Raising event: instance of CCM_CcmHttp_Status { ClientID = "GUID:0450fde3-ab82-41bf-9c33-87a18113744b"; DateTime = "20140528214824.993000+000"; HostName = "SOUNDWAVE.domain.org"; HRESULT = "0x00000000"; ProcessID = 4092; StatusCode = 0; ThreadID = 3720; }; ]LOG]!><time="16:48:24.994+300" date="05-28-2014" component="CcmMessaging" context="" type="1" thread="3720" file="event.cpp:706">

    Thanks.

    Read the article

  • Static Network configuration with bridge in CentOS 6.2

    - by Kyle
    I have a server with CentOS 6.2 installed, and I want to use it as a VM host to run some Windows installations for development purposes. I want to be able to directly RDP to these Windows Server installations and serve websites from IIS on them, so I figured I would set up bridged networking. I have been struggling with this all morning - usually the result was that when I brought up the bridge interface, all network connectivity to the CentOS box would go away - but I think I finally have that part figured out. However, here's what happens now. The eth0 and br0 interfaces are defined in /etc/sysconfig/network-scripts with ifcfg-eth0 and ifcfg-br0. I DO NOT have ifup, ifdown, or any other files for these interfaces; I have not found out whether they are needed. I can log in and use Firefox to browse the web, but ifconfig reveals that eth0 does not have an IP address, while br0 does. I can actually RDP into the Windows installation, and browse the internet from there as well, but I cannot directly connect (via PuTTY, VNC, nor viewing web pages) to the CentOS box. Any idea what's up?

    ifcfg-eth0:

        DEVICE=eth0
        BOOTPROTO=none
        IPADDR=192.168.1.20
        GATEWAY=192.168.1.1
        NETMASK=255.255.255.0
        NETWORK=192.168.1.0
        ONBOOT=yes
        BRIDGE=br0

    ifcfg-br0:

        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=static
        DNS1=192.168.1.1
        DNS2=8.8.8.8
        GATEWAY=192.168.1.1
        IPADDR=192.168.1.2
        NETMAS=255.255.255.0
        ONBOOT=yes

    I know some of the options are inconsistent (DNS and BOOTPROTO) because I tried changing those in the eth0 file to make it work, and the changes haven't adversely affected web browsing or the other functionality. Thank you!
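    For comparison, in a typical bridged setup on RHEL/CentOS the IP configuration lives only on br0 and eth0 is simply enslaved to the bridge. A sketch of what the two files usually look like (the addresses reuse the poster's values; this is a common baseline, not a verified fix). It is also worth noting that in the quoted files eth0 and br0 carry different addresses (.20 vs .2) and br0 has "NETMAS" rather than "NETMASK"; either could contribute to the odd behaviour.

        # ifcfg-eth0 - no IP settings, just attach to the bridge
        DEVICE=eth0
        ONBOOT=yes
        BRIDGE=br0

        # ifcfg-br0 - carries the single address the host should answer on
        DEVICE=br0
        TYPE=Bridge
        BOOTPROTO=static
        ONBOOT=yes
        IPADDR=192.168.1.20
        NETMASK=255.255.255.0
        GATEWAY=192.168.1.1
        DNS1=192.168.1.1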

    Read the article

  • FastCGI and Apache 500 error intermittently

    - by benkorn1
    I have a FastCGI (mod_fastcgi) problem. It happens every once in a while and does not cause a complete server meltdown, just 500 errors. A couple of details first: I am using APC, so PHP is in control of its own processes, not FastCGI. Also, I have the webroot set as /var/www/html and the fcgi-bin inside it at /var/www/html/fcgi-bin. Here is the Apache error_log entry:

        [Fri Jan 07 10:22:39 2011] [error] [client 50.16.222.82] (4)Interrupted system call: FastCGI: comm with server "/var/www/html/fcgi-bin/php.fcgi" aborted: select() failed, referer: http://www.domain.com/

    I also ran strace on the 'fcgi-pm' process. Here is a snippet from the trace around the time it bombs out:

        21725 gettimeofday({1294420603, 14360}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6503 38*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 96595}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6154 23*C /var/www/html/fcgi-bin/php.fcgi - - 6483 28*", 16384) = 92
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 270744}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 5741 38*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 311502}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6064 32*", 16384) = 46
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 365598}, NULL) = 0
        21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6179 33*C /var/www/html/fcgi-bin/php.fcgi - - 5906 59*", 16384) = 92
        21725 alarm(131) = 0
        21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14])
        21725 alarm(0) = 131
        21725 gettimeofday({1294420603, 454405}, NULL) = 0

    I noticed that the select() call looks the same regardless, but the read() return value changes from 46 to some other number while it is bombing out. Has anyone seen anything like this? Could this be some sort of file locking? Thanks, Ben
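    Since the error points at the /var/www/html/fcgi-bin/php.fcgi wrapper, one frequently suggested thing to check is how that script spawns php-cgi. A typical wrapper looks roughly like the sketch below (values are guesses, not the poster's actual file); if PHP_FCGI_MAX_REQUESTS is set, php-cgi exits and recycles itself after that many requests, and abrupt recycling is a commonly cited culprit for intermittent aborted-connection 500s under mod_fastcgi.

        #!/bin/bash
        # typical php.fcgi wrapper (sketch): php-cgi manages its own children
        # and recycles itself after PHP_FCGI_MAX_REQUESTS requests
        PHP_FCGI_CHILDREN=4
        PHP_FCGI_MAX_REQUESTS=1000
        export PHP_FCGI_CHILDREN PHP_FCGI_MAX_REQUESTS
        exec /usr/bin/php-cgi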

    Read the article

  • Using git through Cygwin on Windows 8

    - by 9point6
    I've got a Windows 8 Developer Preview machine (not sure if it's relevant, but I never had this hassle on Windows 7) and I'm trying to clone a git repo from GitHub. The problem is that my ~/.ssh/id_rsa has 440 permissions and it needs to be 400. I've tried chmodding it, but any changes to the user permissions get reflected in the group permissions (i.e. chmod 600 results in 660, etc.). This appears to be consistent for any file in the whole filesystem. I've tried messing with the ACLs, but to no avail (full control for my user and deny for everyone else resulted in 000). Here are a few outputs to help:

        $ git clone [removed]
        Cloning into [removed]...
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        @ WARNING: UNPROTECTED PRIVATE KEY FILE! @
        @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
        Permissions 0660 for '/home/john/.ssh/id_rsa' are too open.
        It is required that your private key files are NOT accessible by others.
        This private key will be ignored.
        bad permissions: ignore key: /home/john/.ssh/id_rsa
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

        $ ll ~/.ssh
        total 6
        -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
        -rw-rw---- 1 john None  411 Nov 30 19:15 id_rsa.pub
        -rw-rw-r-- 1 john None  407 Nov 30 18:43 known_hosts

        $ chmod -v 400 ~/.ssh/id_rsa
        mode of `/home/john/.ssh/id_rsa' changed from 0440 (r--r-----) to 0400 (r--------)

        $ ll ~/.ssh
        total 6
        -r--r----- 1 john None 1675 Nov 30 19:15 id_rsa
        -rw-rw---- 1 john None  411 Nov 30 19:15 id_rsa.pub
        -rw-rw-r-- 1 john None  407 Nov 30 18:43 known_hosts

        $ set | grep CYGWIN
        CYGWIN='sbmntsec ntsec server ntea'

    I realize I could use msysgit or something, but I'd prefer to be able to do everything from a single terminal.

    Edit: msysgit doesn't work either, for the same reasons.
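    Two workarounds that are often suggested for Cygwin permission mirroring like this (a sketch, not verified on the Windows 8 preview): either stop Cygwin from mapping POSIX modes onto NTFS ACLs by mounting with noacl, or give the key a real group so the group bits stop mirroring the "None" primary group.

        # /etc/fstab entry (sketch; adjust the install path), then restart the Cygwin terminal
        C:/cygwin / ntfs binary,noacl 0 0

        # alternative: assign a real group, then redo the chmod
        chgrp Users ~/.ssh/id_rsa
        chmod 600 ~/.ssh/id_rsa
        chmod 400 ~/.ssh/id_rsa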

    Read the article

  • Hubs/switches taking out switches?

    - by Bart Silverstrim
    Here's the issue: we have a network with a lot of Cisco switches. Someone plugged a hub into the network, and then we started seeing "weird" behavior - errors in communication between clients and servers, network timeouts, dropped connections, etc. It seemed that the hub (or SOHO switch) was somehow particularly freaking out our Cisco 3700-series switches. Disconnect that hub or Netgear-type SOHO switch and things settled down again. We're in the process of setting up a centralized logging server for SNMP and management, etc., to see if we can trap errors or narrow down when someone does this sort of thing without our knowledge, because for the most part things work without issue; we just get freaky oddball incidents on particular switches that don't seem to have any explanation until we find out someone decided to take matters into their own hands and expand the available ports in their room. Without getting into procedure changes, locking down ports, or "in our organization they'd be fired" answers: can someone explain why adding a small switch or hub - not necessarily a SOHO router (even a dumb hub apparently caused the 3700s to freak out) - sending DHCP requests out will cause issues? The boss said it's because the Ciscos get confused when the rogue hub/switch bridges multiple MACs/IPs onto one port on the Cisco switches and they just choke on that, but I thought their MAC address tables should be able to handle multiple machines coming in on one port. Has anyone seen that behavior before and have a clearer explanation of what's happening? I'd like to know for future troubleshooting and better understanding, rather than just waving my hand and saying "you just can't".

    Read the article

  • SSL stops working on IIS7 after a reboot

    - by Mark Seemann
    I have a Windows 2008 server with IIS7. Every time the server reboots, SSL stops working. Normal HTTP requests work fine, but any request to an HTTPS address gives the typical error message in the browser: "Cannot find server or DNS". I can temporarily fix it by opening IIS Manager and bringing up the Bindings window for the website in question. I then select "https", click "Edit", and click "Ok" without making any changes to the settings. After doing this, browsing to https:// works again until the next reboot. This issue looks a lot like the one described here, but according to the Certificates MMC snap-in, the certificate in question does have a private key. I'm also pretty sure that I never installed the certificate in the personal store but imported it straight into the machine store - but it's been a while... There's not a lot in the event log apart from the event ID 36870 also described in the post I linked to. Can anyone help me troubleshoot this issue so that SSL will work even after a server reboot?
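    One thing worth comparing before and after a reboot (a sketch; the thumbprint and appid below are placeholders) is the HTTP.SYS SSL binding itself, since re-saving the binding in IIS Manager is roughly equivalent to re-creating it:

        rem show the certificate bindings HTTP.SYS currently knows about
        netsh http show sslcert

        rem re-create a missing binding by hand (values are placeholders)
        netsh http add sslcert ipport=0.0.0.0:443 certhash=<certificate thumbprint> appid={00000000-0000-0000-0000-000000000000}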

    Read the article

  • I accidentally hijacked my localhost

    - by Zach L
    Opening localhost in the browser points to a local webpage (examplePage) after I played with some config files a while back, and I can't figure out how to restore the default behavior.

    Background: I have XAMPP installed on my Windows 7 machine, and a webpage at c:/xampp/htdocs/examplePage. A couple of weeks ago I was on a mission to get site-root-relative URLs (/resource) to work, so I played around with a bunch of apache/conf files, including httpd.conf and httpd-vhosts.conf, and was also messing with the Windows hosts file. I gave up at some point, didn't document exactly what I did, and have since probably forgotten some of it. Many of my changes stemmed from suggestions in this StackOverflow post.

    What I've tried: I commented out my additions to the hosts file; I turned off XAMPP (thus hopefully negating any Apache config file effect); I reverted to my original DocumentRoot in httpd.conf anyway (xampp/htdocs). localhost still displays examplePage, even with XAMPP turned on (my reverted DocumentRoot isn't taking effect). Does anyone know what I may have done and how I can fix it?

    Update: It's been resolved - thank you everyone so much. In Task Manager there were a couple of instances of httpd.exe (Apache HTTP Server). I ended these, opened XAMPP, and restarted Apache. All references to examplePage in my .conf files that I could find had already been commented out or removed, so I imagine the old configuration was still in effect for some reason, and manually ending the Apache processes fixed it. As a point of interest, it's still a mystery why those processes were running - I cannot reproduce that situation. I must have stumbled upon a XAMPP bug of some sort.
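    For anyone hitting the same thing, the manual steps from the update can be done from an elevated command prompt as well (a sketch):

        rem list any Apache processes still running
        tasklist | findstr /i httpd

        rem stop them, then restart Apache from the XAMPP control panel
        taskkill /IM httpd.exe /F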

    Read the article

  • grepping a substring from a grep result

    - by allentown
    Given a log file, I will usually do something like this:

        grep 'marker-1234' filter_log

    What is the difference between using '', "", or nothing around the pattern? The above grep command will yield many thousands of lines, which is what I want. Within those lines there is usually one chunk of data I am after. Sometimes I use awk to print out the fields I need, but in this case the log format changes, so I can't rely on field position exclusively - not to mention that the actual logged data can push the position forward. To make this concrete, let's say the log line contains an IP address and that is all I am after, so I can later pipe it to sort and uniq and get some tally counts. An example line might be:

        2010-04-08 some logged data, indetermineate chars - [marker-1234] (123.123.123.123) from: [email protected] to [email protected] [stat-xyz9876]

    The first grep command will give me many thousands of lines like the above; from there, I want to pipe it to something, probably sed, which can pull out a pattern within the line and print only that pattern. For this example, the IP address would suffice. I tried, but is sed not able to understand [0-9]{1,3}\. as a pattern? I had to use [0-9][0-9][0-9]\., which yielded strange results until the entire pattern was built out. This is not specific to an IP address - the pattern will change - but I can use it as a learning template. Thank you all.
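    A sketch of the usual approaches (GNU grep/sed assumed): sed's basic regular expressions need the braces escaped (\{1,3\}), or you can switch to extended syntax with -r/-E, and grep -o can print only the matched part. (On the quoting question: single quotes pass the pattern to grep untouched, double quotes still allow $variable expansion, and no quotes leaves the pattern exposed to shell globbing and word splitting.)

        # extract the IPs and tally them
        grep 'marker-1234' filter_log | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -c

        # the same idea with sed: -r enables {1,3} in the pattern
        grep 'marker-1234' filter_log | sed -r 's/.*\(([0-9]{1,3}(\.[0-9]{1,3}){3})\).*/\1/'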

    Read the article

  • OpenWrt vs DD-WRT

    - by Ioan Paul Pirau
    I have a TP-Link WR1043ND router and I want to install one of these two firmwares: OpenWRT or DD-WRT. I read that I can install custom packages and do much more than I can with the original firmware. I would like to ask someone with experience in using both OpenWRT and DD-WRT which one they would recommend and why. To give a few reference points, I'm interested in:

    reliability – network stability, both on cable and wireless, and on the USB drive
    performance – network speed (very important), and also USB drive speed
    configurability – the possibility to add extensions such as a torrent client, FTP, SSH, WWW and SVN servers directly
    ease of use – the ease of installation and configuration of the router
    support/docs – how much information there is if you stumble upon a problem and have to find documentation, or whether there's any free support (but that's a long shot)

    Of course I don't imagine I will find the perfect firmware, or that one is vastly superior to the other. Also, if there's anyone out there who uses one of these firmwares on a TP-Link WR1043ND, it would be great to get some feedback about the impact of the changes from the original firmware. P.S. I'm also open to Tomato if it's the better one.

    Read the article

  • One Comcast Business Gateway, One Router, Two Web Servers

    - by Kevin Scheidt
    I have a Comcast business account with a router and a web server (info) attached. Behind the router there are multiple computers and a second web server (com), which also serves as a file server. (info) has two NICs in it: one connected directly to Comcast and one connected to the router. It needs to serve its websites to the world, but it also needs to be able to see all the internal computers and (com)'s served files. With just one NIC (the one connected to the router, not Comcast), (info) works fine, but no one outside can see it. (com) serves on port 80, and (info) needs to handle port 80 as well. I have two domain names registered and 5 static IPs from Comcast. Right now http://www.graceamazing.com, handled by (com), works fine, and http://www.graceamazing.com:1307, handled by (info), works fine. But as soon as I enable the 2nd NIC in (info), http://www.graceamazing.info runs extremely (horribly) slowly - yet http://www.graceamazing.com:1307 and .com still work fine. (com) has an IP address via the router, 70.89.233.41; (info) has an IP address of 70.89.233.46 via Comcast (2nd NIC) and an internal IP of 192.168.x.100, statically assigned, behind the router. Any suggestions or changes I can make so that http://www.graceamazing.info performs with the same speed it has when going through http://graceamazing.com:1307? Is there a setting I should check / could have missed?

    Read the article

  • subdomain .htaccess redirection via ssh remote port forwarding

    - by Achim
    I'm asking for your help in redirecting a subdomain URL to an SSH remote-forwarded port. The current setup is the following: server A has a local webserver running on port 80. This server is connected to a DSL line or a GPRS connection where the IP address changes often. To avoid a DynDNS setup, we established SSH remote port forwarding to a server B with a static IP address. This is done on server A with the following statement:

        ssh -N -p 80 -g -R 10000:localhost:80 tunneling@<Server B IP>

    So by accessing port 10000 on server B's IP address, all traffic is forwarded to port 80 on server A - this works fine! But to offer a more comfortable URL to the user, I want to hide server B's IP address and offer a subdomain. My domain provider allows adding subdomains and redirections to other servers. In general this works; I've tested it with different servers. But it doesn't work if the destination is the forwarded port on server B. The initial redirection is done, the request is sent to server A, and the response comes back through server B and is shown in the browser - fine. But then the URL in the browser switches away from the subdomain to server B's IP:port, so the user no longer sees the subdomain in the browser's address bar. I've tried this with my provider's subdomain redirection, as well as an .htaccess redirect and a META refresh; the problem always persists. Is there a parameter in the SSH reverse-forwarding setup (I guess this is the place where the fix has to be) to keep the typed-in subdomain URL and not show the IP? Thanks, Achim
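    The described behaviour is inherent to redirects: the browser is told to fetch the other address, so the address bar changes, and no SSH option will alter that. Keeping the subdomain visible usually means proxying rather than redirecting, i.e. a web server on server B accepts requests for the subdomain and relays them to the forwarded port itself. A sketch assuming Apache with mod_proxy on server B (names are placeholders):

        <VirtualHost *:80>
            ServerName sub.example.com
            ProxyPreserveHost On
            ProxyPass        / http://localhost:10000/
            ProxyPassReverse / http://localhost:10000/
        </VirtualHost>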

    Read the article

  • VMware and Windows Activation

    - by Peter M
    Yesterday I installed SlySoft's Virtual CloneDrive in order to mount an ISO for a software installation on my host system (XP Pro SP3). This morning I fired up VMware and made a linked clone of an existing XP VM in order to do some software testing. This is the sort of thing I do all the time, and the base XP VM that I clone from was activated a long, long time ago. The surprise today was that the newly cloned VM was no longer activated, with XP citing major changes in hardware as the reason. I repeated the test with a full clone of the base system and got the same message. I then started the base VM and it still seemed to be activated, but another VM (which I fully cloned from the base VM a long time ago) now started reporting that XP was not activated. At this point I guessed that Virtual CloneDrive might be the source of my hardware differences, so I uninstalled it and rebooted. After this I made a new linked clone and a new full clone of the base VM, and XP did not complain about activation. So I seem to be back to where I was before installing Virtual CloneDrive, with the exception that that one particular XP VM continues to complain about activation (even though 4 or 5 other XP VMs are fat and happy). Now to my questions: Why would adding Virtual CloneDrive to the host system affect XP activation in the VMs? From their point of view I would have thought the environment had not changed, since I had not enabled any new hard drives in their systems - or is adding a drive to the host system enough to upset XP activation? And since this event, one of my fully cloned VMs is still reporting that XP is not activated even though I have removed Virtual CloneDrive. Is there any way to convince XP that it is on the same system as yesterday, or are my only options to re-activate or restore the VM from a previous backup?

    Read the article

  • Google-Bot fell in love with my 404-page

    - by 32bitfloat
    Every day my access log looks something like this:

        66.249.78.140 - - [21/Oct/2013:14:37:00 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /vuqffxiyupdh.html HTTP/1.1" 404 1189 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    or this:

        66.249.78.140 - - [20/Oct/2013:09:25:29 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.75.62 - - [20/Oct/2013:09:25:30 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [20/Oct/2013:09:25:30 +0200] "GET /zjtrtxnsh.html HTTP/1.1" 404 1186 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    The bot requests robots.txt twice and then tries to access a file (zjtrtxnsh.html, vuqffxiyupdh.html, ...) which cannot exist and must return a 404 error. The same procedure happens every day; only the nonexistent HTML filename changes. The content of my robots.txt:

        User-agent: *
        Disallow: /backend
        Sitemap: http://mysitesname.de/sitemap.xml

    The sitemap.xml is readable and valid, so there seems to be no reason why the bot should want to force a 404 error. How should I interpret this behaviour? Does it point to a mistake I've made, or should I ignore it?

    Read the article

  • Host spreads wrong MAC address of router on the WiFi

    - by JavaIsMyIsland
    Strange things are going on in our network. Since yesterday, a host which is actually not on our subnet has been spreading wrong ARP replies on our network - to be precise, only on the WiFi. If I connect my laptop to wired Ethernet, it gets the correct MAC address for the router. My Android phone and my Ubuntu system also get the correct MAC address. So I took a look at Wireshark. When I clear the ARP cache of the Windows machine, the first ARP response is correct and comes from the router, but about 10 ms later another ARP response arrives from a different host on the WiFi. That host changes its IP addresses from time to time, and they look like they are not on our subnet. As a result I cannot use the internet, because DNS stops working. Sometimes the router wins the race condition and the MAC address is set correctly in the ARP cache. At first I thought this was an ARP-poisoning MITM attack, but that doesn't make sense if the packets then don't get routed correctly, does it? I restarted the router but it didn't help. I have no access to the router; otherwise I would change the shared key to make sure there is no intruder on the WiFi.
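    As a stop-gap while investigating (a sketch; the address and MAC below are placeholders - use the router's real values as seen from the wired side), the router's entry can be pinned in the Windows ARP cache so the spoofed replies are ignored:

        rem run in an elevated prompt; classic static ARP entry
        arp -s 192.168.1.1 00-11-22-33-44-55

        rem on newer Windows versions the netsh form is preferred (interface name is a placeholder)
        netsh interface ipv4 add neighbors "Wi-Fi" 192.168.1.1 00-11-22-33-44-55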

    Read the article

  • How to use different scaling for every monitor?

    - by Mikael Koskinen
    One of the new features in Windows 8.1 is "Desktop display scaling", which allows the user to configure scaling per monitor. I've been trying to get this working in the Preview, but with no success. If I configure the scaling, it always affects all of my monitors. I have two monitors: the main one with a higher resolution and the secondary with a "normal" resolution, used in portrait mode. I would like to configure the main monitor's scaling, as the text is currently too small. Here's how things behave on the configuration screen by default: if I adjust the scaling, click Apply, and log in again, everything is bigger - on both of my monitors. I haven't ticked "Let me choose one scaling level for all my displays", but the slider still seems to affect both of them. If I do tick "Let me choose one scaling level", the UI changes to look similar to what we have in Windows 8, and the problem persists: the scaling is applied to both of my monitors. So whether the box is ticked or not, the scaling is always applied to all displays. Any idea how I could get this to work in the Preview? I've read some comments which seem to indicate that it should work, though Paul Thurrott mentioned in his Winsupersite article that he didn't get it to work either.

    Read the article

  • Edit-text-files-over-SSH using a local text editor

    - by Mikko Ohtamaa
    I work in various Linux and UNIX environments, and I'd like to elegantly solve the problem of editing remote configuration files over SSH. Instead of using terminal editors (nano), I'd like to open the file in a local text editor on my desktop (Sublime Text 2). CyberDuck, WinSCP and various other SFTP apps can do this. Using editors over X11 forwarding has also proven problematic, and archaic text editors like Vim or Emacs do not serve my needs well - they could do the job, but I prefer other text editing software. SSH mounts (FUSE) are also problematic unless they can happen on demand, triggered by the remote side. So what I hope to achieve:

    - Have some kind of easily deployable shell script (etc.) which I can copy to the remote server - let's call it mooedit
    - I run the mooedit command on the remote server I am connected to over SSH
    - mooedit sends some kind of signal (over SSH) to my local desktop
    - On my local desktop this signal is captured and it determines: "aha! moo wants to edit a file on server X in folder Y"
    - The file is transferred via SFTP to the local desktop (/tmp)
    - The file is opened in a nice GUI text editor on the local desktop
    - When Save is pressed, the local desktop notices the change and sends the resulting file back to the server via SFTP

    The question is: what signaling mechanisms does SSH provide for this? Any other methods to trigger a local text editor for remote SSH files?
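    One existing implementation of almost exactly this flow is the rmate protocol from TextMate, which Sublime Text can speak through the rsub plugin. A sketch of how it is usually wired up (plugin and script names assumed, not verified against Sublime Text 2 specifically):

        # local side: the editor plugin listens on localhost:52698;
        # forward that port back to the server when connecting
        ssh -R 52698:localhost:52698 user@remote-server

        # remote side: a small rmate script sends the file over the forwarded port
        rmate /etc/nginx/nginx.conf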

    Read the article

  • Multi-IP-address Zimbra server DNS PTR records and spam

    - by David Fraser
    We have a mail server running Zimbra (ZCS 6.0.8). The server has 5 active public IP addresses in the same subnet (.226-.230). I currently have A records for each of these (host0.domain.com..host4.domain.com), with the machine's main name, host.domain.com, pointing to .226. Our host has ended up listed on the SORBS DUHL list (even though it's in a server farm). According to them, you can get removed quickly by checking that your host has an MX record, an A record, and a PTR record that points back to the hostname given in the MX record. I tried setting the PTR records so that each of these addresses resolved back to its A record (i.e. .228 had a PTR to host2.domain.com). However, I then got mail rejected by other servers, because when Postfix (under Zimbra's control) sends out mail it uses the main hostname for the HELO - there doesn't seem to be any way to override it. So the PTR records currently say host.domain.com for all 5 IP addresses. What's the correct way to handle this? Should I have an A record for the domain that points to all the IP addresses (for round-robin handling)? I'm nervous of changes that could cause problems, so I'm wondering what the standard way to handle a multiple-IP-address mail server is.
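    For reference, the usual forward-confirmed reverse DNS arrangement for a multi-homed mail host is to pick the one address that outbound mail actually leaves from and make the MX, the A record, and that address's PTR all agree with the HELO name. A zone sketch using a documentation address rather than the poster's real ones:

        ; forward zone
        domain.com.        IN MX 10 host.domain.com.
        host.domain.com.   IN A     203.0.113.226

        ; reverse zone entry for the sending address only
        226.113.0.203.in-addr.arpa.  IN PTR host.domain.com.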

    Read the article

  • Issue with SSL using HAProxy and Nginx

    - by Ben Chiappetta
    I'm building a highly available site using multiple HAProxy load balancers, Nginx web servers, and MySQL servers. The site needs to be able to survive load balancer or web server nodes going offline without any interruption of service to visitors. Currently I have two boxes running HAProxy sharing a virtual IP using keepalived, which forward to two web servers running Nginx, which then tie into two MySQL boxes using MySQL replication and sharing a virtual IP using heartbeat. Everything is working correctly except for SSL traffic over HAProxy. I'm running version 1.5-dev12 with OpenSSL support compiled in. When I try to navigate to the HAProxy virtual IP over https, I get the message: "The plain HTTP request was sent to HTTPS port". Here's my haproxy.cfg so far, which was mainly assembled from other posts:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            # log 127.0.0.1 local0
            user haproxy
            group haproxy
            daemon
            maxconn 20000

        defaults
            log global
            option dontlognull
            balance leastconn
            clitimeout 60000
            srvtimeout 60000
            contimeout 5000
            retries 3
            option redispatch

        listen front
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option http-server-close
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { is_ssl }
            reqadd X-Proto:\ SSL if { is_ssl }
            server web01 192.168.25.34 check inter 1s
            server web02 192.168.25.32 check inter 1s
            stats enable
            stats uri /stats
            stats realm HAProxy\ Statistics
            stats auth admin:*********

    Any idea why SSL traffic isn't being passed correctly? Also, any other changes you would recommend? I still need to configure logging, so don't worry about that section. Thanks in advance for your help.
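    That error text comes from Nginx, and it usually means HAProxy has already decrypted the request and then handed the plain HTTP on to the backend's HTTPS port (with no port on the server lines, the client's destination port is typically reused). A sketch of the common fix - terminate TLS at HAProxy and talk plain HTTP to Nginx on port 80 (untested against this exact setup; ssl_fc is the standard fetch for "this connection arrived over SSL"):

        frontend www
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option http-server-close
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { ssl_fc }
            default_backend web

        backend web
            mode http
            server web01 192.168.25.34:80 check inter 1s
            server web02 192.168.25.32:80 check inter 1s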

    Read the article

  • Linux Tuning for High Traffic JBoss Server with LDAP Binds

    - by Levi Stanley
    I'm configuring a high-traffic Linux server (Red Hat) and running into a limit I haven't been able to track down. I need to handle a sustained throughput of 300 requests per second using Nginx and JBoss. The point of this server is to run checks on a user's account when that user signs in. Each request goes through Nginx to JBoss (specifically TorqueBox on JBoss AS7, running a Sinatra app) and then makes an LDAP request to bind that user and retrieve several attributes. It is during the bind that these errors occur. I'm able to reproduce this going directly to JBoss, so that rules out Nginx at least. I get a variety of error messages, though oddly JBoss stopped writing to the log file recently. It used to report errors about creating native threads. Now I just see "java.net.SocketException: Connection reset" and "org.apache.http.conn.HttpHostConnectException: Connection to http://my.awesome.server:8080 refused" as responses in JMeter. To the best of my knowledge I have plenty of available file handles, processes, sockets, and ports, yet the issue persists. Unfortunately, I have very little experience tuning servers. I've found a couple of useful documents - the Ipsysctl tutorial 1.0.4 and Linux Tuning - but they are a bit over my head, and simply entering the configuration described in Linux Tuning doesn't fix my issue. Here are the configuration changes I've tried (webproxy is the user that runs Nginx and JBoss):

    /etc/security/limits.conf:

        webproxy soft nofile 65536
        webproxy hard nofile 65536
        webproxy soft nproc 65536
        webproxy hard nproc 65536
        root soft nofile 65536
        root hard nofile 65536
        root soft nproc 65536
        root hard nofile 65536

    First attempt, /etc/sysctl.conf:

        sysctl net.core.somaxconn = 8192
        sysctl net.ipv4.ip_local_port_range = 32768 65535
        sysctl net.ipv4.tcp_fin_timeout = 15
        sysctl net.ipv4.tcp_keepalive_time = 1800
        sysctl net.ipv4.tcp_keepalive_intvl = 35
        sysctl net.ipv4.tcp_tw_recycle = 1
        sysctl net.ipv4.tcp_tw_reuse = 1

    Second attempt, /etc/sysctl.conf:

        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216
        net.core.netdev_max_backlog = 30000
        net.ipv4.tcp_congestion_control=htcp
        net.ipv4.tcp_mtu_probing=1

    Any ideas what might be happening here? Or better yet, are there some good documentation resources designed for beginners?
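    Before tuning further, it can help to confirm the raised limits actually apply to the running JVM and to watch socket state while the JMeter load is on. A quick-check sketch (the process name is a guess):

        # limits of the live JBoss/TorqueBox process, not of the shell you happen to be in
        cat /proc/$(pgrep -f torquebox | head -n1)/limits | grep -E 'open files|processes'

        # socket totals and ephemeral-port pressure during a test run
        ss -s
        netstat -ant | awk '{print $6}' | sort | uniq -c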

    Read the article

  • Passing PATH through sudo

    - by whitequark
    In short: how do I make sudo stop flushing PATH every time? I have some websites deployed on my server (Debian testing) written with Ruby on Rails. I use Mongrel+Nginx to host them, but there is one problem that comes up when I need to restart Mongrel (e.g. after making some changes). All sites are checked into VCS (git, but that is not important) and have their owner and group set to my user, whereas Mongrel runs under the, huh, mongrel user, which is severely restricted in its rights. So Mongrel must be started as root (it can automatically change UID) or as mongrel. To manage Mongrel I use the mongrel_cluster gem, because it allows starting or stopping any number of Mongrel servers with just one command. But it needs the directory /var/lib/gems/1.8/bin to be in PATH: starting it via an absolute path is not enough. Modifying PATH in root's .bashrc changed nothing, and tweaking sudo's env_reset and env_keep didn't help either. So the question: how do I add a directory to PATH, or keep the user's PATH, under sudo?
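    On Debian the PATH seen under sudo is normally dictated by the secure_path setting in the sudoers file rather than by the caller's environment. A sketch of one common fix (edit with visudo; behaviour varies slightly by sudo version):

        # append the gem bin directory to the PATH sudo imposes
        Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin"

        # alternatively, try preserving the caller's PATH
        # (may be overridden wherever secure_path is set or compiled in)
        # Defaults env_keep += "PATH"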

    Read the article

  • Windows Home Server backup error

    - by domen
    I've finally built my WHS, but some other problems have shown up. I've Googled, Binged, and searched SU with no success. The problem is the following: at the moment I've got a Windows 7 laptop and a Windows 7 PC which should be backed up by the WHS. The laptop is backed up just fine with no issues, but when I try to manually back up the PC, after the "backup is starting" message - when the backup service should be monitoring changes on the partitions - the PC gets disconnected from the home network, and thus the backup process is stuck. Disabling/enabling the network adapter gets the PC back on the network. The only thing I've tried is reinstalling the connector software, with no success. I've also downloaded the connector troubleshooter, and the only thing it reports is "DHCP server was not found". I'm not good with networks, so I couldn't figure out what that could indicate (all computers in the network are assigned static IPs). Any ideas what the problem could be? I can provide any additional information; I'm just not sure what may be helpful right now. Thanks.

    Read the article

  • Docs for OpenSSH CA-based certificate authentication

    - by Zoredache
    OpenSSH 5.4 added a new method for certificate authentication (from the release notes):

        * Add support for certificate authentication of users and hosts using a
          new, minimal OpenSSH certificate format (not X.509). Certificates
          contain a public key, identity information and some validity
          constraints and are signed with a standard SSH public key using
          ssh-keygen(1). CA keys may be marked as trusted in authorized_keys
          or via a TrustedUserCAKeys option in sshd_config(5) (for user
          authentication), or in known_hosts (for host authentication).
          Documentation for certificate support may be found in ssh-keygen(1),
          sshd(8) and ssh(1) and a description of the protocol extensions in
          PROTOCOL.certkeys.

    Are there any guides or documentation beyond what is mentioned in the ssh-keygen man page? The man page covers how to generate certificates and use them, but it doesn't really provide much information about the certificate authority setup. For example, can I sign keys with an intermediate CA and have the server trust the parent CA? This comment about the new feature seems to mean that I could set up my servers to trust the CA, then set up a method for signing keys, and then users would not have to publish their individual keys on the server. This also seems to support key expiration, which is great, since getting rid of old/invalid keys is more difficult than it should be. But I am hoping to find more documentation describing the complete configuration - CA, SSH server, and SSH client settings - needed to make this work.
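    For orientation, the basic user-certificate workflow looks roughly like the sketch below, pieced together from the ssh-keygen and sshd_config man pages; the identifier, principal, and validity flag are illustrative and worth checking against the OpenSSH version in use:

        # 1. create the CA keypair (keep ca_key offline/secure)
        ssh-keygen -f ca_key

        # 2. sign a user's existing public key; -I is a key identifier,
        #    -n lists the principals (login names) the cert is valid for,
        #    -V limits the validity period
        ssh-keygen -s ca_key -I alice-2014 -n alice -V +52w /home/alice/.ssh/id_rsa.pub

        # 3. on each server, trust the CA in /etc/ssh/sshd_config:
        #      TrustedUserCAKeys /etc/ssh/ca_key.pub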

    Read the article

  • Windows clients cannot get DNS resolution until you open and close the IPv4 properties page

    - by GC78
    This strange problem started recently. Some Windows clients cannot seem to get DNS resolution to the internet after boot, and sometimes again at some point during the day. Internal hosts are also slow to resolve: trying to ping an internal host by name takes a long time for the hostname to resolve to an IP address, and trying to ping a website by name fails to resolve at all. If you go into the TCP/IPv4 properties, view but change nothing, and then OK/close out of that, the client starts working fine and hostnames resolve quickly. I have seen this happen on both Vista and Windows 7 clients. ipconfig /all on a client experiencing the problem shows everything in order: proper IP address, gateway, DNS server, DNS suffix, etc. ipconfig /flushdns will not fix them, and neither will /release and /renew. The clients get their IP address, mask, and DNS server info from one of two OES DHCP servers that assign addresses in different scopes on the same subnet. The internal DNS server is a separate OES DNS server. The default gateway is not assigned by the OES server but is statically entered at the client (only for those who need internet access for their job). It is a flat network topology. What can I do to get to the bottom of this? It only happens to a few of the client machines, and typically the same ones. It started happening when we made a change to one of the DHCP scopes in iManager. Strangely, this problem only happens to clients that get an IP address from the scope that we didn't make any changes to.
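    Since opening and closing the IPv4 properties page effectively re-binds the stack, one low-risk diagnostic pass on an affected client (a sketch; run from an elevated prompt and reboot afterwards) is to reset the resolver and the IP/Winsock stacks and then compare behaviour:

        ipconfig /flushdns
        ipconfig /registerdns
        netsh winsock reset
        netsh int ip reset resetlog.txt
        rem reboot, then verify which DNS server actually answers:
        nslookup www.example.com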

    Read the article
