Search Results

Search found 27521 results on 1101 pages for 'remote control'.


  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Misses and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference this as a viable solution. When we point the distribution at a single server in our production cluster, CloudFront caches our assets properly. Is it possible that the sticky-session cookie, and the header the load balancer adds for it, could be the issue?

      Cache-Control: no-cache="set-cookie"    //Added by load balancer

    Any ideas? FYI - currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly, in case you try to curl the file below. Example headers:

      curl -I http://static.quick-cdn.com/css/9850999.css
      HTTP/1.0 200 OK
      Accept-Ranges: bytes
      Cache-Control: max-age=3700
      Cache-Control: no-cache="set-cookie"
      Content-Length: 23038
      Content-Type: text/css
      Date: Thu, 12 Apr 2012 23:03:52 GMT
      Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
      Server: Apache/2.2.17 (Ubuntu)
      Vary: Accept-Encoding
      X-Cache: RefreshHit from cloudfront
      X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
      Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
      Connection: close
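
    CloudFront has historically declined to cache responses carrying Cache-Control: no-cache, and the ELB adds no-cache="set-cookie" when sticky sessions are enabled, so that header is a plausible suspect. A quick way to confirm where it is injected (both hostnames below are placeholders):

      # fetch the same asset through the ELB and from an instance directly;
      # if only the ELB response carries no-cache="set-cookie", sticky sessions add it
      curl -sI http://my-elb-1234.us-east-1.elb.amazonaws.com/css/9850999.css | grep -i cache-control
      curl -sI http://ec2-instance.example.com/css/9850999.css | grep -i cache-control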

  • How Can We Create Blackbox Logs for Nginx?

    - by Alan Gutierrez
    There's an article out there, Profiling LAMP Applications with Apache's Blackbox Logs, that describes how to create a log recording a lot of detailed information missing from the common and combined log formats. This information is supposed to help you resolve performance issues. As the author notes, "While the common log-file format (and the combined format) are great for hit tracking, they aren't suitable for getting hardcore performance data." The article describes a "blackbox" log format - like a blackbox flight recorder on an aircraft - that gathers the information used to profile server performance that is missing from the hit-tracking formats: keep-alive status, remote port, child processes, bytes sent, etc.

      LogFormat "%a/%S %X %t \"%r\" %s/%>s %{pid}P/%{tid}P %T/%D %I/%O/%B" blackbox

    I'm trying to recreate as much of the format as possible for Nginx, and would like help filling in the blanks. Here's what the Nginx blackbox format would look like, with the unmapped Apache directives marked by question marks:

      access_log blackbox '$remote_addr/$remote_port X? [$time_local] "$request"'
                          's?/$status $pid/0 T?/D? I?/$bytes_sent/$body_bytes_sent'

    Here's a table of the variables I've been able to map from the Nginx documentation:

      %a      = $remote_addr     - The IP address of the remote client.
      %S      = $remote_port     - The port of the remote client.
      %X      = ?                - Keep-alive status.
      %t      = $time_local      - The start time of the request.
      %r      = $request         - The first line of the request: method, path and protocol.
      %s      = ?                - Status before any redirections.
      %>s     = $status          - Status after any redirections.
      %{pid}P = $pid             - The process id.
      %{tid}P = N/A              - The thread id, which is not applicable to Nginx.
      %T      = ?                - The time in seconds to handle the request.
      %D      = $request_time    - The time in milliseconds to handle the request.
      %I      = ?                - The count of bytes received, including headers.
      %O      = $bytes_sent      - The count of bytes sent, including headers.
      %B      = $body_bytes_sent - The count of bytes sent excluding headers, but with a 0 for none instead of '-'.

    Looking for help filling in the missing variables, or confirmation that the missing variables are in fact unavailable in Nginx.
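
    For what it's worth, a sketch of how the remaining slots might be filled on a reasonably recent nginx ($connection_requests needs 1.1.18 or later). Note that $request_time is in seconds with millisecond resolution, unlike Apache's %D, which is microseconds:

      # blackbox-style format: $request_length ~ %I (bytes in, incl. headers);
      # $connection_requests > 1 hints that keep-alive was used (~ %X);
      # nginx has no pre-redirect status, so %s stays unmapped
      log_format blackbox '$remote_addr/$remote_port $connection_requests [$time_local] "$request" '
                          '$status $pid/0 $request_time '
                          '$request_length/$bytes_sent/$body_bytes_sent';
      access_log /var/log/nginx/blackbox.log blackbox;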

  • How can I switch a running Linux OS from disk to RAM without restarting?

    - by vfclists
    Is it possible to switch to running Linux from RAM or a RAM disk after starting initially from disk? E.g. you need to make an image of your hard disk and FTP it to a remote location; some time later you want the image back, so you start the system from disk as usual and restore the image you FTP'd from the remote location back into place. More like a CloneZilla backup and restore, but without booting the server from CD or USB disk - starting from the normal hard disk. Notes on environment: I should have mentioned it earlier - it is a remotely hosted VM where I cannot boot into a recovery console mode or do a netinstall. It will always boot from the same disk, which means that if there is some serious corruption I can't repair it offline. That is why being able to FTP a previously saved backup into place is so important.
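
    One approach people use for exactly this (re-imaging a remote box with no rescue console) is to build a minimal root in tmpfs, pivot into it, and restart sshd from RAM so the real disk can be overwritten. A very rough outline only - every running daemon must be stopped or restarted from the tmpfs copy first, and a mistake here leaves the box unreachable:

      # outline only: copy a minimal userland into tmpfs and pivot into it
      mkdir /tmp/tmproot
      mount -t tmpfs -o size=1G tmpfs /tmp/tmproot
      mkdir /tmp/tmproot/{proc,sys,dev,run,oldroot}
      cp -ax /bin /sbin /etc /lib /lib64 /usr /var /tmp/tmproot/
      mount --make-rprivate /        # required before pivot_root on modern kernels
      pivot_root /tmp/tmproot /tmp/tmproot/oldroot
      # restart sshd from the RAM copy; the old root (now /oldroot) can then be
      # unmounted and its disk re-imaged, e.g. with dd streamed over the network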

  • rsync OS X to Linux

    - by Nick
    I did a backup to a remote NFS folder with rsync, from a Mac to a remote Debian box. The final backup is 58GB smaller than the original. rsync says that everything was OK and there is nothing to update. Can I trust rsync?

      Macintosh:/Volumes/Data1 root# du -sh Produccion/
      319G    Produccion/

      root@Disketera:/mnt/soho_storage/samba/shares# du -sh Produccion/
      260G    Produccion/

    I'm using rsync -av --stats /Volumes/Data1/Produccion/ /mnt/red/ (/mnt/red is my samba mountpoint). Some folders that differ:

      root@Disketera:/mnt/soho_storage/samba/shares/Produccion/tiposok# du -sh *
      0       IndoSanBol
      0       IndoSans-Bold
      0       IndoSans-Italic
      0       IndoSans-Light
      0       IndoSans-Regular
      40K     PalatinoLTStd-Black.otf
      40K     PalatinoLTStd-BlackItalic.otf
      40K     PalatinoLTStd-Bold.otf
      44K     PalatinoLTStd-BoldItalic.otf
      44K     PalatinoLTStd-Italic.otf
      40K     PalatinoLTStd-Light.otf
      40K     PalatinoLTStd-LightItalic.otf
      40K     PalatinoLTStd-Medium.otf
      40K     PalatinoLTStd-MediumItalic.otf
      56K     PalatinoLTStd-Roman.otf
      12K     TCL IndoSans_mac

      Macintosh:/Volumes/Data1/Produccion/tiposok root# du -sh *
      36K     IndoSanBol
      40K     IndoSans-Bold
      36K     IndoSans-Italic
      36K     IndoSans-Light
      36K     IndoSans-Regular
      40K     PalatinoLTStd-Black.otf
      40K     PalatinoLTStd-BlackItalic.otf
      40K     PalatinoLTStd-Bold.otf
      44K     PalatinoLTStd-BoldItalic.otf
      44K     PalatinoLTStd-Italic.otf
      40K     PalatinoLTStd-Light.otf
      40K     PalatinoLTStd-LightItalic.otf
      40K     PalatinoLTStd-Medium.otf
      40K     PalatinoLTStd-MediumItalic.otf
      56K     PalatinoLTStd-Roman.otf
      160K    TCL IndoSans_mac
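
    du measures allocated blocks, which differ between HFS+ and the filesystem backing the share, so a raw du comparison can mislead - though the 0-byte font files above do look like lost resource forks, which plain rsync does not carry. A few checks that compare content rather than block usage (--apparent-size needs GNU du, so run that on the Debian side):

      # apparent file sizes instead of allocated blocks (GNU du only)
      du -sh --apparent-size Produccion/
      # compare file counts on both ends
      find Produccion/ -type f | wc -l
      # dry-run with checksums: lists anything whose content actually differs
      rsync -avcn --stats /Volumes/Data1/Produccion/ /mnt/red/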

  • Rollback driver in Windows via command line

    - by ultrasawblade
    I have a remote system on which I've updated the Nvidia graphics driver, but now RDPDD.dll - which I'm guessing stands for RDP Display Driver - will not load (found this out via Event Viewer). I can use Sysinternals' PsExec to execute commands, though. I'm sure something has gone very wrong, and the user of this system (a CAD engineer) won't be able to use his system normally. I've tried two remote reboots and that has not resolved the issue. So I would like to use the "Roll Back" option for the Nvidia driver in Device Manager. Is there a command-line way of doing this?
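
    Device Manager's "Roll Back" button has no direct command-line twin, but if the previous driver package's .inf is still on disk, Microsoft's devcon.exe can reinstall it over PsExec. A sketch - the hardware ID and .inf path below are placeholders to be looked up first:

      rem locate the display adapter's hardware ID (10DE is Nvidia's PCI vendor ID)
      devcon find "PCI\VEN_10DE*"
      rem point the device back at the older driver package
      devcon update C:\drivers\old\nv_disp.inf "PCI\VEN_10DE&DEV_XXXX"
      rem confirm the device is running
      devcon status "PCI\VEN_10DE*"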

  • Performance problems with RDC to our Win2008 R2 Terminal Server

    - by user40021
    Hello, we have a new Win2008 R2 box (2x Xeon X5570, 32GB RAM) on which I have installed the Terminal Server roles; that all went OK. Currently 5 users are working on this server, but they all get very poor performance in their remote desktop sessions: sometimes it takes 1-2 seconds until keyboard input is processed and shown. The clients run RDP v6.1. We also have an older TS server (Windows 2003 with an Opteron CPU) where about 20 users work at the same time, and there I do not have this input lag in the remote desktop sessions. Both servers are connected to the same gigabit switch. Any ideas what could cause the trouble on the Win2008 R2 box? Many thanks in advance. Regards, Chris
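
    One frequently cited culprit for laggy RDP sessions on the 2008-era TCP stack is receive-window auto-tuning; disabling it is a quick, reversible test (run on the server, then reconnect a session):

      rem disable receive-window auto-tuning as a test
      netsh interface tcp set global autotuninglevel=disabled
      rem restore the default once done testing
      netsh interface tcp set global autotuninglevel=normal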

  • Windows Server 2008 R2 loses ability to connect to network share

    - by JamesB
    I could sure use some help with this one: I've got two Windows Server 2008 R2 x64 Terminal Servers, as well as several 2003 servers (DNS / WINS / AD / DC). Every now and then the two 2008 boxes get into a mode where you can't map a drive to a random server - random in the sense that it's not always the same server you can't map to. Here is a summary of what works and what doesn't:

      net view \\servername    - sometimes works, sometimes not
      net view \\FQDN          - always works
      net view \\IPAddress     - always works
      ping servername          - sometimes works, sometimes not
      ping FQDN                - always works
      ping IPAddress           - always works

    I've been looking all over for a solution to this. It sure seems like Microsoft would have a hotfix by now. The kicker is that it sometimes works great, especially after a reboot: it may run fine for two weeks, then all of a sudden fail to resolve the remote server name, stay that way for a few days, then start working again. Also, while one of them is in this mode, the other servers have no problem getting there - it's just these 2008 R2 Terminal Servers. Setting a static entry in the Hosts file and LMHOSTS does not make it work. All servers have static IPs and are registered in DNS and WINS just fine. Here is a long thread on MS Technet describing the exact same problem, but without a good solution. Their workaround (from June of 2010):

      Good news - a hotfix is in the works and a workaround has been identified: Root cause is that since this is SMB1, all user sessions are on a single TCP connection to the remote server. The first user to initiate a connection to the remote SMB server has their logon ID added to the structure defining the connection. If that user logs off, all subsequent uses of that TCP session fail, as the logon ID is no longer valid. As a workaround, to keep the issue from happening, have users disconnect their Terminal Server sessions rather than log off.

    Any word from anyone out there about a solution? Any help would sure be appreciated. Thanks, James
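
    Since the FQDN and IP always work while the flat name intermittently fails, it may also be worth ruling out NetBIOS/WINS and resolver-cache trouble while the problem is live, even given the SMB explanation quoted above (all built-in commands):

      rem dump the NetBIOS name cache - is a stale mapping for the failing name here?
      nbtstat -c
      rem release and re-register this box's NetBIOS names with WINS
      nbtstat -RR
      rem inspect and then clear the DNS resolver cache, then retry the flat name
      ipconfig /displaydns
      ipconfig /flushdns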

  • Leave Windows Session Logged On

    - by Kyle Brandt
    Is it a bad idea, for any reason, to leave accounts logged on to Windows remote desktop sessions - that is, instead of logging off, just closing the session so it locks? In this case, the limited number of remote desktop connections is not an issue. I am just wondering if anyone has seen sessions leak memory over time, or maybe security issues with doing this, etc. I could see that if programs were left open they might hold on to or leak memory, but has anyone seen this with Microsoft software such as Control Panels, Management Consoles, and Exchange System Administrator?
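
    If sessions are left disconnected as a policy, a periodic audit keeps the stragglers in check; the built-in tools are enough (server name and session ID below are illustrative):

      rem list active and disconnected sessions on a terminal server
      query session /server:TS01
      rem log a stale session off by its ID
      logoff 3 /server:TS01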

  • "Access is Denied" when executing application from Command Prompt

    - by xpda
    Today when I tried to run an old DOS utility from the XP command prompt, I got the message "Access is Denied." Then I found that most of the DOS utilities would not run, even though I have "full control" over them. They worked just fine a few weeks ago, and I have not made any OS changes other than Windows updates. Then I tried running edlin.exe and edit.com from the Windows\system32 folder. Same result: "Access is Denied." I tried running these applications from Windows Explorer and got the message "Windows cannot access the specified device, path, or file. You may not have the appropriate permissions to access the item." I am logged in as a member of Administrators and have full control over these files. I tried logging in as THE Administrator, with no change. I checked the security settings on the files, and I have full control over all of them. I have tried copying the files to different drives, booting in safe mode, and running without antivirus and firewall, all with no change. Does anybody know what could cause this?
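
    One possibility that matches "full NTFS rights but Access is Denied everywhere" is a Software Restriction Policy that arrived via an update or a security tool. Two read-only checks (paths are the standard locations):

      rem confirm the NTFS ACL really does grant you access
      cacls %windir%\system32\edit.com
      rem if this key exists and contains rules, a Software Restriction Policy is active
      reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\Safer\CodeIdentifiers" /s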

  • Custom grub config hangs at the prompt

    - by drecute
    Please, I need help with this custom grub config:

      default=0
      timeout=20
      fallback=1

      title Remote Install
              root (hd0,0)
              kernel /vmlinuz_remote lang=en_US keymap=us ks=nfs:192.168.128.42:/tftpboot/Kickstart/ks.cfg ksdevice=00:1A:64:22:32:4B headless xfs panic=60
              initrd /initrd_remote.img

    I have three grub entries and have been able to make "Remote Install" the default. At the moment it boots up but hangs at the prompt. Grub version: 0.98. The other two entries, which exist after a successful installation and update of the kernel, are:

      splashimage=(hd0,0)/grub/splash.xpm.gz
      hiddenmenu

      title CentOS (2.6.32-279.9.1.el6.x86_64)
              root (hd0,0)
              kernel /vmlinuz-2.6.32-279.9.1.el6.x86_64 ro root=/dev/mapper/vg_serverprisa-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_serverprisa/lv_swap rd_NO_MD rd_LVM_LV=vg_serverprisa/lv_root SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
              initrd /initramfs-2.6.32-279.9.1.el6.x86_64.img

      title CentOS (2.6.32-279.el6.x86_64)
              root (hd0,0)
              kernel /vmlinuz-2.6.32-279.el6.x86_64 ro root=/dev/mapper/vg_serverprisa-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_serverprisa/lv_swap rd_NO_MD rd_LVM_LV=vg_serverprisa/lv_root SYSFONT=latarcyrheb-sun16 crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet
              initrd /initramfs-2.6.32-279.el6.x86_64.img
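
    Before digging deeper, it may be worth confirming that both files named in the custom entry actually exist where (hd0,0) expects them - a hang at boot is often just a bad kernel/initrd path:

      # paths in the entry are relative to (hd0,0); if /boot is its own
      # partition, the files must live at its top level
      ls -l /boot/vmlinuz_remote /boot/initrd_remote.img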

  • How do I use XQuartz with ssh on OS X?

    - by cwd
    I've downloaded the latest stable version of XQuartz on my Snow Leopard machine, and I'm trying to make an ssh connection with X forwarding, but X11 keeps opening. How can I get OS X to use XQuartz? The situation: I had X11 installed; I downloaded and installed XQuartz; X11 is not open or running; XQuartz is open and running. I try to connect to a remote system using iTerm2:

      ssh user@remote -X

    X11 opens. XQuartz is still open, but I doubt it is doing anything. I also tried moving X11 to the trash, but then the ssh connection will not complete, even though XQuartz is open. I also get two warnings which I don't understand how to fix, even after reading the ssh man page:

      Warning: untrusted X11 forwarding setup failed: xauth key data not generated
      Warning: No xauth data; using fake authentication data for X11 forwarding.
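
    On the two warnings, at least: they usually disappear with trusted X11 forwarding, either per invocation or via the client config (the host alias is illustrative):

      # trusted forwarding instead of -X
      ssh -Y user@remote
      # or persistently, in ~/.ssh/config on the Mac:
      #   Host remote
      #       ForwardX11 yes
      #       ForwardX11Trusted yes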

  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the HTTP headers I send. I tried with this config (taken from the docs) but it didn't work.

    /etc/nginx/nginx.conf:

      fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
      fastcgi_cache_key "$scheme$request_method$host$request_uri";

    /etc/nginx/sites-available/website:

      server {
          fastcgi_cache website;
          #fastcgi_cache_valid 200 302 1h;
          #fastcgi_cache_valid 301 1d;
          #fastcgi_cache_valid any 1m;
          #fastcgi_cache_min_uses 1;
          #fastcgi_cache_use_stale error timeout invalid_header http_503;
          add_header X-Cache $upstream_cache_status;
      }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get a HIT, but I don't want those "dumb" settings; I need to control caching from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 secs. Instead, nginx will store it for 1h because of those directives. I was also wondering whether I should try proxy_cache; I am not sure what the difference is. The docs for both the fastcgi and proxy modules say this:

      The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48,
      Cache-Control: private and no-store only since 0.7.66, though. Vary handling is
      not implemented.

    nginx version: nginx/1.1.19. Any thoughts? PS: I also have the reverse proxy that is offered by Symfony2 (which I turn off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
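
    Two things worth checking here: make sure no fastcgi_ignore_headers directive is set anywhere (it disables exactly this header honoring), and watch the X-Cache header added above across repeated requests (the URL is a placeholder):

      # with the backend sending "Cache-Control: public, s-maxage=10":
      # the first request should log MISS, a second within 10s should log HIT
      curl -sI http://example.com/some-page | grep -i -e x-cache -e cache-control
      sleep 2
      curl -sI http://example.com/some-page | grep -i -e x-cache -e cache-control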

  • Datacenter Backup Strategy

    - by EasyEcho
    What are common approaches to backup solutions in remote data centers? I am already familiar with general backup principles and have a very good backup strategy for our local data center, but am having great difficulty extending it to a remote data center. We currently do a full backup on Friday, differentials Monday through Thursday, and rotate offsite Friday morning... rinse and repeat week after week. BTW, we use disks and have been very happy with this approach. We could buy a large storage server and back everything up to it, but this solution doesn't give you offsite. We could encrypt and upload to Amazon or some other online storage, but that would take a large amount of time given the data, and would be rather expensive, paying for the bandwidth leaving the data center and arriving at Amazon. We could drive to the data center every Friday and continue to rotate disks as we do now, but that just seems old-fashioned. What am I missing? Are there better options?
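
    One middle ground between driving disks around and paying for full uploads is a nightly rsync of the backup sets between sites: after the first seeding run only changed data crosses the wire, and the bandwidth can be capped. A sketch (paths, host, and the cap are illustrative):

      # push the local backup store to the other data center over ssh,
      # transferring only deltas and limiting bandwidth to ~5 MB/s (--bwlimit is KB/s)
      rsync -az --delete --bwlimit=5000 -e ssh /backups/ backup@dr-site.example.com:/backups/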

  • vi visual mode doesn't work

    - by BobMarley
    I'm running vim (7.0.237) after sshing to a remote CentOS box, and it just won't enter visual mode. When I press 'v', it just beeps and does nothing. I'm running Ubuntu with GNOME Terminal, and the local copy of vi works fine, so I don't see how this could be a problem with the terminal. I have the same .vimrc file on the local and remote machines, and the only settings are set nocompatible and set tabstop=4. I'm at a total loss here - any ideas?
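
    One thing worth ruling out: on CentOS, /bin/vi is often the cut-down vim-minimal build rather than full vim, and a minimal build can behave oddly regardless of your .vimrc. A quick check (the install assumes yum):

      # what is the remote 'vi', and which features was it built with?
      which vi
      vi --version | head -n 3    # feature flags show as +visual / -visual etc.
      # the full-featured build lives in vim-enhanced; then invoke it as 'vim'
      yum install vim-enhanced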

  • Just how slow should my VPN be?

    - by David Heggie
    I have a VPN set up between a remote office with two ADSL connections (8Mb downstream, 512Kb upstream) and a main office with a 10Mb EFM connection (10Mb both up and down). The VPN is an IPsec connection using a DrayTek 2930 router for the ADSL and a DrayTek 3200 router at the EFM end. However, I'm unable to get speeds over this connection (tested with iperf) of anything over 600Kb or so down from the main office (traffic will pretty much always flow from the main office to the remote office). Whilst I realise that there is an overhead and I'm never going to get the "full" bandwidth available over this VPN, I'd like to think there's something I should be looking at that may help improve it. I've tried using the DrayTek's built-in "VPN Trunking" features, which are supposed to load-balance connections, but this doesn't seem to improve matters much. I guess my question is: is this the kind of performance I should expect from this kind of setup, and I'll just have to live with it, or should I be able to squeeze something more out of it through some VPN magic?
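
    It can help to separate per-flow throughput from aggregate capacity before blaming IPsec overhead; roughly (addresses are placeholders):

      # on the remote-office end
      iperf -s
      # from the main office: one stream, then four in parallel - if the parallel
      # run gets much closer to the line rate, per-connection limits or the way
      # the trunking hashes flows are the bottleneck, not the tunnel itself
      iperf -c 10.0.1.2 -t 30
      iperf -c 10.0.1.2 -t 30 -P 4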

  • Terminal Server Install -- Add/Remove Programs Message on Non-Administrator Machines

    - by Brandon
    First time poster... My company uses Terminal Services for one of our remote offices; data is kept on a different server. The OS is Windows Server 2003. Last week they needed to purchase a program and have it installed on the terminal server, which was done: an administrator session was started, the TS was put into install mode, the program was installed, and the TS was put back into execute mode. The install did not require a reboot. Since then, when any user who does not have administrator privileges starts a remote session and logs in, they get a security message that says "Add / Remove programs has been restricted. Please contact your system administrator." The only option is to click OK, which everyone does, and after that there are no issues. Anyone have any idea why this is happening and how to fix it? Thanks!
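
    Two quick things worth checking: that the server really is back in execute mode, and whether the installer dropped a policy value restricting Add/Remove Programs into the per-user hive (run the second command as an affected user; both commands are built in):

      rem should report that application EXECUTE mode is enabled
      change user /query
      rem the NoAddRemovePrograms policy value produces exactly this kind of message
      reg query "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Uninstall" /v NoAddRemovePrograms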

  • Will Windows 7 Home Premium access company domain?

    - by neurino
    I'm going to buy three lowest-cost-possible PCs for new trainees starting in our company. I found some HP notebooks with Windows 7 Home Premium installed. Will users be able to access the company Windows domain (i.e. to log in as MY_COMPANY\username)? Or do I need Windows 7 Pro? Which functionalities are missing in the Home version? Remote Desktop? Edit: about sharing folders - with my Linux machine, using my domain user and password, I can join the Samba shared folders and printers, and this could be enough for our needs. Everything the users need is: shared domain folders, shared domain printers, and remote desktop to access the server remotely.

  • Hopping a VPN Tunnel

    - by lellouch
    My central office and remote offices are connected to each other over site-to-site IPsec VPN. We use FortiGate firewalls and everything is working fine. The central office is also connected to another company's network over IPsec VPN; in this situation everything is also fine, and employees at the central office are able to reach the other company's resources without problems. Now I want the employees working in our remote office to be able to reach the other company's network via the central office, without creating new VPN tunnels. http://imgur.com/ozrXfGv How can I do that? Thanks for your answers in advance.

  • Using Quest AD cmdlets in an imported session

    - by ASTX813
    We are trying to use remote PowerShell against our Exchange system:

      $rs = New-PSSession -ConnectionUri <uri> -ConfigurationName Microsoft.Exchange -Authentication Basic -Credential <username> -AllowRedirection
      Import-PSSession $rs

    After these commands we can run Exchange cmdlets and all is well. However, we're unable to run any Quest Active Directory cmdlets. Yes, Quest is installed on the remote machine (as well as on our local machines), and yes, we are able to run those commands when running PowerShell locally on the server. I tried -AllowClobber, but that didn't have an effect. Is there a way to get access to QAD?
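
    One thing that may explain it: Import-PSSession only imports what the remote session configuration exposes, and Microsoft.Exchange deliberately exposes nothing but Exchange cmdlets, so the Quest snap-in on the server is never visible through it. Since Quest is installed locally too, a sketch of the usual workaround (snap-in name as Quest ships it; the user name is a placeholder):

      # load the Quest AD snap-in into the *local* session,
      # alongside the imported Exchange cmdlets
      Add-PSSnapin Quest.ActiveRoles.ADManagement
      # Quest cmdlets now run locally; Exchange cmdlets still proxy to the server
      Get-QADUser someuser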

  • Why do some machines respond with many RST packets instead of RST-ACK to refuse a connection?

    - by Michael J. Gray
    I have recently been trying to track down a problem with one of our systems, and have noticed that it is simply not allowed to connect to a remote machine. The remote machine (not controlled by us) responds to our request for a connection with many TCP RST packets, on different ports (26469, 26497, 26498) than the one we originated on (53). At one point it simply wouldn't let up, and flooded us with about 10 packets/second of nothing but RSTs on those obscure high ports for an hour or two. Out of the thousands of nodes we're connecting to, this is the only one ever to show this behavior. What could possibly cause this? Edit: Below was a screenshot of Wireshark when it happened. I don't have the actual dump anymore and can't reproduce this specific scenario every time. Basically, we sent a SYN and immediately got a RST on an odd port, so we responded with a RST, and it just kept going back and forth.
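
    If it happens again, a narrow capture of just the RST traffic is easier to reason about than a full trace (interface name and address are placeholders):

      # record only RST-flagged segments exchanged with the odd host
      tcpdump -nn -i eth0 'host 203.0.113.7 and tcp[tcpflags] & tcp-rst != 0' -w rst-flood.pcap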

  • getaddrinfo(3) failed

    - by user101289
    I'm trying to connect to a webservice using a PHP wrapper (which is using curl under the covers). On my local Linux machine running PHP 5.3 it works perfectly. However, when I move to a remote server (also running PHP 5.3 on Linux), the call to the webservice URL returns:

      getaddrinfo(3) failed for http://server.host.com:8080/login

    I get a similar error from a ping on the remote host:

      ping: unknown host http://server.host.com:8080/login

    But when I issue a curl request from the command line, it returns the expected response. Can anyone shed any light on this issue? Thanks!
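
    The two errors point the same way: getaddrinfo(3) resolves host names, and both the PHP wrapper and ping are being handed the whole URL rather than just the host part - curl succeeds because it splits the URL itself. To illustrate:

      # fails: the resolver receives a URL, not a hostname
      ping http://server.host.com:8080/login
      # works: only the host name reaches the resolver
      ping -c 3 server.host.com
      curl -v http://server.host.com:8080/login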

  • Deleting slow on X11 emacs

    - by Malvolio
    I'm running GNU Emacs 21.4.1 on a remote Linux (CentOS) box, using my MacBook as the X server. It works fine, unless I try to delete a word, line, or region - then it locks up for 30 seconds or so. It sounds like a minor thing, but you realize how often you delete something when you have to stop for 30 seconds every time. My theory is that Emacs is trying to put the text in the X server cut-and-paste buffer, which is trying to put it in the OS X cut-and-paste buffer, and somewhere along the way the process blocks until it times out. (My only evidence for this theory is that (a) copy-region behaves the same way and (b) deleted text doesn't show up in the buffer.) Any suggestions appreciated. Edit: (setq interprogram-cut-function nil) fixed me right up, which makes perfect sense. Thanks, Trey.

  • How to duplicate a backup set from one media server to another

    - by MathematicalOrchid
    I really honestly can't figure out how to do this. It's easy enough to open Backup Exec and tell it to duplicate the data on one local device onto another local device. What I cannot figure out is how to make it duplicate data from one local device to a remote device. I can connect to the remote BE server, but then I can only access the remote devices. I can connect to the local BE server, but then I can only access the local devices. I can't figure out how to get access to both local and remote devices simultaneously. Symantec Backup Exec 12.5 for Windows, in case it matters.

  • How to manage a large number of desktop VMs?

    - by symcbean
    I'm looking at the feasibility of providing remote access to multiple virtual machines. The VMs themselves will provide user desktops. To make the best use of the available resources, I'd like the VMs to hibernate when the user disconnects, which implies being able to start them up when a user connects. Ideally each user would 'own' a VM image - but if not, then I'd require that the session be terminated. Obviously this requires the remote-access protocol to be tied into the VM management. Is there anything out there that provides such functionality? (Extra credit for open protocols! ;)
