Search Results

Search found 11954 results on 479 pages for 'gets'.

  • New build won't boot: some fans turning, no beep or video

    - by Dave
    When my new system is powered up, the case fan and power supply fans turn fine. The CPU fan twitches but never gets going, although I've heard that with AMD CPUs and Gigabyte motherboards that is not necessarily a problem. The hard drive is spinning. However, there is absolutely no indication that anything else is happening. The motherboard, as far as I can tell, does not have an internal speaker, but I harvested one from another machine and plugged it in, and there are still no beeps at all. The monitor screen stays black on both the integrated VGA and DVI. This is a brand-new build and has never successfully booted. My parts are:
      - AMD Athlon II X2 245 Regor 2.9GHz Socket AM3 65W Dual-Core Processor, Model ADX245OCGQBOX (includes CPU cooler)
      - GIGABYTE GA-MA785GPMT-UD2H AM3 AMD 785G HDMI Micro ATX Motherboard
      - G.SKILL Ripjaws Series 4GB (2 x 2GB) 240-Pin DDR3 1333 (PC3 10666) Desktop Memory, Model F3-10666CL8D-4GBRM
      - CORSAIR CMPSU-400CX 400W ATX12V V2.2 80 PLUS Certified Power Supply
      - SAMSUNG Spinpoint F3 HD502HJ 500GB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive
      - COOLER MASTER Elite 341 RC-341C-KKN1-GP Black Steel MicroATX Mid Tower Case
    I also have a DVD burner, but the system acts the same whether it is plugged in or not. I'm using the onboard video. What I've tried so far: I've switched power supplies, with no difference. I've tried different monitors (all of which work on other machines), with no difference. I have tried putting in one memory module at a time, with no difference. I have tried the absolute minimum I can think of (power supply into motherboard, power button only plugged into the front panel, CPU fan plugged in), with no difference. I appreciate any ideas anyone might have. Do I need to RMA the motherboard? This is my first build, so there might be something obvious. I was very careful about static during assembly; I'm confident nothing was zapped.

  • How browsers handle multiple IPs

    - by Sandman4
    Can someone direct me to information on exact browser behavior when a browser gets multiple A records for a given hostname (say ip1 and ip2) and one of them is not accessible? I am interested in EXACT details, like (but not limited to):
      - Will the browser get both IPs from the OS, or only one?
      - Which IP will the browser try first (random, or always the first one)?
      - Now, let's say the browser started with the failed ip1. For how long will it keep trying ip1?
      - If the user hits "stop" while it waits for ip1 and then clicks refresh, which IP will the browser try?
      - What happens when the attempt times out: will it start trying ip2, or give an error? (And if an error, which IP will it try when the user clicks refresh?)
      - When the user clicks refresh, will any browser attempt a new DNS lookup?
      - Now assume the browser tried the working ip2 first. For the next page request, will it still use ip2, or might it randomly switch IPs?
      - For how long do browsers keep IPs in their cache?
      - When a browser sends a new DNS request and gets the SAME IPs, will it CONTINUE to use the same known-to-be-working IP, or does the process start from scratch so that it may try either of the two?
    Of course this may all be browser dependent, and may also vary between versions and platforms; I'd be happy to have the maximum of details. The purpose of this: I'm trying to understand what exactly users will experience when round-robin DNS is used and one of the hosts fails. Please, I'm NOT asking about how bad DNS load balancing is, and please refrain from answering "don't do it", "it's a bad idea", "you need heartbeat/proxy/BGP/whatever" and so on.
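    For reference, a quick way to see what actually comes back for a round-robin name, both straight from DNS and through the OS resolver path that browsers use (the hostname below is a placeholder):
      # all A records as DNS returns them
      dig +short www.example.com A
      # addresses as getaddrinfo() hands them to applications
      getent ahosts www.example.com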

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public and give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:
      <VirtualHost *:80>
        ServerName domain.com
        DocumentRoot /home/user/apps/testapp/public
        RewriteEngine On
        RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
        RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
        RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
      </VirtualHost>
    This serves the Rails application fine but gives a 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
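    One thing worth noting: multiple RewriteCond lines are ANDed by default, and a path can never be both a regular file (-f) and a directory (-d), so the pair of conditions above can never both match. A minimal, untested sketch of the likely intent, with an [OR] flag and the same paths as above:
      RewriteEngine On
      # serve from public_html when a matching file OR directory exists there
      RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
      RewriteCond /home/user/public_html%{REQUEST_URI} -d
      RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
      # everything else falls through to Passenger and the Rails app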

  • How to effectively have fewer php-cgi processes running?

    - by João Pinto Jerónimo
    My server is a Linode 512, and on it I run a WordPress MU install with 3 websites (they don't get a lot of visitors) and a couple of NodeJS apps. I needed to switch to lighttpd because Apache 2 was using about 59% of the server's RAM, but now I have php-cgi processes taking up about 43.6% of it: most often 2 processes use 16.5% of the RAM each, 4 processes use 1.8% each, and 4 more use 0.8% each. How can I have fewer of these processes? I'm almost sure they're not all needed for the traffic this server gets... I tried only allowing 2 children, but I still have those 10... This is my fastcgi.server section in lighttpd.conf:
      fastcgi.server = ( ".php" =>
        ( "localhost" =>
          (
            "socket"   => "/var/run/lighttpd/php-fastcgi.socket",
            "bin-path" => "/usr/bin/php-cgi",
            "bin-environment" => (
              "PHP_FCGI_CHILDREN"     => "2",
              "PHP_FCGI_MAX_REQUESTS" => "4000"
            )
          )
        )
      )
    What else can I do to tune lighttpd to use less RAM?
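    For what it's worth, lighttpd multiplies processes: it spawns max-procs watcher processes (default 4), and each watcher forks PHP_FCGI_CHILDREN workers, which is why capping the children alone still leaves around ten processes. A sketch that limits the total to one watcher plus two workers (values are a guess for a low-traffic box):
      fastcgi.server = ( ".php" =>
        ( "localhost" =>
          (
            "socket"    => "/var/run/lighttpd/php-fastcgi.socket",
            "bin-path"  => "/usr/bin/php-cgi",
            "max-procs" => 1,                  # one watcher process
            "bin-environment" => (
              "PHP_FCGI_CHILDREN"     => "2",  # two workers under it
              "PHP_FCGI_MAX_REQUESTS" => "4000"
            )
          )
        )
      )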

  • mount multiple folders with nfs4 on centos

    - by microchasm
    I'm trying to get nfs4 working here. On Machine 1 (the server) I have a folder, and in it two other folders that I'm trying to share independently:
      /shared/folder1
      /shared/folder2
    Problem is, I can't seem to figure out how to mount the folders independently on the client.
    (Machine 1 - server) /etc/exports:
      /var/shared/folder1 192.168.200.101(rw,fsid=0,sync)
      /var/shared/folder2 192.168.200.101(rw,fsid=0,sync)
    followed by exportfs -ra.
    (Machine 2 - client) /etc/fstab:
      192.168.200.201:/folder1/ /home/nfsmnt/folder1 nfs4 rw 0 0
    Then mounting it fails:
      mount /home/nfsmnt/folder1
      mount.nfs4: 192.168.200.201:/folder1/ failed, reason given by server: No such file or directory
    The folder is there. I'm positive. I think there is something simple I'm missing, but I'm totally missing it. It seems like there should be a way in fstab to tell nfs which folder on the server I want to mount, but I can only find references to what looks like a root mount point (e.g. 192.168.1.1:/), which I assume is handled by exports on the server. But even with the folders set up in exports, there doesn't seem to be an apparent way to pick and choose which gets mounted. Is it not possible to mount separate folders from the same server to different mount points on the client? Any help appreciated.
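    A likely culprit is fsid=0 appearing on both exports: in NFSv4, fsid=0 marks the single pseudo-root that all client paths are resolved against, so it should appear exactly once. A sketch of the usual layout, exporting the parent as the root and the folders beneath it (same addresses as above, untested):
      # /etc/exports on the server
      /var/shared          192.168.200.101(rw,fsid=0,sync)   # NFSv4 pseudo-root
      /var/shared/folder1  192.168.200.101(rw,sync)
      /var/shared/folder2  192.168.200.101(rw,sync)

      # /etc/fstab on the client (paths are relative to the pseudo-root)
      192.168.200.201:/folder1  /home/nfsmnt/folder1  nfs4  rw  0 0
      192.168.200.201:/folder2  /home/nfsmnt/folder2  nfs4  rw  0 0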

  • I want to use OpenVPN to access the web and email from China. How?

    - by gaoshan88
    My question: How do I use my already existing OpenVPN setup to enable secure, remote web surfing and email checking from open wireless hotspots? Some long-winded details: I am running Ubuntu and have OpenVPN up and working fine as a server. My client machine connects fine as well. However, that just gets me a secure connection to my home network. What I want is to be able to access my VPN server and surf the web or check email securely from anywhere with an open wireless connection. I am frequently in China, and having secure, unblocked access would be a boon (especially since I like to work from tea houses and coffee shops, and I've already had a password sniffed and hacked once). I already know how to tunnel over SSH via a SOCKS proxy using something like:
      ssh -ND 8887 -p 22 [email protected]
    but since I have OpenVPN, I figure why not try it? So... what are the steps involved in making it so I can connect to my VPN and then surf and check mail to my heart's content (slowly, to be sure, but at least it would be secure)? Thx!
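    In broad strokes, the OpenVPN way is to push a default route and a DNS server to the client and NAT the tunnel subnet out of the server. A sketch, assuming the default 10.8.0.0/24 tunnel network and eth0 as the server's outbound interface:
      # server.conf additions: route all client traffic through the tunnel
      push "redirect-gateway def1 bypass-dhcp"
      push "dhcp-option DNS 8.8.8.8"

      # on the server: enable forwarding and NAT for the tunnel subnet
      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE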

  • KVM Guest Reboot Loop

    - by javano
    I have been pulled into a situation where a KVM server (CentOS 6.2) lost power, and upon reboot one of the guests (XP SP3) hasn't started up again. I have SSH'ed in, and someone must have changed something relating to the hypervisors prior to the power loss but not rebooted all the guests. This particular guest wouldn't start because it was configured to use /usr/bin/qemu-system-x86_64, which isn't there now (assuming it was before?). I changed it to use /usr/libexec/qemu-kvm, as this is what all the other guests on this server seem to be using, and it's booting up. Using virt-manager on my local machine I can connect to the display of the XP machine, and it gets as far as this screen: http://support.gateway.com/emachines/issues/2-1131285152-01.gif The problem I face now is that whichever option I choose, the machine just reboots, so it's an endless loop. I thought that perhaps a file system error may be present due to the unclean shutdown. There is an XP SP3 ISO mounted under the guest, which I booted from in an attempt to access the recovery tools, but I don't have the Administrator password! I am out of ideas, and it's turning out to be quite the conundrum. Should I use a 3rd-party live CD to test the FS for errors? How else can I troubleshoot these restarts?
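    Before reaching for a live CD, it may be worth checking the disk image itself from the host. A sketch, assuming the guest is named winxp and its image turns out to be qcow2 (qemu-img check does not apply to raw images):
      # find the guest's backing disk image
      virsh dumpxml winxp | grep 'source file'
      # check a qcow2 image for corruption (guest must be shut off)
      qemu-img check /var/lib/libvirt/images/winxp.qcow2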

  • This web site needs a different Google Maps API key. A new key can be generated at http://code.googl

    - by MJI
    Apologies in advance if this is the wrong place to post. I tried searching for this issue, and all that seemed to come up were questions posted by people who had this issue with their own web pages. I couldn't find questions related to this issue from a layperson's perspective. I'm not a developer. I have no domain, nor do I wish to have one at this time. Rather, I'm just a regular person who likes to upload photos to some photo-related sites. My uploading process constantly gets interrupted by one of these annoying API errors. I get it at least two times: once when I click the page to upload, and again right after the photo has uploaded. It also pops up if I go to edit a photo or delete it. This interrupts my browsing until I click OK. I just want a fix for the annoyance without having to register for a key. I tried before, and it required a web domain; I'd rather not have to create a domain and go through such hoops just to fix this. Is there a solution for this problem that doesn't require registration? Another thing to note: I have used two computers. One has the message pop up and the other doesn't. What is different about the two computers?

  • 503 Error After Microsoft Request Routing Is Installed - 32 bit 64 bit madness

    - by KenB
    I have a requirement to install the Microsoft Request Routing component for IIS 7.5 running on a Windows 2008 R2 SP1 64-bit machine. After installing Microsoft Request Routing via the Web Platform Installer, our ASP.NET 4.0 application gets a "HTTP Error 503. The service is unavailable." The Windows event log error details say:
      The Module DLL 'C:\Program Files\IIS\Application Request Routing\requestRouter.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a AMD64 processor architecture. The data field contains the error number. To learn more about this issue, including how to troubleshooting this kind of processor architecture mismatch error, see http://go.microsoft.com/fwlink/?LinkId=29349.
    I can make this error go away by changing the application pool to run in 32-bit mode, by setting "Enable 32-Bit Applications" to true. However, I would prefer not to have to do that. My questions are: Why is the Microsoft Request Routing feature trying to load a 32-bit version; isn't there a 64-bit version of it? And how do I resolve this issue without having to change my application pool to 32-bit mode?
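    For the record, the 32-bit fallback can also be flipped from the command line rather than the IIS manager; the pool name below is a placeholder, and this is the workaround rather than a fix for the architecture mismatch:
      REM run the application pool in 32-bit mode
      %windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /enable32BitAppOnWin64:true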

  • Custom keyboard map is causing issues with stuck keys

    - by Grumbel
    I have a Microsoft Ergonomic 4000 keyboard and I am running a custom keymap (Dvorak with some stuff for umlauts):
      http://pingus.seul.org/~grumbel/tmp/md5/b054e11505c88e1bfc6ebd5da46bdb78-xmodmap_pke
      http://pingus.seul.org/~grumbel/tmp/md5/f5e42a5b8ba4a034c5945f719b3d2608-xmodmap_pm
    This used to work fine for years, and it still does, except that I am now having issues with a stuck Mode_switch key. When I hit Control_R and Mode_switch at the same time (which happens a lot by accident), the Mode_switch key gets into a 'stuck' state, and all letters I type afterwards come out in their umlaut form as if Mode_switch were pressed. I can unstick Mode_switch by again hitting Control_R and Mode_switch at the same time, but that leaves Gnome in a broken state where it no longer reacts to my Gnome keyboard shortcuts. The key presses themselves are still registered by the window manager, as one can see changes in the applications (the cursor in Gnome Terminal will turn into an unfilled rect, as if the application lost focus), but they don't trigger the bound action. Does anybody have a clue what could be causing this? Or does anybody have an idea how I could debug it? xev doesn't seem to help here, as it reports normal KeyPress/KeyRelease events even when the key is stuck. Also, the Gnome key bindings don't get reported at all when it's in the 'broken' state; I assume they are captured by the window manager before they even reach xev. I am using Ubuntu 10.04 with Gnome and Metacity, and I have disabled all OpenGL-related effects, so Compiz shouldn't interfere. Some general info on which applications are involved in Gnome's key binding handling would be helpful as well; I assume it's Metacity, but restarting Metacity doesn't fix the issue.

  • Having problems VPN'ing into our Windows server network.

    - by Pure.Krome
    Hi folks. When two people (on their notebooks) try to VPN to our office, only the first user gets a connection; the second user always times out. Is it possible for VPN to allow two or more people, sharing the same EXTERNAL PUBLIC IP, to connect and authenticate? Now for some specifics (because those two statements are very broad). I'm not in the IT department; I'm a developer. Our IT department doesn't really care (sigh), so it's up to me to fix this. Our office is all Microsoft-shop stuff, servers and clients. We also have a firewall (WatchGuard brand?) and some other crazy setups (yes, I know, this is very vague :( ). So I'm wondering: is it possible for multiple users, from behind the same public IP, to connect via VPN to a Windows server? I'm under the impression it is. But is it possible this only works when the clients (who are all behind the single public IP; otherwise they would have their OWN IPs) have UPnP running or something? This is killing me, and I need to start asking the right questions, because these guys don't know what they are doing and I can't work without this connection. I know this is a vague question with so many 'if-what's-etc', but maybe some questions/suggestions from you guys might start to lead to solving this problem. EDIT: Network Connection: WAN Miniport (PPTP)

  • Redeploy using Active Directory

    - by Noam Gal
    I am trying to use Group Policy to deploy our msi through AD. For some strange reason, when I overwrite the msi with a newer version and then go to the policy and click on "Redeploy Application", the application gets uninstalled on the users' machines, and all reg keys, binaries and shortcuts are gone from them. The "Add/Remove Programs" list still contains the application entry. I have managed to create a minimal vdproj that does nothing but write its current Product Version to a registry key, and created two versions of it (1.0.0 and 1.1.0). I still face the same problems when using this msi in my AD environment. I did check that my Package Codes and Product Codes are different for the two versions, and that the Upgrade Codes are identical. I also set RemovePreviousVersions to true. Checking with another msi (Firefox 3.0.0 and 3.6.3) I downloaded from a site specifically made for AD deployment, it worked just as expected (first I installed 3.0.0, then I overwrote the msi and clicked on "Redeploy", and the users got 3.6.3 after the next log-off/log-on). What am I missing here?

  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128GB SSD. I am dealing with large datasets (~20GB) that are loaded and processed weekly, each stored in a separate DB for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because the performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence), I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the process. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely", but instead would mean "moved onto the slow physical device". The time taken to do my queries on a spinning disk would be less than that taken to completely dump, drop, load, drop, and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it works by merging on a directory level (from what I understand), so I'm still stuck with using multiple directories. Any help appreciated; thanks in advance.
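    Short of true multi-directory support, the classic workaround is a per-database symlink out of the data directory. A sketch with hypothetical paths and database name (the service name varies by distro, and this only helps when data lives in per-database files, i.e. MyISAM or InnoDB with innodb_file_per_table, not a shared ibdata1):
      service mysqld stop
      # move one archived database's directory to the big spinning disk
      mv /var/lib/mysql/dataset_2012w15 /mnt/archive/mysql/
      # leave a symlink behind so MySQL still sees the database
      ln -s /mnt/archive/mysql/dataset_2012w15 /var/lib/mysql/dataset_2012w15
      service mysqld start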

  • Gmail won't forward mail sent to myself.

    - by BHare
    I own a dedicated server with a domain, we'll say foobar.com. I use Google Apps to manage my email SMTP servers. Now, I don't want to check two Gmail inboxes: I have my own personal one, and then I have foobar.com's inbox from Google Apps. Naturally the easiest thing to do is just have all of foobar's emails forwarded to my personal one, so then I am only checking one inbox. This is all fine and dandy. I use msmtp with a wrapper that uses /etc/aliases. I have it set so any mail going to root (things from cron, etc.) will go to [email protected]. So when Google Apps (foobar.com) gets an email from the very address I have set up with it ([email protected]), it doesn't forward the message. This is a "feature" of Gmail/Google Apps, I suppose. How do I get around it? Workarounds? I could just set the alias to my personal email, but I wanted a place to have all foobar-related emails archived in one place (Google Apps).
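    One possible workaround, assuming msmtp lets you set the envelope sender: keep the alias pointing at the Google Apps address but send from a different address, so sender and recipient no longer match. A sketch with hypothetical names:
      # /etc/aliases
      root: [email protected]

      # ~/.msmtprc (or the system msmtprc): send as a distinct address
      from [email protected]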

  • Need advice for choosing software/hardware for virtualization.

    - by Anatoly
    Currently we have these servers:
      - Windows SBS 2003 Premium on an IBM X266, double Xeon F43, 2GB RAM: DC, Exchange (70 users), MSSQL.
      - Windows 2003 R2 32-bit on an IBM x3400 with two Xeon E5310s and 4GB RAM: terminal server (40+ users), an ERP application based on the uniPaaS platform from Magic Software, and Pervasive SQL.
      - Ubuntu 8.04 (simple PC box) with a Squid proxy, a GLPI system and a phpBB3 forum for internal use.
    Recently the number of concurrent users on the terminal server passed 40 in rush hours, and it gets stuck frequently. Therefore we need an upgrade. I am thinking about transferring all physical servers to virtual servers based on a cluster of 2 physical servers, to reduce downtime. I think we will grow to 50-60 concurrent terminal users in rush hours. I also plan to virtualize 10-15 Win XP/7 workstations (office, ERP etc.), and there is a small chance we'll add Asterisk/HylaFAX for 100 users (if that is possible on the same VM). We also need NAS storage of 2-3TB. What hardware upgrade/purchase do we need to complete this task? Which VM solution is preferable, VMware or Hyper-V? What backup software should we choose, Acronis or something else? Thank you in advance.

  • Generic/Text Printer on Windows 7 not prompting for file name

    - by Trevor Tippins
    Hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file... but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected), but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job. The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling, but this only serves to hang the terminal session too. I've also tried running the emulator in Windows 2000 Compatibility Mode and as Administrator in case it was something like that, but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer), but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross-grade/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix?
      Remote OS: AIX 5
      Client OS: Windows 7 Pro (32-bit)
      Printer: Generic/Text on a FILE port
      TE Software: MultiView 2000 (32-bit)
    Thanks in advance.

  • Looking for way to log process terminations on OS X (Mac)

    - by Stan Sieler
    I'm looking for a way to log all process terminations on my Mac (OS X 10.6.8). (And see pid, timestamp, process name) I've implemented something similar for HP-UX, but it required a kernel-level driver and intercepting several variations of "exit()" (the normal one, and the one invoked on behalf of a process while it's aborting). Why do I want the info? I've been seeing messages in my system log file (dmesg) like: CODE SIGNING: cs_invalid_page(0x1000): p=91550[GoogleSoftwareUp] clearing CS_VALID CODE SIGNING: cs_invalid_page(0x1000): p=92088[GoogleSoftwareUp] clearing CS_VALID Although dmesg lacks timestamps, apps/Utilities/Console : Database : all : search for CS_VALID shows that the messages appears about once every 58 1/2 minutes. I suspect the number after "p=" is a process id (pid) ... but for a process that has long since terminated by the time I see the message. So, if there was a process termination log mechanism that recorded the pid, the time of termination, the reason for termination, and the process name (at time of termination), that would probably allow me to determine who's causing those errors to be logged! (No, I'm not running Chrome on my Mac, and "ps -ef | grep -i goog" gets no hits either ... I'm not consciously running any Google apps on the Mac) thanks, Stan [email protected]

  • Extending a home wireless network using two routers running tomato

    - by jalperin
    I have two Asus RT-N16 routers each flashed with Tomato (actually Tomato USB). UPSTAIRS: Router 'A' (located upstairs) is connected to the internet via the WAN port and connected via a LAN port to a 10/100/1000 switch (Switch A). Several desktops are also attached to Switch A. Router A uses IP 192.168.1.1. DOWNSTAIRS: I've just acquired Router 'B' and set it to IP 192.168.1.2. I have a cable running from Switch A downstairs to another switch (Switch B). Tivo, a blu-ray player and a Mac are connected to Switch B. My plan was to connect Router B to Switch B so that I have improved wireless access downstairs. (The wireless signal from Router A gets weak downstairs in a number of locations.) How should I configure Router B so that all devices in the house can see and talk to one another? I know that I need to change DHCP on Router B so that it doesn't cover the same range as DHCP on Router A. Should I be using WDS on the two routers, or is that unnecessary since I already have a wired connection between the two routers? Any other thoughts or suggestions? Thanks! --Jeff

  • How can something relevant to graphics completely kill a motherboard?

    - by leladax
    I was coding something in OpenGL, and after a bug there was an 'OS slowed down' situation. After a few seconds the screen went blank and the laptop shut down. Now not even an LED turns on, on battery or not. It doesn't appear to be the AC adapter or the battery, since there was some charge left when it died, and when it's connected to AC the laptop produces a very slight 'clicking' noise near the AC connection (very faint, one has to be very careful to notice it; I don't know if it was there all along, to be honest). I suspect the motherboard died, as in something between the point where it gets AC or battery power and the point where it actually feeds itself. But I can't figure out how that effect was produced by the OpenGL bug or graphics overheating. If the graphics alone had died, there should at least be some indication that the laptop is barely alive: at least an LED, a sound, anything. The laptop is instead completely dead (other than the faint 'clicking' I mentioned). Does anyone have expert advice on this? I'm especially interested in any ideas connected to "graphics overheated/bugged and killed the motherboard". I have lengthy experience with this stuff as a hobbyist and it really puzzles me. It's not just an "AC died" situation I can easily google.

  • Delegating account unlock rights in AD

    - by ewall
    I'm trying to delegate the rights to unlock user accounts in our Active Directory domain. This should be easy, and I've done it before... but every time the user tries to unlock an account (using the LockoutStatus tool), he gets denied with the error "You do not have the necessary permissions to unlock this account." Here's what I've done:
      - I created a domain local group and added the members who should have the rights. This was created over a week ago, so the users have logged out and in again since.
      - In ADUC, I used the Delegate Rights wizard on the OU which contains our user accounts to grant the group the Read lockoutTime and Write lockoutTime permissions, per MSKB 279723.
      - I double-checked in ADSIEdit that the permissions were applied correctly.
      - I forced replication between all domain controllers to ensure the permission changes were copied over.
      - The user testing it has logged out and in again to ensure any changes to his account have been applied.
    That covers all the bases I can think of. Anything else I could be missing?
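    As a cross-check, the same lockoutTime permission can be granted (or inspected) with dsacls, which sometimes catches cases where the wizard applied the ACE to the wrong object class; the OU, domain and group names below are placeholders:
      REM grant Read/Write on lockoutTime for user objects under the OU (inherited to the subtree)
      dsacls "OU=Staff,DC=example,DC=com" /I:S /G "EXAMPLE\AccountUnlockers:RPWP;lockoutTime;user"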

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an elastic load balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Misses and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference this as a viable solution. It appears that when we point the distribution to a single server in our production cluster, CloudFront properly caches our assets. Is it possible that the sticky-sessions cookie and the subsequent header that gets added by it could be an issue?
      Cache-Control: no-cache="set-cookie"  //Added by load balancer
    Any ideas? FYI - currently, we have our custom origin pointing to a single EC2 instance, so caching is working correctly - in case you try to curl the file below. Example headers:
      curl -I http://static.quick-cdn.com/css/9850999.css
      HTTP/1.0 200 OK
      Accept-Ranges: bytes
      Cache-Control: max-age=3700
      Cache-Control: no-cache="set-cookie"
      Content-Length: 23038
      Content-Type: text/css
      Date: Thu, 12 Apr 2012 23:03:52 GMT
      Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
      Server: Apache/2.2.17 (Ubuntu)
      Vary: Accept-Encoding
      X-Cache: RefreshHit from cloudfront
      X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
      Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
      Connection: close

  • Winamp has slow/skipping video playback on Windows 7

    - by Roy Rico
    Hello, I have Windows 7 x64 (7600, 90-day trial version) and Winamp 5.6 installed. When I play a video in Windows Media Player, the video plays smoothly; however, when I play a video in Winamp, the video is mostly OK when played back at its original size (but not completely), and if I play it back fullscreen, the playback gets really slow. The video's audio track plays just fine. I have a Dell XPS 420 computer (8GB of RAM) with an Nvidia GeForce 8800 GTS 512 video card, and I've updated to the latest drivers. I have the default Windows 7 codecs plus the CCCP codec pack, which used to be all I needed under Windows XP to play all types of videos. Are the codecs needed under Windows 7 the same? What's going on? UPDATE: As suggested, I turned off Aero and Winamp ran just fine again. So do I just have to wait for Winamp to be rewritten to work with the way Vista/Windows 7 renders? UPDATE 2: Winamp has updated their player, and it works great with Windows 7 now.

  • Multiple servers vs. 1 big server performance

    - by pistacchio
    Hi to all! My team of developers has suggested a server structure for an upcoming project we are developing. Our structure is "logical", meaning that the various logical components of the application (it is a distributed one) rely on different servers. Some components are more critical than others and will be subjected to more load. Our proposal was to have 1 server per component, but the hardware guys suggested replacing the various machines with a single, bigger one running virtual servers. They're gonna use blade servers. Now, I'm not an expert at all, but my question to the guys was: so if we need, for example, 3 2GHz CPU / 2GB RAM machines and you give me 1 machine with 3 2GHz CPUs and 6GB of RAM, is it the same? They told me it is. Is this accurate? What are the advantages or disadvantages of the two solutions? What are the generally accepted best practices? Could you point me to some URL reference dealing with the problem? Thank you in advance! EDIT: Some more info. The (internet/intranet) application is already layered. We have some servers in the DMZ that will expose pages to the internet, and the databases are on their own machines. What we want to split (and they want to join) are some web servers that mainly expose web services: one is a DAL that communicates with the database layer, one is our Single Sign On / User Profile application that gets called once per page, and one is a clone of what is seen on the Internet, to be used on our LAN.

  • Secure data from a server to a workstation using jumper hosts

    - by apalsson
    Hello. I have a WWW server, and my problem is that the content is sensitive and should not be accessible to people without proper credentials. How can I improve the ease of use but still maintain security in the following scenario? The server is accessed through a "jumper host": the client connects to the jumper using a VPN connection and uses Remote Desktop to access the jumper. From the jumper he uses Remote Desktop again to access the server. Finally, on the server, the user can access content using a WWW browser. All the way from the VPN client to the WWW browser, authentication requires a SmartCard token. This seems quite secure to me. Content only gets mirrored over the Remote Desktop session between server and jumper, so there are no cached files to worry about, and the connection between jumper and client is protected using VPN (SSL), so there is no eavesdropping. But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience of accessing my server without compromising security? Thanks.

  • No password is complex enough

    - by Blue Warrior NFB
    I have one user in my AD domain who seems unable to self-select a password. I may have another one, but they're on a different enough password-expiration schedule that I can't remember who it is right now. I can set a password via ADU&C just fine, but when he tries it via Ctrl-Alt-Del he gets the "doesn't meet complexity" message. Figuring he was just doing something like 'pAssword32', I did some troubleshooting of my own, and sure enough it doesn't want to take a password that way. He's one of our users that habitually uses a local account and then maps drives using his AD credentials so he doesn't get the "your password will expire in 4 days, maybe you should change it" prompts, so he's a frequent "my password expired, can you fix it" flyer. I don't want to keep having him set it via ADU&C over my shoulder every N days; I'm just fine setting temp passwords of 48 characters of keyboard-slamming and letting him change it to something memorable. My environment is at the Windows 2008 R2 functional level, and I am using fine-grained password policies. In fact, I have two such policies: one for normal users (minimum length, remembered passwords) and one for special utility accounts. The password complexities I've tried match both policies for length and character-set selection. The permissions on the user object itself look normal; SELF does indeed have the "Change Password" right. Is there some other place I should be looking for things that can affect this?
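    With fine-grained policies in play, it may be worth confirming which PSO actually wins for this particular user; a sketch using the 2008 R2 AD PowerShell module (the username is a placeholder):
      Import-Module ActiveDirectory
      # shows the resultant fine-grained password policy applied to the account
      Get-ADUserResultantPasswordPolicy -Identity jdoe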
