Search Results

Search found 9416 results on 377 pages for 'dont repeat yourself'.


  • Windows 2003 Domain Controller Very Upset about NIC Teaming

    - by Kyle Brandt
    I set up BACS (Broadcom Advanced Control Suite) teaming to team two NICs on a Windows 2003 Active Directory domain controller. Networking still works; I can ping the gateway and so on, but both DNS and Active Directory fail to start with various 40xx errors. The team I created is Smart Load Balancing with Failover, with one backup NIC and only one NIC in load balancing (so really it is just failover). I gave the team the same IP address that the single active NIC had before. Has anyone seen this before, or have any ideas what the problem might be?

        Event Type: Error
        Event Source: DNS
        Event Category: None
        Event ID: 4015
        Date: 3/7/2010
        Time: 10:33:03 AM
        User: N/A
        Computer: ADC
        Description: The DNS server has encountered a critical error from the Active Directory. Check that the Active Directory is functioning properly. The extended error debug information (which may be empty) is "". The event data contains the error.

        Event Type: Error
        Event Source: DNS
        Event Category: None
        Event ID: 4004
        Date: 3/7/2010
        Time: 10:33:03 AM
        User: N/A
        Computer: ADC
        Description: The DNS server was unable to complete directory service enumeration of zone .. This DNS server is configured to use information obtained from Active Directory for this zone and is unable to load the zone without it. Check that the Active Directory is functioning properly and repeat enumeration of the zone. The extended error debug information (which may be empty) is "". The event data contains the error.

        Event Type: Error
        Event Source: NTDS Replication
        Event Category: DS RPC Client
        Event ID: 2087
        Date: 3/7/2010
        Time: 10:40:28 AM
        User: NT AUTHORITY\ANONYMOUS LOGON
        Computer: ADC
        Description: Active Directory could not resolve the following DNS host name of the source domain controller to an IP address. This error prevents additions, deletions and changes in Active Directory from replicating between one or more domain controllers in the forest. Security groups, group policy, users and computers and their passwords will be inconsistent between domain controllers until this error is resolved, potentially affecting logon authentication and access to network resources.
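
    (A few first-pass diagnostics, offered as a hedged suggestion: these are standard Windows commands, though whether they apply depends on how the team's virtual adapter registered itself in DNS. From an elevated prompt on the DC:)

        rem Re-register the DC's A and SRV records against the new team adapter
        ipconfig /registerdns
        nltest /dsregdns
        rem Verify DNS and connectivity from AD's point of view
        dcdiag /test:dns /v
        dcdiag /test:connectivity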

    Read the article

  • Why am I getting a warning that Windows is logging on with a temporary profile to run a Task Scheduler task?

    - by Dan C
    I am having a strange problem with the Windows Server 2008 Task Scheduler. I have to run a small command-line application every few minutes. This application just executes a quick web service call on the localhost and adds an entry to a log file, so it should not need anything special in terms of permissions.

    First, I created a new user account "my_scheduler" just for the task. This account is a member of the Users group (not sure what other settings I should turn on/off) and I set its password to never expire. I then created a task to run the application every few minutes. I set it to "Run whether user is logged on or not" and turned on "Do not store password. The task will only have access to local resources" (I did this since it's not hitting anything on the network). I did not turn on "Run with highest privileges" since it does not seem to need them. I set the schedule to "After triggered, repeat every 30 minutes for a duration of 1 day" and "Allow task to be run on demand" (no other settings enabled).

    However, I notice in the Event Log a bunch of these warnings whenever the task is run: "Windows cannot find the local profile and is logging you on with a temporary profile. Changes you make to this profile will be lost when you log off." Even though I get the warning, the task is executing (I see the log entries appearing).

    Another (possibly related) issue is that I also see it starting multiple copies of the task (within a few seconds of each other) even though it should only start one. This is also a big problem. Any idea how I can fix this? Thanks in advance, Dan
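
    (One hedged observation: "Do not store password" makes the task log on with an S4U-style token that never loads a user profile, which is commonly reported as the trigger for exactly this temporary-profile warning. A sketch of recreating the task with a stored password instead, using standard schtasks switches; the task name, program path, and password are placeholders:)

        schtasks /Create /TN "WebServicePing" /TR "C:\Tools\MyApp.exe" ^
                 /SC MINUTE /MO 30 /RU .\my_scheduler /RP MyP@ssw0rd

    (For the duplicate starts, the task's Settings tab has an instance policy - "If the task is already running... Do not start a new instance" - worth checking as well.)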

    Read the article

  • RoboCopy fails with "the specified network name is no longer available"

    - by Justin Scott
    We have a scheduled task that runs robocopy periodically to mirror a rather large folder structure from one server to another (thousands of folders, 100,000+ files, 50+ GB in size). There is a share on the receiving server where the mirror gets stored. We're running the task from the origin server, connecting out to the share on the receiving end. Both servers run Windows Server 2003 and are connected to the same network switch (100 Mbps).

    The process will sometimes complete all the way through without error. More often than not, however, at some point during the process (seemingly random as to where), robocopy will fail with the error "The specified network name is no longer available." It will wait 30 seconds, try the file again, and eventually give up after a number of retries. The process repeats at the next scheduled interval and may complete... or not. When this occurs, I am not able to access the share at all on the destination server from anywhere on the network for up to 30 minutes. There is nothing else on the network using this share.

    My question is: what does this error mean specifically? Why is the share "dropping off" and becoming inaccessible? Is there a way to prevent it and make the file mirroring more stable?
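
    (For what it's worth, a hedged sketch of the switches often used to help a large mirror survive flaky SMB sessions: restartable mode, bounded retries, and an inter-packet gap to avoid saturating a 100 Mbps link. The paths and log location are placeholders:)

        robocopy D:\Data \\DestServer\Mirror /MIR /Z /R:5 /W:30 /IPG:50 /NP /LOG+:D:\Logs\mirror.log

    (/Z restarts partially copied files, /R and /W bound the retry loop, and /IPG inserts a delay between packets to throttle the transfer.)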

    Read the article

  • Backup program for Windows using non-proprietary format?

    - by Cristi Diaconescu
    I'm looking at the various local backup programs for Windows, and I was wondering which of them use a non-proprietary backup format. By non-proprietary, I mean I want to be able to access at least the latest version of the backed-up files either directly, or by using an open-standard format like zip/7z/rdiff. The other thing I'm looking for in a backup program is the ability to create incremental backups. What I have found so far:

    - SyncBack copies files as-is, using separate directories for versioning.
    - Pretty much the same for all the "roll your own" task scheduler + rsync/xcopy32/robocopy/MS SyncToy/etc. solutions.
    - GFI Backup appears to be using Zip files, at least in their "Business" version; not sure about the free "Home" version. Didn't try it yet, but it's next on my list.
    - Mozy (!) supports local backup starting with v2.0 and basically provides a second local copy on a separate partition. Subjectively, it feels slow and resource-intensive (I think it took more than a week to finish the first local backup of ~300 GB), and it does not appear to offer file versioning (arguably, you can get older file versions online). On the positive side, it looks like the local backup is integrated into the restore process, which has traditionally been a masochistic experience (and this goes for any online backup provider).

    Other suggestions? I favor ease of use over tons of options (e.g. SyncBack is very flexible, but it offers sooo many ways to shoot yourself in the foot...).
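
    (On the roll-your-own option: a minimal sketch of an archive-bit incremental, with placeholder paths. xcopy's /M switch copies only files whose archive attribute is set and then clears it, so the first run is a full copy and later runs copy only what changed - and the "format" is just the plain files themselves:)

        xcopy C:\Data D:\Backup /M /E /C /Y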

    Read the article

  • Performance variation

    - by Ree
    During my time spent working with multiple machines, I have noticed that the performance of the same machine doing the same tasks in the same order differs, and sometimes the difference is big enough to be noticeable. This applies to all the machines I've owned and/or maintained (old and modern). Some examples (many of them you may have noticed yourself) that sometimes complete in different time frames:

    - POST
    - OS installation
    - Hardware tests and operations (usually executed within a customized OS such as one of the many DOS variants), HDD tests and "low level" formats
    - Software installation or other tasks (such as benchmarks) within a general-purpose OS (Windows, Linux, etc.)

    I can imagine this is caused by the fact that a machine is built from many components that have to communicate as a whole, and since the mechanical and electronic parts aren't perfect, overhead occurs. In the last example, I assume the OS complexity and the multiple concurrently running processes have some additional effect as well. However, I'm wondering if this hardware imperfection and overhead is really large enough to be humanly noticeable? Maybe there are other factors that are as influential, or even more so? So, in short - why?

    To emphasize: the difference is noticeable on the same machine performing the same tasks, and this applies to ANY machine in my experience. I'm not comparing machine-to-machine performance.

    Read the article

  • OpenPGP does not work in my Thunderbird installation

    - by zerozero
    Hello community, as mentioned above, I have encountered serious trouble. Here are the versions of the related software: SuSE 11.2, Thunderbird 3.1.6 (released October 27, 2010), Firefox 3.6.12. I created an installation with its own partition both for the user and for the Thunderbird mails. For a new installation on different hardware, I took these partitions. When I wanted to read the emails, I got an error message like this:

        The GPG agent for your GnuPG version 2.0.12 couldn't be started

    Further, I got an error message about access to Enigmail services. The file jar:file:///usr/lib/mozilla/extensions/{3550f703-e582-4d05-9a08-453d09bdfdc6}/{847b3a00-7ab1-11d4-8f02-006008948af5}/chrome/enigmail.jar!/locale/de-DE/enigmail/help/initError.html couldn't be found. I found out that this path comes neither from Thunderbird nor from Firefox, but from Enigmail. I installed several (un)packing programs; the only effect was that the OpenPGP entry appeared in the Thunderbird menu. The errors described above repeat on every attempt to read an email. I deleted and re-installed Enigmail, but the errors don't disappear. What can I do to get rid of these error messages? Thanks in advance

    Read the article

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast (Corvallis, OR) to the east coast (NY, NY). However, I am seeing some disturbing latency numbers from my location (Berkeley, CA) to the NYC host. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes:

    - Berkeley to NYC server: 215 ms latency, 46 ms transfer time, 261 ms total
    - Berkeley to Corvallis server: 114 ms latency, 41 ms transfer time, 155 ms total

    Some URLs if you want to try yourself:

    - http://careers.stackoverflow.com/content/cso/img/logo.png (NY, NY)
    - http://serverfault.com/cache/logo.png (Corvallis, OR)

    It makes sense that Corvallis, OR is geographically closer to Berkeley, CA, so I expect the connection to be a bit faster... but I'm seeing an increase in latency of +100 ms when I perform the same test to the NYC server. That seems... excessive to me. Particularly since the time spent transferring the actual data only went up 10%, yet the latency went up ten times as much! That feels... wrong... to me.

    I found a few links here that were helpful (through Google no less!):

    - http://serverfault.com/questions/63531/does-routing-distance-affect-performance-significantly
    - http://serverfault.com/questions/61719/how-does-geography-affect-network-latency
    - http://serverfault.com/questions/6210/latency-in-internet-connections-from-europe-to-usa

    ...but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
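
    (As a rough sanity check on what physics allows: the great-circle distance from the Bay Area to NYC is about 4,100 km, and light in fiber travels at roughly 200,000 km/s, so the theoretical minimum round trip is about 2 x 4,100 / 200,000 = 41 ms. Real fiber paths are longer than the great circle and every router hop adds delay, which is why 70-80 ms coast-to-coast RTT is the figure commonly quoted; a measured 215 ms would then suggest something beyond raw distance, such as a suboptimal route.)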

    Read the article

  • How to count the most recent value based on multiple criteria?

    - by Andrew
    I keep a log of phone calls like the following, where the F column is LVM = Left Voice Mail, U = Unsuccessful, S = Successful:

            A    B        C       D           E         F
        1   1    Smith    John    11/21/2012  8:00 AM   LVM
        2   2    Smith    John    11/22/2012  8:15 AM   U
        3   3    Harvey   Luke    11/22/2012  8:30 AM   S
        4   4    Smith    John    11/22/2012  9:00 AM   S
        5   5    Smith    John    11/23/2012  8:00 AM   LVM

    This is a small sample; I actually have over 700 entries. In my line of work, it is important to know how many unsuccessful calls (LVM or U) I have made since the last successful one (S). Since values in the F column can repeat, I need to take into consideration both the B and C columns. Also, since I can make a successful call with a client and then be trying to contact them again, I need to be able to count from the last successful call.

    My G column is completely open, which is where I would like to put a running total for each client (ideally G5 would = 1, while G4 = 0, G3 = 0, G2 = 2, and G1 = 1, but I want these values calculated automatically so that I do not have to scroll through 700 names).
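
    (A hedged sketch of one single-formula approach - untested here, so treat it as a starting point. Entered in G1 with Ctrl+Shift+Enter and filled down, it counts the matching client's non-"S" calls that fall after that client's most recent "S" row, or after row 0 if there has been no success yet:)

        =SUMPRODUCT(($B$1:B1=B1)*($C$1:C1=C1)*($F$1:F1<>"S")*(ROW($F$1:F1)>MAX(($B$1:B1=B1)*($C$1:C1=C1)*($F$1:F1="S")*ROW($F$1:F1))))

    (Because the <>"S" factor is zero on the current row when F1="S", successful calls themselves show 0, matching the expected G3/G4 values above.)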

    Read the article

  • Linux errors when resuming from RAM

    - by TuxPotato
    The last two times that I have resumed my laptop from sleep, it has hung and given me this set of errors:

        [drm:atom_op_jump] *ERROR* atombios stuck in loop for more than 1sec aborting
        [drm:atom_execute_table_locked] *ERROR* atombios stuck executing E692 (len 460, WS 0, PS 4) @0xE6D3
        hda_intel: azx_get_responce timeout, switching to single_cmd mode: last cmd=0x01170700
        ata6: softreset failed (device not ready)
        ata4: softreset failed (device not ready)
        [drm:atom_op_jump] *ERROR* atombios stuck in loop for more than 1sec aborting
        [drm:atom_execute_table_locked] *ERROR* atombios stuck executing E692 (len 460, WS 0, PS 4) @0xE6D3

    The last two messages repeat two more times. The first time this happened, Linux's magic SysRq worked and did a soft reboot, and after that everything was fine until it went to sleep again. It wakes up and gives me this. Here are the laptop stats:

    - Toshiba Satellite L455D-S5976
    - AMD Sempron SI-42 processor
    - 2 GB DDR2 RAM
    - HD TruBrite display
    - ATI Radeon graphics (integrated)
    - Running Ubuntu 10.10 32-bit

    I'm not sure about the hard drive, but it's a 250 GB drive with one NTFS partition, two hidden NTFS, one Linux swap, and one ext4. Can someone tell me what's wrong with my laptop?

    NOTE: This only happens when I close the screen. My computer doesn't go to sleep with the screen open.

    Read the article

  • Cannot upload files bigger than 8GB to Amazon S3 by multi-part upload due to broken pipe

    - by spencerho
    I implemented S3 multi-part upload, both the high-level and low-level versions, based on the sample code from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html

    When I uploaded files of size less than 4 GB, the upload processes completed without any problem. When I uploaded a file of 13 GB, the code started to show IO exceptions: broken pipes. After retries, it still failed. Here is the way to repeat the scenario:

    1. Take the 1.1.7.1 release and create a new bucket in the US Standard region
    2. Create a large EC2 instance as the client to upload the file
    3. Create a file of 13 GB in size on the EC2 instance
    4. Run the sample code from either the high-level or low-level API S3 documentation page on the EC2 instance
    5. Test any one of three part sizes: the default part size (5 MB), or a part size of 100,000,000 or 200,000,000 bytes

    So far the problem shows up consistently. I attached here a tcpdump file for you to compare. In there, the host on the S3 side kept resetting the socket.
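
    (A bit of hedged arithmetic on the limits, in case it rules anything out: S3 multipart caps an upload at 10,000 parts, so at the 5 MB default part size a 13 GB file is roughly 13,000 MB / 5 MB = 2,600 parts, comfortably inside the cap, which would only bite at about 10,000 x 5 MB = 48.8 GB. The failure size here shouldn't be a hard API limit, then; resets from the S3 side point more toward individual part connections stalling or idling out than toward total object size.)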

    Read the article

  • Create Windows AMI with instance storage

    - by Jonathan Oliver
    I have a business use case and workflow where local/instance/ephemeral storage for an EC2 instance is ideal. Unfortunately I'm coupled to a Windows platform for this particular task, and the EC2 Windows offering appears to have some deficiencies related to AMI creation. In essence, I'm trying to figure out if there's a way to attach local instance storage to a Windows EC2 instance using the typical command-line interface (because the Amazon website GUI doesn't support it) and then to somehow create an AMI based upon that. I've tried creating a snapshot and then creating a Windows AMI based upon the snapshot, but of course the docs say this is unsupported and makes an unbootable AMI. In short, here's what I'm trying to do:

    1. Be able to run a Windows instance (EBS/S3 instance doesn't matter)
    2. Attach local instance storage as drive D:
    3. Persist that configuration as an AMI such that I can start lots of them as necessary from either the GUI, command line, or REST API
    4. Be able to take a launched instance, update software, shut down, and create another AMI based upon that
    5. Wash, rinse, repeat

    One other potential option which isn't horrible, but isn't ideal, is to create an AMI which has two EBS volumes already attached (system+apps and data). Essentially, every time I start up an instance based upon the AMI, it'll create two new EBS volumes of predetermined size. I'm trying to avoid that scenario if possible.
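
    (A hedged sketch with the old EC2 API command-line tools, which accept block-device mappings that the console of that era didn't expose. The AMI ID, key name, snapshot ID, and device name are placeholders, and whether Windows surfaces the volume as D: still has to be handled inside the instance:)

        rem Launch with ephemeral storage mapped in at run time
        ec2-run-instances ami-xxxxxxxx -k my-keypair -b "/dev/sdb=ephemeral0"

        rem Or bake the mapping into the AMI itself when registering it, so every
        rem instance started from it gets the instance-store volume automatically
        ec2-register -n "my-windows-ami" -s snap-xxxxxxxx -b "/dev/sdb=ephemeral0"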

    Read the article

  • Windows Media Player 12 Library import keeps dying

    - by duckworth
    I cannot get WMP 12 to import my library. I have searched around various forums and tried all the common solutions, like disabling media sharing, deleting my %LOCALAPPDATA%\Microsoft\Media Player directory and reimporting, etc., but nothing works. I have even removed the Media features from Windows Setup and re-added them.

    I have a large MP3 collection shared on the network from another Windows box. I add the folder (tried as a mapped drive and as a UNC path) and it begins importing. About 30 minutes into the import (when CurrentDatabase_372.wmdb hits just under 400 MB), WMP stops importing, all of the icons in WMP turn to red x's, and my library is gone. I close and reopen WMP 12: the library is empty, CurrentDatabase_372.wmdb is small, and it starts importing again. Rinse, lather, repeat. I am going nuts, as WMP 11 on Vista handles this same setup perfectly. I am at my wits' end on what else to try. I am running a legit Windows 7 Ultimate x64 RTM install. Here is a screenshot of what WMP12 looks like when the import dies:

    Any other ideas?

    Edit: OK, I just confirmed this is definitely a problem not specific to my computer or configuration. I just did a clean installation of Windows 7 Ultimate x86 on an old test machine, opened WMP12, and added the same network folder of MP3s; it crashed about an hour into the import with the same appearance as the screenshot I posted above, and the library disappears. So the problem has to be one of several things:

    - The large size of the library
    - The fact that the library is on the network
    - A specific file or files causing the player to crash

    Read the article

  • Songbird looping my library playlist

    - by Logi
    I'm using Songbird (on Windows) as my iTunes replacement. There are two things that annoy me about Songbird:

    1) It has a web browser built in. I can ignore this as I don't use it. But, seriously, a web browser? :/

    2) It loops my library. This, I don't like.

    The way I play my audio is by launching Windows Explorer, navigating to my media library, and double-clicking on a single MP3. I only want to hear that one MP3. I don't use the Media Import function in Songbird - or any other media player - but it insists on adding every song I play to the library. Disabling this would be excellent but, falling short of that, I'd at least like to stop it repeating every song in the library. When Songbird finishes playing the last song in the library (which is the song I just opened via Windows Explorer, appended to the library), it jumps to the top of the library and starts playing all the songs again. How can I stop this? Songbird is set to "Repeat none".

    Read the article

  • Batch file reads in text file and replaces quotes with variables into new file?

    - by John
    I have customer pizza lists that I have saved into 5 separate txt files from my database, in this format:

        Filename = 25Percent.TXT
        "555-1211"
        "555-1212"
        "555-1223"
        ...etc.

    Each list is a phone number in quotes, and each list varies in length.

    Part I: I have two variables that I would like to substitute for the opening and closing quote on each line in the 5 text files. The two variables would be like:

        Var A = < Discount Pizza price for phone number is "
        Var B = " 25 %

    So I would like to run a batch file that reads each line in the text file and writes the following into a new text file, replacing the quotes with the variables:

        New Filename = 25PercentFinished.TXT
        < Discount Pizza price for phone number is "555-1211" 25 %
        < Discount Pizza price for phone number is "555-1212" 25 %
        < Discount Pizza price for phone number is "555-1223" 25 %

    Then I would repeat for the 30percent.txt, 35percent.txt, 40percent.txt, and finally 50percent.txt files.

    Part II: I would also like to append the 5 new files together, with the append command? I am assuming that the SET var command would also be used? Not sure what to do here.
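
    (A minimal batch sketch of both parts - hedged, since "Combined.txt" is a made-up name for the appended result; the input and output filenames follow the examples above. The wrapper text is fixed here, so no SET variables are strictly needed; %%P supplies the percentage for both the filename and the suffix:)

        @echo off
        setlocal
        del Combined.txt 2>nul
        for %%P in (25 30 35 40 50) do (
            rem Part I: wrap each quoted number with the prefix and suffix
            (for /f "usebackq delims=" %%L in ("%%PPercent.txt") do (
                echo ^< Discount Pizza price for phone number is %%L %%P %%
            )) > "%%PPercentFinished.txt"
            rem Part II: append this finished list onto the combined file
            type "%%PPercentFinished.txt" >> Combined.txt
        )

    (Two batch quirks to watch: the caret escapes the < so it isn't treated as redirection, and a doubled %% inside a .cmd file echoes a literal percent sign.)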

    Read the article

  • Makecert.exe hangs

    - by Robert
    I was following the steps in Scott Hanselman's blog post describing how to create a certificate authority and code-signing certificate for PowerShell scripts. Initially, I created the certificate authority and a personal certificate and used it to sign a PowerShell script successfully. All went as described in the blog post.

    The problem starts (as most do) when I did something that was (probably) stupid, although it seemed reasonable at the time. I wanted to start over and repeat the process with a clean slate, so from the MMC certificates snap-in console I deleted the personal certificate and the certificate authority I had created previously. After that, any time I try to use makecert (just as I did the first time around), makecert either hangs or faults (which prompts to end or debug).

    Did I hose something up by deleting via the certificates snap-in? It didn't complain or warn me that it could be potentially hazardous. Is this just coincidence, and something else entirely could be hosed? I have Event Log entries from the times when makecert crashed, which all look very similar; here is one:

        Log Name: Application
        Source: Application Error
        Date: 8/5/2009 3:55:04 PM
        Event ID: 1000
        Task Category: (100)
        Level: Error
        Description: Faulting application makecert.exe, version 6.0.6000.16384, time stamp 0x4545910b, faulting module ntdll.dll, version 6.0.6002.18005, time stamp 0x49e03821, exception code 0xc0000005, fault offset 0x00067409, process id 0xe58, application start time 0x01ca160efdf30625.

    Anyone have any ideas as to what exactly caused this and/or what I can do to fix it? I'm on 32-bit Vista Enterprise w/SP2.
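
    (For reference, and hedged because these are reconstructed from the standard PowerShell code-signing walkthrough rather than from the post itself, the two makecert invocations in question are typically: a self-signed root dropped into the machine's Root store, then a code-signing cert issued from it:)

        makecert -n "CN=PowerShell Local Certificate Root" -a sha1 -eku 1.3.6.1.5.5.7.3.3 ^
                 -r -sv root.pvk root.cer -ss Root -sr localMachine
        makecert -pe -n "CN=PowerShell User" -ss MY -a sha1 -eku 1.3.6.1.5.5.7.3.3 ^
                 -iv root.pvk -ic root.cer

    (One mundane cause of an apparent hang: when -sv is used, makecert pops a private-key password dialog that can open behind other windows, looking exactly like a freeze. It's also worth clearing any leftover root.pvk/root.cer files before re-running, in case the snap-in deletion left the store and the files out of sync.)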

    Read the article

  • Is there a way to "burn" audio to an ISO? (as an audio CD)

    - by Sootah
    I have an audiobook that I've downloaded via their download manager, and it's loaded into the cutesy little audio program that they force you to use. I can play the book just fine using their proprietary software, and while it's annoying when using my PC, it's utterly UNBEARABLE when I try to listen to it on my BlackBerry. The program is INSANELY slow; it literally takes around 30 seconds to switch between tracks, so if I've forgotten where I am in the book, it takes me around 15 minutes to finally get back to where I was.

    I've looked everywhere on how to transcode the book to .MP3, but evidently with their current format it's extremely convoluted (and I have no desire to dick around with installing some older version of the codec, getting a different transcoding app, and then wrestling with getting it to actually work). Since I'm able to burn a copy of the book to an audio CD, I figure the best way to go about this is to just make the CDs and then rip them from those to .MP3.

    In order to avoid wasting two hours, not to mention 14 CD-Rs, I was wondering if there's a way to "burn" to an .ISO instead of an actual CD-R. I currently have SlySoft's Virtual CloneDrive installed, so I can mount .ISOs easily enough, but now I want to actually create an ISO via the CD burning process. Just in case I've not explained myself very well, here is an overview of what I intend to do:

    1. "Burn" a set of audio CD .ISOs from the audiobook (hopefully I can do this using Windows Media Player; otherwise I'll be forced to use the audiobook app)
    2. Mount an .ISO in Virtual CloneDrive
    3. Rip the audio tracks on the mounted .ISO to .MP3s
    4. Repeat steps 2-3 until the entire book is in .MP3 format
    5. Copy the .MP3s to my BlackBerry so that I'm not driven insane every time I want to listen to the book in the car, and so I can use Winamp when listening on my computer

    EDIT: I suppose a rather concise way to put it is that I need something that will emulate a CD-R drive, so that you can select it as the output drive in whatever app you're burning the audio CD from. (I'd suppose that when you "insert a blank CD-R", the app would then ask you what file to save to.)

    Read the article

  • Files not being copied to AFP volume when copying through the Finder

    - by cefstat
    I am trying to copy files from my MacBook's hard disk to my NAS. The latter is a ReadyNAS Duo and is mounted as an AFP volume. The files are about 5 MB each, and I copy them by selecting all the files that I need in a Finder window and then dropping them onto the destination directory.

    Almost always, some of the files do not get copied to the NAS. For example, if I select 200 files and then start the copying, everything looks normal at the beginning (while the copying takes place, the Finder window for the destination directory is updated to show 200 files, while it was empty before), but after the copying ends the destination directory shows fewer than 200 files (let's say 190). If I copy the same 200 files to the NAS again, without replacing already-copied files, the remaining 10 files are usually copied correctly. In a few cases, I have to repeat the process a third time.

    Notice that the Finder does not give any warning at any stage that some of the files have not been copied. I am wondering if this is a known problem with AFP and the Finder, and/or if there is something that I can do to solve it.

    Read the article

  • Issues with sustained traffic with PFSense

    - by Farseeker
    Last week we had to replace our pfSense firewall because it had a catastrophic hardware failure. All but one of the NICs were taken out of the old server and put into the new one. The one NIC that was not moved was the LAN NIC, as this is on-board. The other NICs are all WAN connections, and they must all be present (i.e. I can't disable one just for the sake of testing).

    After re-installing pfSense and restoring our backup of the configuration, everything came back online just fine. However, on the new hardware, any download that takes longer than about 10 seconds just times out in the middle. For example:

    - Downloading from microsoft.com goes at about 900 KB/s and times out after about 10 seconds (thus, just under 10 MB of content)
    - Downloading from cnet.com goes at about 300 KB/s and times out after about 10 seconds (thus, about 3 MB of content)

    By "times out", I mean that the download just stops, and you have to pause/resume to get the next part done; repeat and rinse until the download is complete. However, it's not consistent: sometimes it's 10 seconds, sometimes it's 4 seconds, and sometimes you can't even load a heavy HTML page because the page never finishes.

    I assume this is most likely because pfSense does not like the onboard NIC, as this is the primary difference between the two servers. It's recognised as NFE0, and there's no room in the server for any more NICs; I don't have any dual-port NICs handy to experiment with a different LAN connection. I've never had to troubleshoot this sort of issue before. Can anyone give me some pointers about where to start? Linux is not my forte, so please be kind!

    Read the article

  • Acronis Disk Director AFTER Clone Disk error: PXE-E61: Media test failure, check cable

    - by Kairan
    I used Acronis Disk Director on my desktop, plugged in the laptop's 240 GB SSD (USB) and the new 500 GB SSD (USB), and the Clone Disk operation seemed to go fine. I didn't see any error messages, but I didn't stare at it for 3 hours either. The clone included the Toshiba hidden restore partition, the primary partition (the C: drive), and the active (boot?) partition, and yes, I did check the box to copy the NT signature.

    The computer boots up fine most of the time, but it seems that when the computer goes to sleep (I believe it's sleep; it's hard to do much testing during school) or hibernates or reboots, it will sometimes display this message:

        Intel(R) Boot Agent GE v1.3.52
        Copyright (C) 1997-2010, Intel Corporation
        PXE-E61: Media test failure, check cable
        PXE-M0F: Exiting Intel Boot Agent
        Insert system disk in drive.
        Press any key when ready...

    Of course, pressing any key does nothing but repeat a similar message. However, if I press the power button on the laptop (a Toshiba Portege R705, Win 7 Pro 64-bit), it puts the computer into hibernation. After hibernating, I press the power button again and it comes out of hibernation without any of the odd messages or problems described above... so apparently that is my TEMP fix.

    Another recent issue I noticed: on occasion, when creating a new folder or modifying something in the system variables, or in other random areas, I will get the message "The stub received bad data"; I simply retry the task and it works. Perhaps these two issues are linked.

    Read the article

  • Can I disable chrome's auto translate function for visitors to a given page on my site?

    - by Drew Noakes
    I know how to disable this feature for pages that I visit, but what I'm looking for is a way to tell other users' Chrome browsers not to offer translation of a particular page on my site. Is there some kind of meta tag I can use? Alternatively, can I indicate that a particular element on the page should not be translated?

    Reason: The controls which slide down from the top of the page cause my page to resize, which changes the content, which makes the controls slide up, which resizes the page, which changes the content, which causes the controls to slide back in. Rinse and repeat. The page dances. The page itself is a map, and the content it wants to translate is all proper names that shouldn't be translated anyway. If alternate names exist in other languages, I provide them myself.

    Generally I'm against taking away features from the browser that users might like, but in this case it really makes sense. So please don't answer saying that I shouldn't be doing this; I've weighed up the options.
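
    (There is a documented hook for this; a short sketch, with the caveat that translator behavior has shifted between Chrome versions. Google documents both a page-level meta tag and a per-element class:)

        <!-- page level: ask Google/Chrome translate to skip the whole page -->
        <meta name="google" content="notranslate">

        <!-- element level: exclude just this node, e.g. a map label container -->
        <div class="notranslate">Place names the translator should leave alone</div>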

    Read the article

  • Possible DNS issue?

    - by durilai
    I am having an issue which I think stems from DNS. I have 2 servers. Server 1 is an AD server with DNS, which was automatically configured when installing AD. The second server is a web server that is part of the domain, but it does not host AD or any other role. I can remote desktop in from server 1 using the internal IP address, but when I attempt to connect from any other computer it fails, even though that computer can connect to server 1. I am able to ping both servers, as well as nslookup both using their FQDNs. I am also able to telnet to port 3389. Any help is appreciated.

    UPDATE: I do not think it is DNS anymore, but I am not sure what it is. The remote desktop connects and I get to the login prompt, but when I start to enter credentials it disconnects. I then am unable to reconnect. If I wait for about 10 minutes it will allow me to repeat, but with the same results. UGH!!!

    Read the article

  • DNS Resolution doesn't work after uninstalling Cisco VPN & Deterministic Network Enhancer in Win 7

    - by Craig M
    I just upgraded my home PC to Windows 7 Ultimate 32-bit. After trying various methods to get the Cisco VPN client to work, I gave up and decided to just run it in XP Mode. The last steps I tried were in this article: http://social.technet.microsoft.com/Forums/en-US/w7itproappcompat/thread/d880dfe5-7f44-4955-8620-2a9355d8ea8b/

    After that, I uninstalled the Cisco client and rebooted. I uninstalled the Deterministic Network Enhancer and rebooted again. Both uninstalled successfully, but now I'm not able to resolve any DNS. The only way I can resolve DNS is to reinstall the DNE, reboot, and uninstall the DNE. Then I am able to resolve DNS lookups until I reboot again. Once I've rebooted, no more DNS. Any ideas?

    Edit: I completely forgot I'd asked this question until harrymc posted his answer. I've since found out that to fix this problem, I need to disable my Local Area Connection and re-enable it. Once I do that, I have no trouble making network connections until the next time I reboot, at which point I repeat the process. It's annoying, but manageable, since I reboot very infrequently.
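
    (One hedged suggestion, since DNE works by inserting itself into the Winsock stack: a half-removed layered provider is a classic cause of name resolution dying after its uninstall, and rebuilding the Winsock catalog from an elevated prompt is the usual cleanup. Note this resets any other LSP-based software too:)

        netsh winsock reset
        netsh int ip reset
        shutdown /r /t 0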

    Read the article

  • Are Colocation Cross Connects Worthwhile

    - by SvrGuy
    We currently operate three clusters of colocated machines in different data centers. Recently, I became aware that our newest data center will offer to cross connect us to a bandwidth provider free of charge. In the past, I never really investigated a cross connect for bandwidth because I figured that the rates would be similar to what we are paying the colo now, and that it would reduce our resiliency (because we would only be using one or two carriers for IP, whereas the colo uses, say, 8 different providers). Then I saw an ad for Hurricane Electric internet services (http://he.net/cgi-bin/ip_transit_quote) that gave a price for IP transit at $1/Mbps, which is much better than the $30/Mbps we pay for blended bandwidth.

    What are people out there typically paying for bandwidth via cross connect, and how hard is it to set up? Is my understanding correct that what you do is open agreements with two or three ISPs, cross connect to them, and then configure your top-of-rack router on their network? Can you really get IP transit down to a couple of dollars per megabit per month just by doing the routing yourself? Or is my understanding of cross connection fundamentally wrong?
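
    (The arithmetic that makes this tempting, roughly: a 100 Mbps commit at the colo's blended $30/Mbps is about $3,000/month, while the same commit at $1-2/Mbps over a cross connect is $100-200/month; the gap pays for a BGP-capable router and a second transit provider for redundancy fairly quickly. Prices and commit levels here are illustrative, not quotes.)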

    Read the article

  • Automatically Kill/Restart Process(es) When Memory is Critically Low

    - by nemesisfixx
    I have a Debian Wheezy VPS box where I am running a couple of Django apps in production. Ideally, I would have addressed my current memory footprint issues by optimizing the apps, adding more RAM, or adding swap. But the problem is that I doubt there's much memory optimization I'd milk from optimizing the Django apps (the stack being open-source and robust), adding RAM is a cost constraint for me (this is a remote VPS), and the host doesn't offer options to use swap!

    So, in the meantime (as I wait to secure more resources to afford more RAM), I wish to mitigate the scenarios where the server runs out of memory, at which point I have to request a VPS restart (as in, at that point, I can't even SSH into the box!). What I would love in a solution is the ability to detect when a process (or, generally, total system memory usage) exceeds a critical amount (for now, say, when free RAM falls to 10%) - which I've noticed occurs after the VPS has been up for long, and when traffic suddenly spikes on some of the heavier apps (most are just staging apps anyway). So I wish to be able to kill/restart the offending process(es) - most likely Apache. Doing this manually in these situations has restored sane memory usage levels - a hint that possibly one or more of the Django apps has a memory leak. In brief:

    - Monitor overall system RAM usage
    - When free RAM falls below a given critical threshold (say, below 10%), kill/restart the offending process(es) - or, more simply, if we assume from my current log analysis (using linux-dash) that Apache is often the offender, kill/restart it
    - Rinse and repeat...
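
    (This is squarely what monit does out of the box on Debian; a minimal sketch, assuming Apache with the stock Debian pidfile, and with thresholds that are placeholders to tune. After apt-get install monit, something like this goes into /etc/monit/conf.d/:)

        # Restart Apache if its total memory (including children) stays too high
        check process apache2 with pidfile /var/run/apache2.pid
            start program = "/usr/sbin/service apache2 start"
            stop program  = "/usr/sbin/service apache2 stop"
            if totalmem > 400 MB for 2 cycles then restart

        # Alert (or swap in an exec action) when overall memory runs critically high
        check system myvps
            if memory usage > 90% for 2 cycles then alert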

    Read the article

  • Routing a single request through multiple nginx backend apps

    - by Jonathan Oliver
    I wanted to get an idea of whether anything like the following scenario is possible:

    1. Nginx handles a request and routes it to some kind of authentication application, where cookies and/or other kinds of security identifiers are interpreted and verified. The app perhaps makes a few additions to the request (appending authenticated headers). Failing authentication returns an HTTP 401.
    2. Nginx then takes the request and routes it through an authorization application, which determines, based upon identity, the HTTP verb (PUT, DELETE, GET, etc.), and the URL in question, whether the actor/agent/user has permission to perform the intended action. Perhaps the authorization application modifies the request somewhat by appending another header, for example. Failing authorization returns 403.
    3. (Wash, rinse, repeat the proxy pattern for any number of services that want to participate in the request in some fashion.)
    4. Finally, Nginx routes the request into the actual application code, where the request is inspected and the requested operations are executed according to the URL in question, and where the identity of the user can be captured and understood by the application by looking at the altered HTTP request.

    Ideally, Nginx could do this natively or with a plugin. Any ideas? The alternative I've considered is having Nginx hand off the initial request to the authentication application and then have this application proxy the request back through to Nginx (whether on the same box or another box). I know there are a number of application frameworks (Django, RoR, etc.) that can do a lot of this stuff "in process", but I was trying to make things a little more generic and self-contained, where different applications could "hook" the HTTP pipeline of Nginx and then participate in, short-circuit, or even modify the request accordingly. If Nginx can't do this, is anyone aware of other web servers that will perform in the manner described above?
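
    (For the narrow version of this, later nginx releases ship ngx_http_auth_request_module, which gates a request on a subrequest's status code: 2xx lets it through, 401/403 are returned to the client. It's a yes/no gate rather than a full request-rewriting pipeline, so it covers step 1 and part of step 2 above, but not arbitrary chained modification. A minimal sketch, assuming an nginx build with the module; upstream names and paths are placeholders:)

        location / {
            auth_request /_auth;                      # subrequest must return 2xx to proceed
            auth_request_set $auth_user $upstream_http_x_auth_user;
            proxy_set_header X-Auth-User $auth_user;  # pass identity on to the app
            proxy_pass http://app_backend;
        }

        location = /_auth {
            internal;
            proxy_pass http://auth_service;
            proxy_pass_request_body off;              # auth only needs headers/cookies
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }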

    Read the article
