Search Results

Search found 29426 results on 1178 pages for 'user99572 is fine'.

  • Word 2010 does not save as Word 2003 XML

    - by Peter
    I have a document which was created in Word 2010, but for use in a particular application it needs to be saved in Word 2003 XML format. When I try the normal "Save as" via the File menu (choosing Word 2003 XML as the format), Word 2010 thinks for a while and then presents the "Save as" dialog to me again, suggesting that I save the document as .docx. Trying to get around this, I saved the document as .doc (i.e. a Word 97-2003 document). This worked fine. But when I try to save this .doc file as Word 2003 XML, again Word 2010 thinks for a while and then presents the "Save as" dialog, this time suggesting that I save the document as .doc. Oh, and I should say that this only happens with one specific document - all others work fine. I know I should try a process of elimination and see what is causing the symptoms, but it would be nice to have an answer "in principle". Is there perhaps a setting somewhere that I have to enable? Does anyone know what's going on here?
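    One way to take the Save As dialog out of the loop is to drive Word over COM and request the target format explicitly; any failure then surfaces as an error instead of a silently re-opened dialog. A rough sketch in Python with the pywin32 package - the paths are placeholders, and the value 11 for the Word 2003 XML member of WdSaveFormat is an assumption worth double-checking in the VBA object browser:

      # Hypothetical repro script, not a guaranteed fix: convert one document to
      # Word 2003 XML via Word's COM automation interface (requires pywin32).
      import win32com.client

      wdFormatXML = 11  # assumed WdSaveFormat value for "Word 2003 XML Document"

      word = win32com.client.Dispatch("Word.Application")
      word.Visible = False
      try:
          doc = word.Documents.Open(r"C:\temp\problem-document.docx")  # placeholder path
          doc.SaveAs2(r"C:\temp\problem-document.xml", FileFormat=wdFormatXML)
          doc.Close()
      finally:
          word.Quit()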

    Read the article

  • How do I prevent my ASP .NET site from continually prompting for user credentials?

    - by gilles27
    I'm trying to get an ASP.NET website up and running on IIS6. The site will run in its own application pool, and uses Windows authentication, with anonymous access turned off. When I run the app pool under NETWORK SERVICE, everything works fine. However, we need the app pool to run under a different account, because this account needs some extra privileges (we are printing Word documents). This new account is a member of the local Users group and the IIS_WPG group. It has also been granted the "Log on as a service" right. When I browse to the site I am prompted for credentials, not once, but several times. When the page finally loads it looks wrong because the style sheets have not been applied. My suspicion is that I am being prompted once for each file (e.g. all the images, styles and script files) the browser requests, and that for some reason the website is unable to validate those credentials in order to serve the files back. If I allow anonymous access the page loads fine - we don't want to allow it, but I mention it in case it offers any further clues. My theory is that perhaps the account the app pool runs under needs permission to validate domain credentials? If that is so, how do I enable this?
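    One way to narrow down where the prompts come from is to look at the exchange from the client side: which authentication schemes the server offers, and whether it challenges on every request. A small sketch with the Python requests package - the URL is a placeholder for the real site:

      # Illustrative probe: hit the site anonymously and list the auth schemes
      # IIS offers (requires the requests package).
      import requests

      resp = requests.get("http://intranet-app/", allow_redirects=False)
      print("Status:", resp.status_code)  # 401 is expected with anonymous access off
      # Multiple WWW-Authenticate headers are folded into one comma-separated value.
      for scheme in resp.headers.get("WWW-Authenticate", "").split(","):
          if scheme.strip():
              print("Offered scheme:", scheme.strip())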

    Read the article

  • Intel Ethernet Bottlenecking Internet?

    - by Donald Darma
    I'm having trouble with my internet speeds. I just recently built a PC and everything is fine. I installed the Intel drivers and connected to the internet. It connects, but I'm only getting half the speed I should be. My normal speed is 20 Mbps, but speedtest.net is only showing 10. It can't be my ISP (which is TWC, if anyone is asking) because my other devices, like my laptop and my smartphone, are showing 20 down. Here's my system:
      CPU: i5 4430
      HSF: Stock cooler
      Mobo: Gigabyte Z87MX-D3H
      GPU: x2 MSI R7950-3GD5/OC BE
      RAM: Crucial Ballistix Tactical Tracer 8GB dual channel
      PSU: Silencer High Performance Power Supply 750 Watt 80+ (a subdivision of OCZ)
      HDD: Seagate Barracuda 7200RPM 3TB
      SSD: Samsung 840 Evo 120 GB
      Case: Corsair Obsidian 350D
    Edit: I am using the stock adapter that is on the motherboard. I know for a fact that the cable is good because I used it on my laptop and it ran fine. It's a CAT5E cable. I also ran iperf and it's giving me the same results, 10 Mbps.
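    One thing worth ruling out before blaming drivers is the negotiated link itself: a port that has come up at 10/100 Mbit or at half duplex will cap throughput no matter what the ISP delivers. A quick way to read what the adapter negotiated, sketched with the Python psutil package (interface names will differ on your machine):

      # Print negotiated speed and duplex for every adapter (requires psutil).
      import psutil

      for name, stats in psutil.net_if_stats().items():
          duplex = {psutil.NIC_DUPLEX_FULL: "full",
                    psutil.NIC_DUPLEX_HALF: "half",
                    psutil.NIC_DUPLEX_UNKNOWN: "unknown"}[stats.duplex]
          print(f"{name}: up={stats.isup} speed={stats.speed} Mbit/s "
                f"duplex={duplex} mtu={stats.mtu}")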

    Read the article

  • Windows Media Center showing Jerky Video on PC

    - by Kris Erickson
    I had to repave my Windows 7 x64 box last week due to a hard drive crash, and for a while everything was running perfectly, but now all videos in Windows Media Center are jerky (the sound is fine, they just seem to skip a ton of frames all the time). This happens on the local machine, and the same thing happens when I try to stream to my Xbox. The videos all show fine in VLC and Windows Media Player (but exhibit the same problem in QuickTime). I guess I must have installed something recently (in the process of getting back all the apps I usually have running on my PC) that caused this, but for the life of me I can't figure it out. I have updated to the latest video driver (and then rolled back to the standard Windows 7 driver), and I have rolled back all the other drivers that I have installed (I believe). I have uninstalled all the codec packs (I also run TVersity, so I had the TVersity codec pack installed), and I uninstalled TVersity. Nothing seems to help. I have uninstalled Windows Media Center and reinstalled it from Programs and Features. I have basically run out of things to try to fix this, and am almost thinking about reinstalling Windows again. Any suggestions?
    Edit: Specs on the PC (which I figured were unimportant, since everything used to work perfectly):
      Intel Core 2 CPU 6600 @ 2.4 GHz
      Nvidia GTS 8800
      Built-in Realtek audio
      4 GB RAM
    Codecs which are failing: all that I have tried, but at least Xvid, Mpgv (MPEG-2 video from a camera) and WMV (the only kinds I have ready access to).

    Read the article

  • Server not accepting uploads

    - by Tatu Ulmanen
    I'm having a strange problem with my VPS: I can download files from it, I can use PuTTY to connect to it, and everything behaves normally. But sometimes, when I try to upload a file to the server or save a file via SFTP, the connection inexplicably fails. I am using jEdit to edit files remotely via SFTP. When it works, it works fine. When it doesn't, I get an error message:
      Cannot save: java.io.IOException: inputstream is closed
      Cannot save: java.io.IOException: 4:
    I can see that a temporary save file (#file.php#save#) is created on the server with a filesize of 0. So the connection works, but when it comes to sending the actual data, something fails. The same thing happens with WinSCP, but the error is different:
      Copying file fatally failed. Copying files to remote side failed.
    And I can always browse the server with PuTTY without a problem. I see nothing abnormal in any log files. Auth.log shows this when I try to save:
      sshd[32638]: Accepted password for - from - port 62272 ssh2
      sshd[32638]: pam_unix(sshd:session): session opened for user - by (uid=0)
      sshd[32640]: subsystem request for sftp
      sshd[32638]: pam_unix(sshd:session): session closed for user -
    When I wait for a while (say, an hour), everything works fine again. It can't be a temporary ban, as I am still allowed to connect to the server, right? I know this may not be enough info to solve the problem, but I am grateful for any clues or bits of information that might help me. What are the possible causes for this kind of behaviour, what log files can I check for clues, etc.? I'm running out of ideas!
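    To get more than the bare IOException out of a failing upload, it can help to reproduce a single transfer outside jEdit/WinSCP with protocol logging switched on, so the log shows whether the session dies during authentication, channel setup or the data write. A minimal sketch using the Python paramiko library - host, credentials and paths are placeholders:

      # One scripted SFTP upload with a full protocol trace written to disk.
      import paramiko

      paramiko.util.log_to_file("sftp-debug.log")  # low-level SSH/SFTP trace

      client = paramiko.SSHClient()
      client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
      client.connect("vps.example.com", username="user", password="secret")

      sftp = client.open_sftp()
      sftp.put("local-test.php", "/tmp/test-upload.php")  # try a non-trivial file size
      sftp.close()
      client.close()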

    Read the article

  • CPU temperatures high on new build after gaming

    - by Reznor
    My friend had a problem with his computer a while back. His games were crashing, even within the menus. He was stumped as to what the problem was, so I posted on here requesting help. He found out a day later, when his computer would start up but wouldn't display anything on the screen. His video card must have come screwed up. So, he got a replacement. Now there's a new problem. His temperatures, which were acceptable before, are now insanely high. His GPU temperature runs 70-80 C, which is understandable considering he's running his games maxed out, but the real problem here is his processor and motherboard temperatures. All four of his cores are running at 88-90 C after coming out of a game. His motherboard temperature was also 70 C at one point. In terms of cooling, his case should definitely be adequate. He has an Antec Twelve Hundred. He's using stock fans. The cable management in his case is very good; better than average. He's using the stock heatsink with the processor too, but note, it was fine before the replacement, so it isn't like there's some inherent problem. He has checked the case too. Everything's fine! No cables in the way. The heatsink is seated properly. He turned his case fans up to high as well, but the temperatures persist. Could the processor be overheating due to running games maxed out? Any ideas?

    Read the article

  • Apache proxy: Why is one vhost returning Forbidden while the other one works?

    - by Stefan Majewsky
    I have a Java application that needs to talk to another intranet website using HTTPS in both directions. After fighting with Java's SSL implementations for some time, I gave up on that, and have now set up an Apache that's supposed to act as a bidirectional reverse proxy:
      external app ---(HTTPS request)---> Apache ---(local HTTP request)---> Java app
    This direction works just fine, however the other direction does not:
      Java app ---(local HTTP request)---> Apache ---(HTTPS request)---> external app
    This is the configuration for the vhost implementing the second proxy:
      Listen 127.0.0.1:8081
      <VirtualHost appgateway:8081>
          ServerName appgateway.local

          SSLProxyEngine on
          ProxyPass / https://externalapp.corp:443/
          ProxyPassReverse / https://externalapp.corp:443/
          ProxyRequests Off
          AllowEncodedSlashes On

          # we do not need to apply any more restrictions here, because we listened on
          # local connections only in the first place (see the Listen directive above)
          <Proxy https://externalapp.corp:443/*>
              Order deny,allow
              Allow from all
          </Proxy>
      </VirtualHost>
    A curl http://127.0.0.1:8081/ should serve the equivalent of https://externalapp.corp, but instead results in 403 Forbidden, with the following message in the Apache error log:
      [Wed Jun 04 08:57:19 2014] [error] [client 127.0.0.1] Directory index forbidden by Options directive: /srv/www/htdocs/
    This message completely puzzles me: Yes, I have not set up any permissions on the DocumentRoot of this vhost, but everything works fine for the other proxy direction where I haven't. For reference, here's the other vhost:
      Listen this_vm_hostname:443
      <VirtualHost javaapp:443>
          ServerName javaapp.corp

          SSLEngine on
          SSLProxyEngine on
          # not shown: SSLCipherSuite, SSLCertificateFile, SSLCertificateKeyFile
          SSLOptions +StdEnvVars

          ProxyPass / http://localhost:8080/
          ProxyPassReverse / http://localhost:8080/
          ProxyRequests Off
          AllowEncodedSlashes On

          # Local reverse proxy authorization override
          <Proxy http://localhost:8080/*>
              Order deny,allow
              Allow from all
          </Proxy>
      </VirtualHost>
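    Since the error message names the default DocumentRoot (/srv/www/htdocs/) rather than anything belonging to the appgateway vhost, one thing worth checking is whether the request on 127.0.0.1:8081 is being served by that vhost at all or is falling through to the default server. A quick comparison, sketched with the Python requests package and run on the Apache host itself:

      # Compare responses with and without the vhost's ServerName as Host header.
      import requests

      for host_header in (None, "appgateway.local"):
          headers = {"Host": host_header} if host_header else {}
          r = requests.get("http://127.0.0.1:8081/", headers=headers,
                           allow_redirects=False)
          print(f"Host={host_header!r}: status={r.status_code} "
                f"server={r.headers.get('Server')}")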

    Read the article

  • Unusable network, packet losses between router and NIC

    - by KáGé
    I have this setup:
      Gigabyte P35-DS3P motherboard
      Asus NX1101 PCI network card (the one on the motherboard got fried a few years ago by a power surge)
      Asus RT-N16 router
      Windows 7 x64
    I think the other specs are irrelevant here, but I'll post them if you say so. Until a week ago everything was fine, but then my network became unusable: websites start loading but time out before anything comes through (true for the web interface of the router as well), I can't reach the computer from my notebook, and Windows' ping utility measures a ~50% packet loss between the computer and the router. Pinging localhost is good. The router works completely fine when wired to my notebook. I also tested different ports on the router, different cables, a different router and connecting directly to the modem, but it's still the same. Sometimes it works for a few minutes right after turning on the machine, but then it becomes crap again; mostly it's useless from the start. I've tried updating the firmware on the router, updating the driver for the network card (after which I started getting BSoDs every 15 minutes), reinstalling Windows and swapping to Fedora 15, but none of them changed anything. Does this mean that the network card is dying, or could it be something else? If it's the card, what model do you recommend as a replacement? (Could be PCI or PCI-E x1.) Thanks for your help.
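    Before swapping hardware it can help to quantify the loss a little more: a longer ping against the router with a few different payload sizes sometimes separates a failing NIC or cable (loss grows with packet size) from a driver or offload problem. A rough sketch for the Windows side - the gateway address is a placeholder:

      # Run a 50-packet ping at several payload sizes and print the loss summary
      # (Windows ping syntax; use -c/-s instead of -n/-l on Fedora).
      import subprocess

      gateway = "192.168.1.1"  # placeholder for the RT-N16's address
      for size in (32, 512, 1472):
          out = subprocess.run(["ping", "-n", "50", "-l", str(size), gateway],
                               capture_output=True, text=True).stdout
          summary = [line for line in out.splitlines() if "Lost" in line]
          print(f"payload {size}:", summary[-1].strip() if summary else "no summary")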

    Read the article

  • Slow connection to Linux MySQL from Windows only (XAMPP)

    - by Josh
    I'm having a problem with a PHP project (using the Kohana 3.2 framework) on my Windows 7 64-bit machine connecting to the database. The development database is stored on an Ubuntu Linux server on the local network. Other development machines running OS X and Linux are connecting fine. There are no other Windows development machines to test with. I can access MySQL fine using MySQL Workbench, and other projects (which I believe to be less database heavy) run mostly OK, only occasionally getting timeout messages. I'm constantly getting "Maximum execution time of 30 seconds exceeded" when functions such as mysql_query() are run in this particular project. Specifically, the Kohana file where the timeout occurs is MODPATH\database\classes\kohana\database\mysql.php [ 186 ]. My local set-up is:
      Windows 7 Professional 64bit
      XAMPP 1.7.7 (PHP 5.3.8)
    The output of uname -a on the Linux server is:
      Linux peach 2.6.38-11-server #50-Ubuntu SMP Mon Sep 12 21:34:27 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
    I've tried the following, with no success:
      Disabling Windows firewall
      Switching between using a persistent and a normal connection
      In my.cnf, adding skip-name-resolve
      Increasing wait_timeout
      Enabling bind-address
    I've run out of ideas now, and have no idea how to debug an odd issue like this. Has anyone come across this before, or have any idea how I could find the root of the issue, or what might be the problem?
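    Given that only the Windows machine is affected and skip-name-resolve has already been tried, it may be worth timing the two halves of a connection separately from that machine: the name lookup and the raw TCP connect to port 3306. If either stalls for tens of seconds, the problem sits below PHP and Kohana. A small standard-library sketch - "dbserver" is a placeholder for the MySQL host's name:

      # Time DNS resolution and a bare TCP connect to MySQL from the Windows box.
      import socket
      import time

      host = "dbserver"  # placeholder

      t0 = time.time()
      ip = socket.gethostbyname(host)                 # forward lookup from Windows
      t1 = time.time()
      with socket.create_connection((ip, 3306), timeout=35):
          pass                                        # TCP handshake only, no login
      t2 = time.time()

      print(f"DNS lookup: {t1 - t0:.2f}s ({host} -> {ip})")
      print(f"TCP connect to 3306: {t2 - t1:.2f}s")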

    Read the article

  • Linux usb disk just creates an sg device

    - by MTilsted
    I have a Corsair R60 SSD, which is a disk with both SATA and USB connectors. But the USB side seems to be a bit non-standard, or maybe it's just my Fedora Linux. When I insert the disk using a USB cable into a running Fedora 14 Linux system, a device called /dev/sg3 is added, but that is all. No new /dev/sd* device is created, so I can't mount the disk. If I look at cat /proc/scsi/sg/device_strs I get
      ATA       Hitachi HTS54321  FB2O
      HL-DT-ST  DVDRAM GSA-T50N   RP05
      Seagate   Desktop           0130
      Corsair   CSSD-R60GB2
    So the disk is there (the last entry), but my Linux for some reason will not see it as a USB hard disk. When I insert other USB disks they work fine. It is only this specific disk which causes problems. I have tried on 3 different computers with the same result. A hint to the problem may be that if I attach the disk to a Windows system (with USB), the disk is called "a fixed disk" and not a portable disk as expected. The disk works fine with Linux if I connect it with the SATA cable, but I would really like to have it working with USB too (to mount it on computers without SATA).

    Read the article

  • Windows Server 2008 - RAID 5 Fails on Reboot

    - by Adam
    Hey, I've got an install of Windows Server 2008 Enterprise. It's running software RAID-5 with five disks. The disks were originally formatted under Windows Server 2003, but came up fine once I installed Windows Server 2008. The issue I'm having is that every time I reboot the server, the RAID comes up with a "Failed Redundancy" - the data stays available. I have 4 disks on a PCI SATA controller, and one of the disks connected to the motherboard's on-board SATA ports. (The other on-board port has the system disk connected.) I was having disk #4 fail consistently, so I tried swapping the cables on the controller end: I swapped the on-board RAID disk with one on the PCI controller. Same issue now, except with disk #1. Once the system's up, I can reactivate the RAID; it will resync for a while, then go to "Healthy", and will stay that way for an indefinite amount of time - until I reboot. As soon as I reboot, the disk drops again. I've ruled out disk + cable with the recabling. I don't believe it would be the controller, as it seems to work fine most of the time - only failing on reboot - and the other port on the same controller connects the system disk, which is clearly working. I did look in the event log, but didn't see anything particularly relevant (although I didn't know what I was looking for - just looked for anything with a "Warning" or "Error" symbol that looked disk-related :)). I'm not particularly familiar with RAID on Windows; does anyone have any idea why this might be doing this? Any idea how to fix it? Any suggestions appreciated! -- Adam

    Read the article

  • Exchange 2007 with Android activesync

    - by lbanz
    A few of our users noticed that ActiveSync will stop working intermittently for them. I didn't believe it at first, until I changed my Android phone and it started occurring for me. It will just stop syncing completely; it looks like the server is blocking the device entirely. This mainly occurs when they are using wifi. I've done some testing. If I switch off the wifi and use the phone data plan, it works fine. When it's on the wifi network, I try to browse to the webmail/OWA page and it says page not found! I did a DNS lookup and the names resolve correctly. If I use another device on the same wifi network, it can access the Exchange servers fine. Sometimes the wifi network will just work without any issues. But when it fails, it looks like the phone checks the server every second to see if it is online, even though I've got it on manual sync. I was wondering whether it tries to sync too many times and Exchange thinks it's a denial-of-service attack. My old Android phone that works is on Froyo and the new one is on Ice Cream Sandwich. People who have reported issues seem to have newer phones. They also tested their own wifi networks at home and experience the same problem. We haven't patched our Exchange recently before seeing this problem. Has anyone seen this issue?

    Read the article

  • Device CAL, User Cal or Processor license needed for SQL 2008 (architecture explained inside)?

    - by nycgags
    So we have a number of servers in the Amazon cloud running SQL Server Standard edition to aggregate data. For that purpose we are fine; the licensing is handled by our contract with Amazon, no problem there. For the beefier work, we want to install Enterprise Edition (EE) on the servers processing raw data so that we can take advantage of table partitioning. We currently have 3 servers aggregating data from about 40 node servers; all 43 of these servers are running Standard edition, which is fine. We also have 4 servers running Standard processing the raw data, but I think we can get away with 2 (for redundancy) running Enterprise Edition. We have 2-3 DBAs that access these DW servers for maintenance (using the same Windows login via remote desktop). So visually:
      40    -- 3           -- [2]                           -- 2           -- 1
      nodes -- aggregators -- raw (which we want to run EE)  -- calculators -- datawarehouse
    Nodes PUSH to aggregators, Raws PULL from aggregators, Calculators PULL from Raw, Calculators PUSH to datawarehouse. I am specifying the push vs. pull in case that changes how the number of licenses is calculated.
      Q1) How many device (or user) CALs do we need?
      Q2) Do I need to speak with someone from MSFT to find out if it is OK to install in the Amazon cloud (Amazon said we need to verify it is OK in our license terms)?
      Q3) What happens if another device tries to access a server with the limited number of device CALs?
      Q4) Are the device CALs a simultaneous number of devices or a total?
      Q5) Do Device and User CALs cost the same, or is there a difference?
      Q6) Would we need to buy a processor license (we are hoping not to)?

    Read the article

  • Windows shutting down, CPU maxing out in Windows-7 32-bit? [closed]

    - by Vivek Sharma
    I have no idea what is happening to my laptop. The last 5-6 times it has shut down automatically while running, without my doing anything serious. I even get the message during boot which offers "Start Windows Normally" or "Start in Safe Mode"; it just boots up normally. A few days ago my screen blinked and the PC shut down immediately. After that I was not able to get it started. I got it repaired; the guy said he replaced the display chip (nVidia NVS 140), but I seriously doubt that. Now it has started working, but it shuts down every 20 minutes or so. I have a dual boot, and Ubuntu 11.10 works just fine. A virus scan on Windows shows nothing. I am pasting my perfmon output. My CPU for some reason is maxing out at 100% continuously, on Windows internal processes. Please have a look at the attached file. Strangely, for the last 2 hours it has been working fine, but I am just writing emails and reading Excel files.
      ThinkPad T61 | T9300 | 3 GB | nVidia NVS 140 Quadro (latest driver 296.10)
    What do you suggest?
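    To put names to the "Windows internal processes" that perfmon shows pegging the CPU, it can help to sample per-process CPU for a few seconds and list the top consumers; if the same process tops the list right before a shutdown, that is a useful clue. A small sketch using the Python psutil package:

      # Sample CPU usage for ~3 seconds and print the ten busiest processes.
      import time
      import psutil

      procs = list(psutil.process_iter(["name"]))
      for p in procs:
          try:
              p.cpu_percent(None)          # prime the per-process counters
          except psutil.Error:
              pass
      time.sleep(3)

      usage = []
      for p in procs:
          try:
              usage.append((p.cpu_percent(None), p.pid, p.info["name"]))
          except psutil.Error:
              pass

      for cpu, pid, name in sorted(usage, reverse=True)[:10]:
          print(f"{cpu:6.1f}%  pid={pid:<6} {name}")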

    Read the article

  • Windows 7 can't find Ubuntu computer by hostname

    - by endolith
    I got a new Windows 7 machine, and was using VNC, SSH, etc. to connect to my Ubuntu machine; it previously worked fine connecting to the Ubuntu computer's hostname. Now it doesn't work if I use the machine's hostname, but it does if I use the local IP or the DynDNS name. I can also access it from my Android phone over SSH using the local hostname. If I try to connect with SSH to the hostname, it says "Host does not exist". VNC says "Failed to get server address". NX says "no address associated with name", and I don't see it in Windows' "Network" folder. I've rebooted everything. I've turned off Windows Firewall. It was working fine a few days ago, but now it's not. How do I figure out what's blocking it? Aha: it probably has something to do with Samba. I reset the Samba configuration the other day, and apparently this can affect it. http://ubuntu-virginia.ubuntuforums.org/showthread.php?t=1558925 I tried commenting out "encrypt passwords = No" as described there, but it still doesn't work.
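    Since the IP and the DynDNS name still work, the failure is specific to resolving the flat hostname on the Windows side, which typically falls back to NetBIOS - the part Samba's nmbd answers. A tiny check to run on the Windows machine, with the name and IP as placeholders:

      # See which forms of the name the OS resolver can still resolve.
      import socket

      for name in ("ubuntu-box", "ubuntu-box.local", "192.168.1.50"):  # placeholders
          try:
              print(f"{name:20} -> {socket.gethostbyname(name)}")
          except socket.gaierror as exc:
              print(f"{name:20} -> FAILED ({exc})")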

    Read the article

  • vSphere Promiscuous mode only receiving packets one way from network switch

    - by steve.lippert
    We have two network switches: a PoE switch (SwitchA) to power our phones and users' computers, and a non-PoE switch (SwitchB) for the rest of the network. Each switch is set up to do port mirroring to support our VoIP recording system. SwitchA does port mirroring on specific ports if we need to record a user. SwitchB mirrors one port to monitor our work-at-home users (Internet comes in from the managed router, to the switch, back out to our firewall). These two port-mirroring setups feed into one VMware vSphere 4.1 server, which has four physical NICs in total. The other two NICs feed into an unmanaged switch for connecting to the rest of the network. Once into the vSphere server, all network ports go into a vSwitch, and then one of the servers (Windows 2008 R2) sniffs them and does its thing. Everything is working fine and dandy from SwitchB. But from SwitchA we only receive one side of the VoIP packets (going out to the phone, nothing coming in from the phone). Troubleshooting steps I have taken so far: I hooked up my laptop to the monitor port on SwitchB and I see both sides of the packets. I swapped which network interface is plugged into the monitor port on SwitchA. Because everything feeds into one vSwitch / vNetwork, and both sides of the conversation arrive just fine from SwitchB, I believe everything is configured correctly on the vSphere server/guest. What could be causing one-way packets to arrive on my guest machine from only one interface, but not the other? Could a bad cable be causing the problems from SwitchB?

    Read the article

  • Application Pool Identity corruption

    - by Gavin Osborn
    I have observed a few times while deploying software into IIS that every now and again the related application pools fail to restart, and in the Event Log we see an error like the following:
      The identity of application pool, 'AppPoolName' is invalid. If it remains invalid when the first request for the application pool is processed, the application pool will be disabled.
    This does not happen frequently, but when it does the only solution is to re-apply the Identity password in the IIS Manager window. As soon as I re-apply it and then restart the application pool, the web sites come back up. Facts:
      The account is a service account whose password never expires.
      The account is local to the IIS host.
      The account password is never changed.
      This is IIS 6 running on Windows Server 2003.
      Deployment of the software is via MSI and involves several IIS resets.
      The software is created in house and does not do anything fancy to IIS.
    Any ideas how the identity information might become corrupt?
    Edit: Clarification. To be clear - this user account and password combination works absolutely fine and usually works fine as the Identity of the Application Pool. It is only when we deploy updates of our software into an existing IIS application that it stops working.
      Our password has not changed.
      Our deployment does not change the password or reconfigure the application pools.
      This does not happen every time, 1/20 times perhaps.
    If we re-enter the password into IIS and restart the App Pools, everything works.

    Read the article

  • Nvidia GTX 660m crashes games

    - by dcap
    I just recently bought a Lenovo Y580 with both Intel HD graphics and an Nvidia GTX 660M. It works great except for one thing: playing games. Every time I load a game, either through Steam or Games for Windows Live, the game ends up crashing. I've already talked with Lenovo tech support and they couldn't help, other than sending my new laptop in for repair, which would take 7 days. So before I do that I thought I'd ask around. These are the games I've tested and what happens when they load:
      Civilization V: The game loads fine, but once I get into the game there's noticeable "tearing" and certain things flash. Within a minute of this, the game crashes. It does the same thing regardless of whether Vsync is on or off.
      Total War: Shogun 2: The game gets to the menu screen. The background of the menu screen shows what is expected - a slideshow of in-game environments rendered on high settings. However, within 2 seconds of the menu loading, it crashes.
      Age of Empires 3 (non-Steam): This game is several years old, so it should work on this brand-new laptop fine. However, the results are similar to Civilization V: noticeable "tearing", and after a few seconds it'll freeze/crash.
    I've done tests on all these games with both the latest stable Nvidia driver (285) and the nightly build (307). In addition, the Nvidia control panel is set to use the dedicated graphics card for all programs. So is there anything I can do to fix this, or will I have to send it back for a week to tech support?

    Read the article

  • Can't start Windows 7 after cloning HDD

    - by Paul
    Brief description: cloned HDD1 to HDD2.
      HDD1 partition 1 boots
      HDD1 partition 2 boots
      HDD2 partition 1 boots
      HDD2 partition 2 doesn't boot Windows, but is bootable in general
    Now verbosely: in all cases the computer is the same. I have two Windows 7 installations on HDD1 - both boot fine. I choose between them using the standard Windows 7 boot loader menu. Technically there are 4 partitions: a 100 MB boot loader partition (active), Windows 7 copy 1 (25 GB), Windows 7 copy 2 (150 GB) and a working partition. All are primary. In the past few days I cloned the whole of HDD1 to HDD2, which is the same size (but 2.5-inch form factor), as-is, using MiniTool Partition Wizard. Everything has been copied, all files are accessible, there are no faults in the file system structure, and even the boot loader wasn't damaged, so I didn't have to repair it. But I can boot only the first installation of Windows 7 (it boots without issues). When I choose the second installation, I immediately get a completely black screen without any text, cursor or other data. The HDD isn't accessed after that. This black screen is sensitive to Ctrl-Alt-Delete, which reboots the computer. I did some experimenting: I installed Windows 7 to that partition - it booted fine. Then I renamed "Windows" to "Windows.old" and copied the Windows directory from HDD1 as it was, using Far Manager, and got the same trouble - a black screen. (Of course I performed the renaming and copying from the other copy of Windows.) So it seems that the problem is inside this installation of Windows, somewhere in its files.

    Read the article

  • Host Name Resolution - ISA 2006 - VPN PPTP

    - by Brian Lee Jackson
    We are running an ISA 2006 server and the PPTP VPN connection works fine. Clients are able to connect to the internet and access Outlook, CRM, etc. The problem we are encountering is that host name resolution is not working. For example, when connected via VPN I can't ping any box other than the VPN server by its host name. Nslookup also fails. I can ping everything fine via IP address. But the clients need to be able to access their "mapped" drives over the VPN, which are all mapped by host name. I recently took over this position, and it sounds like this used to work. What would be the best place to check first? I haven't had much exposure to ISA and have been reading up a bit on installation procedures, etc. DNS is hosted and running on our domain controller, as is WINS; it isn't on the ISA box. Is there a firewall policy that perhaps got removed? What is usually required for host name resolution to pass through? Any help would be appreciated, thanks!
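    One quick check from a VPN-connected client is to query the internal DNS server directly by IP, bypassing whatever DNS the PPP adapter was handed. If that works while normal lookups fail, the clients simply aren't being given (or allowed to reach) the right DNS/WINS servers through ISA. A sketch with the Python dnspython package - the server IP and host name are placeholders:

      # Ask the internal DNS server directly (dnspython 2.x; older versions use
      # query() instead of resolve()).
      import dns.resolver

      internal_dns = "10.0.0.10"  # placeholder for the domain controller's IP
      resolver = dns.resolver.Resolver(configure=False)
      resolver.nameservers = [internal_dns]
      resolver.lifetime = 5

      try:
          answer = resolver.resolve("fileserver.corp.local", "A")  # placeholder name
          print("Internal DNS answered:", [r.address for r in answer])
      except Exception as exc:
          print("Query to", internal_dns, "failed:", exc)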

    Read the article

  • Video card not detected in POST on initial boot.

    - by Jeff M
    I have a minor problem with my desktop computer after cleaning the dust out of it. When I first boot up the computer, the video card does not get detected, so I can't see anything. In POST, I get the "can't detect video card" beeps. The boot sequence continues normally, just without video. However, if I restart it (using the restart button) any time after POST, it boots up normally. I have no reason to think that the motherboard, video card or PSU got damaged in the process. It was working fine before, and works fine after resetting; I took all the necessary precautions while cleaning. On the initial boot, I can hear the video card's fan power up, but it immediately powers down and tries again one more time, only to fail. After the beeps, resetting gets everything running and sounding normally. I've reseated the card a couple of times and reset the BIOS, but that doesn't seem to help. I'm hoping I won't have to take everything out and reinstall it all again. Does anyone recognize these symptoms well enough to know what the problem is? My guess is that the video card isn't getting enough juice initially to run stably enough to be detected. I just don't know what I did (or didn't do) to put it in this state. It's not a high-priority thing for me at the moment - it just means I always have to reset it after turning it on - but I will eventually remove everything and reinstall if it comes to that. I don't think the specs are relevant here, but just in case, here's the relevant stuff:
      Motherboard: Gigabyte P35-DS3P
      Video: EVGA GeForce 8600 GTS
      PSU: Antec True Power Trio 650W
      Built ~2 years ago, still running well

    Read the article

  • bind9 - forwarders are not working

    - by Sarp Kaya
    I am experiencing an issue with BIND. If I want to resolve any domain name that is in the zone file, it works fine. However, when I try to resolve anything that does not belong to the zone file, it fails. I know that the actual DNS servers being forwarded to are working fine, but somehow bind9 fails to use them. The content of /etc/bind/named.conf.options is:
      options {
          directory "/var/cache/bind";
          forwarders {
              131.181.127.32;
              131.181.59.48;
          };
          dnssec-validation auto;
          auth-nxdomain no;    # conform to RFC1035
          listen-on-v6 { any; };
      };
    I have also tried using only one IP address, and it still did not work. The content of /etc/bind/named.conf is:
      include "/etc/bind/named.conf.options";
      include "/etc/bind/named.conf.local";
      include "/etc/bind/named.conf.default-zones";
    So there is no problem with including the options file. Any recommendations for fixing this problem?
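    To confirm it really is bind rather than the path to the forwarders, one check is to send the same query to a forwarder directly and then through the local bind instance; if the forwarder answers but 127.0.0.1 times out or returns SERVFAIL, the failure is in bind's forwarding (or possibly in DNSSEC validation of the forwarded answers). A sketch with the Python dnspython package, using a generic test name:

      # Compare a direct query to a forwarder with the same query via local bind
      # (dnspython 2.x; older versions use query() instead of resolve()).
      import dns.resolver

      def ask(server, name="example.com"):
          r = dns.resolver.Resolver(configure=False)
          r.nameservers = [server]
          r.lifetime = 5
          try:
              return [a.address for a in r.resolve(name, "A")]
          except Exception as exc:
              return f"failed: {exc}"

      print("forwarder 131.181.127.32:", ask("131.181.127.32"))
      print("local bind 127.0.0.1   :", ask("127.0.0.1"))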

    Read the article

  • 403 Forbidden for cgi-bin/ and cannot protect site with password

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 Forbidden error for cgi-bin/. I have created a new /var/www2/; I can access it fine, and PHP runs fine. The second problem is that I cannot password-protect it. I first tried htpasswd: it asks for a login, but every time I log in it keeps asking again. It's getting frustrating; I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available. httpd.conf is empty, but I have apache2.conf. Code:
      NameVirtualHost 12.12.12.12.
      <VirtualHost 12.12.12.12>
          ServerAdmin webmaster@localhost
          DocumentRoot /var/www2/
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www2/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /var/www2/cgi-bin/
          <Directory "/var/www2/cgi-bin/">
              AllowOverride Options
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              AddHandler cgi-script cgi pl
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog /var/log/apache2/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
          ServerSignature On
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>
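    For the password-protection loop, it can help to separate "Apache rejects the credentials" from "the browser keeps re-prompting for some other reason" by sending the same Basic-auth credentials from a script and checking the status code. A sketch with the Python requests package - URL, user and password are placeholders for whatever the htpasswd file is meant to protect:

      # 200 means Apache accepted the credentials; 401 means it rejected them.
      import requests
      from requests.auth import HTTPBasicAuth

      r = requests.get("http://12.12.12.12/protected/",             # placeholder URL
                       auth=HTTPBasicAuth("testuser", "testpass"),  # placeholder creds
                       allow_redirects=False)
      print(r.status_code)
      print(r.headers.get("WWW-Authenticate"))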

    Read the article

  • Can't mount Linux USB disk. It just creates a /dev/sg device but no /dev/sd

    - by MTilsted
    I have a Corsair R60 SSD, which is a disk with both SATA and USB connectors. But the USB side seems to be a bit non-standard, or maybe it's just my Fedora Linux. When I insert the disk using a USB cable into a running Fedora 14 Linux system, a device called /dev/sg3 is added, but that is all. No new /dev/sd* device is created, so I can't mount the disk. If I look at cat /proc/scsi/sg/device_strs I get
      ATA       Hitachi HTS54321  FB2O
      HL-DT-ST  DVDRAM GSA-T50N   RP05
      Seagate   Desktop           0130
      Corsair   CSSD-R60GB2
    So the disk is there (the last entry), but my Linux for some reason will not see it as a USB hard disk. When I insert other USB disks they work fine. It is only this specific disk which causes problems. I have tried on 3 different computers with the same result. A hint to the problem may be that if I attach the disk to a Windows system (with USB), the disk is called "a fixed disk" and not a portable disk as expected. The disk works fine with Linux if I connect it with the SATA cable, but I would really like to have it working with USB too (to mount it on computers without SATA).
    Added: I did try to mount /dev/sg3, but mount says it's not a block device (file says it's a character special device).
    Added output from dmesg:
      [ 97.454073] usb 7-1: USB disconnect, address 2
      [ 105.913055] hub 2-0:1.0: unable to enumerate USB device on port 3
      [ 107.048054] usb 2-3: new high speed USB device using ehci_hcd and address 5
      [ 107.162900] usb 2-3: New USB device found, idVendor=1b1c, idProduct=1ab8
      [ 107.162903] usb 2-3: New USB device strings: Mfr=1, Product=2, SerialNumber=5
      [ 107.162906] usb 2-3: Product: CSSD-R60GB2
      [ 107.162908] usb 2-3: Manufacturer: Corsair
      [ 107.162910] usb 2-3: SerialNumber: 10111441000000990069
      [ 107.167651] scsi7 : usb-storage 2-3:1.0
      [ 108.195543] scsi 7:0:0:0: Direct-Access     Corsair  CSSD-R60GB2      PQ: 1 ANSI: 0
      [ 108.197732] scsi 7:0:0:0: Attached scsi generic sg3 type 0
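    While digging further, the kernel's view of that sg device can be read straight from sysfs (the path assumes the device still shows up as sg3, as in the dmesg output above); comparing its attributes with a USB stick that does get a /dev/sd* node shows where the classification differs. A small Python sketch:

      # Dump a few sysfs attributes for the sg3 device.
      from pathlib import Path

      base = Path("/sys/class/scsi_generic/sg3/device")
      for attr in ("type", "vendor", "model", "rev", "state"):
          f = base / attr
          if f.exists():
              print(f"{attr:6}: {f.read_text().strip()}")
      # type 0 is "Direct-Access"; if no /dev/sd* node ever appears, the sd
      # driver has not bound to this device.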

    Read the article
