Search Results

Search found 10695 results on 428 pages for 'some none'.


  • Blank displays after inactivity, mouse cursor not showing up

    - by Mike Christiansen
    I have a Windows 7 Enterprise x86 machine that is exhibiting some strange problems. After a period of inactivity (how long it takes varies), when I come back to my computer, both of my monitors are black. The monitors are on, but the display is black, and moving the mouse and pressing keys on the keyboard do nothing. This happens at least once a day.

    When I first encountered the issue, I performed a hard shutdown and restarted the computer. About 10% of the time when I did this, the resolutions on my monitors were messed up. The native resolution on the primary monitor is 1600x900, and on the secondary it is 1440x900, but none of the widescreen resolutions would be present in the display properties - only 800x600, 1024x768, and 1280x1024. As a workaround, I've found that if I plug the secondary monitor into the primary port and leave the other port empty, Windows starts in 1024x768, and the "Detect" feature then finds the 1440x900 resolution and sets it. Then I connect the primary monitor to the secondary port and set it as primary, swap the two cables so each monitor is back in its own port, and use "Detect" again, which finds the correct resolutions.

    Some time after working out the above, I discovered that when the screens are blanked, I can simply press Ctrl+Alt+Delete, type in my password, and everything comes up fine - except my mouse cursor. All applications and features work as intended, but there is no mouse cursor (the mouse itself actually works). I have the "Show location of pointer when I press CTRL key" option selected, so I can press CTRL and use the mouse as normal, but I have no actual cursor. When the computer is in this state, UAC is also not functional: whenever a UAC prompt appears, the screens go black (the original symptom) and I have to press Escape to exit UAC.

    Because of the above symptoms (the UAC black screen, the lost cursor), I believe winlogon.exe may be the culprit. However, I have no idea how to fix it, and I am unable to restart winlogon.exe because of the very problems it is causing (the UAC black screen). Looking for any ideas. More information:

    - Windows 7 x86 Enterprise
    - Domain environment (I have a secondary account with administrator rights)
    - Dell Optiplex 960
    - Cannot perform "optional" Windows updates due to an activation issue (I am testing a Windows 7 image, and an activation infrastructure has not been created yet; however, this issue was happening before I was unable to perform Windows Update, and Windows was up to date at the time)
    - Updated the video card driver (as an attempt at fixing the resolution issue)
    - Disabled all power saving options

    Please let me know what else you need to help me solve this issue! Thanks in advance, Mike
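
    One check worth running, given that the power-saving options were already disabled in the GUI: confirm what the active power scheme actually has set, from an elevated prompt. A quick sketch using Windows 7's built-in powercfg:

        :: show which power scheme is actually active
        powercfg -getactivescheme
        :: dump every setting in the active scheme
        powercfg -query
        :: force the display-off timer to never (on AC power)
        powercfg -change -monitor-timeout-ac 0

    If the query shows a nonzero display or sleep timeout despite the GUI settings, a group policy or a different scheme may be overriding them.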

    Read the article

  • Wireless 802.11x Disconnects

    - by BillP3rd
    I've looked at (and read) all of the similar questions, and none of them get exactly to the issue I'm having at home. I have two 802.11g access points, with different SSIDs and on different channels. One is an Airlink AR525W; the other is a Linksys WRT54G v.2.

    The issue is that at random times, my laptop will lose its wireless connection. This occurs regardless of which access point I'm connected to. When I lose the connection, the affected AP no longer appears in the list of available APs. It doesn't have anything to do with walls or distance: it can happen within 30' and when my laptop is literally within line-of-sight. When it loses the signal, it can take from 10 to 30 minutes to reconnect, and it always reconnects without intervention.

    I've done all the "standard" things to troubleshoot the problem, and it has improved. For example, I surveyed other access points in my vicinity and selected a different channel for each of my APs that no one else nearby is using. Both APs are configured for WPA2/AES. I'm down to wondering [note: this is not a shopping question - I'm not buying a new AP] whether the fact that I didn't drop two bills on my APs and instead opted for more modest hardware has anything to do with it. I've often wondered why anyone would go for a high-end AP when they didn't have to. Also, I am aware of DD-WRT and have chosen not to go there, because only one of my APs is supported.

    Oh, and one final thing: it's an HP x64 laptop running Windows 7 Ultimate. The wireless interface is an Atheros AR9285 802.11b/g/n WiFi adapter, and all the latest drivers and service packs have been applied. It did the same thing with my old laptop (a Lenovo), so I don't think the problem is in the laptop. It's really annoying when this happens, and suggestions of things I haven't thought of or may have overlooked (no, really - as unlikely as it is, I admit that I may have overlooked something :-)) are appreciated.
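
    One way to gather hard data the next time it drops is to capture the adapter's own view of the event; a quick sketch using tools built into Windows 7 (run from an elevated prompt):

        :: current state, signal strength and channel of the Wi-Fi interface
        netsh wlan show interfaces
        :: driver capabilities, including supported wireless modes
        netsh wlan show drivers
        :: the last 20 connect/disconnect events recorded by the WLAN service
        wevtutil qe Microsoft-Windows-WLAN-AutoConfig/Operational /c:20 /rd:true /f:text

    The disconnect-reason codes in that event log usually say whether the radio, the AP, or the driver initiated the drop, which narrows down where to look next.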

    Read the article

  • What the hell was THAT?!?

    - by Massimo
    My system is Windows XP SP3, updated with the latest patches. The PC is connected to a Cisco 877 ADSL router, which does NAT from the internal network to its single static public IP address. There are no forwarded ports, and the router's management console can only be accessed from the inside.

    I was doing two things: working on a remote office machine via VPN and browsing some web pages on the Cisco web site. The remote network is absolutely safe (it's a lab network with four virtual servers, no publicly accessible services and no users at all; also, none of what I'm going to describe ever happened there). The Cisco web site... well, I suppose it's quite safe, too.

    Suddenly, something happened. Strange popups appeared everywhere; programs claiming to be "antimalware", "antispyware" and so on began auto-installing; fake Windows Update and Security Center icons popped up in the system tray; svchost.exe began crashing repeatedly. Then, finally, after some minutes of this... BSOD. And, upon rebooting, BSOD again. Even in safe mode.

    Ok, that was obviously some virus/trojan/whatever. I had to install a new copy of Windows on another partition to clean things up. I found strange executables, services and DLLs almost everywhere. Among other things, user32.dll and ndis.sys had been replaced. A fake program called "Antimalware Doctor" had been installed. There were services with completely random names or even GUIDs (!), and also ones called "IpSect" and "Darkness". There were executable files without an .exe extension. There were even two boot-class drivers, which I'm quite sure are the ones that finally caused the system to crash. A true massacre.

    Ok, now the questions:

    1. What the hell was that?!? It was something more than a simple virus!
    2. How did it manage to attack my computer, given that I am behind a firewall and was not doing anything even potentially harmful on the web at the time?

    Read the article

  • File/printer sharing issues on network with multiple OSes

    - by DanZ
    My workplace consists of computers running a variety of different operating systems, and I have been running into problems getting some of them to connect to a shared drive and printer over the network. Here is a brief description of the computers involved and the issues I have encountered:

    1. Dell desktop, Windows Vista Business - This is the computer I want the others to connect to. It has a USB printer and eSATA hard drive enclosure that I have set up for sharing, with different accounts for the various users.
    2. Fujitsu laptop, Windows XP Tablet edition - No problems. Can connect to both the shared printer and hard drive.
    3. Lenovo laptop, Windows Vista Business 64 bit - No problems. Can connect to both the shared printer and drive.
    4. Apple MacBook, OS 10.4 - Can connect to the shared drive, but not to the shared printer. I am aware that the printer issue is due to a known incompatibility between Vista and OS 10.4 and earlier with regards to Samba. It is not a big problem, however, as this computer can access a network printer.
    5. Sony laptop, Windows Vista Home Premium - Can connect to the shared printer, but not the shared drive. It can see computer 1 and its shared drive on the network, and appears to successfully log in to user accounts. However, if you try to access the shared drive, it says you do not have permission. I have tried both standard and administrator accounts, and none can access the drive from this computer.
    6. MacBook Pro, OS 10.5 (there are two of these) - Can connect to the shared printer, but not the shared drive. They can't see computer 1 on the network. For that matter, they also can't see each other or the older Mac, but can see and access shared folders on the XP machine (computer 2) and can see other PCs in the building. I was able to add the shared printer manually by typing in its network location, but was unable to manually add the shared drive in the same way.

    So, what I am looking for is suggestions on how to get computers 5 and 6 to connect to the shared drive. Since they can already connect to the shared printer (which is on the same computer as the shared drive), it seems reasonable that they should be able to access the drive as well.
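
    For the two 10.5 MacBooks, it may be worth probing the Vista machine's shares directly from Terminal rather than relying on Finder browsing, which rules the discovery layer in or out; a hedged sketch (user, vistahost and share are placeholders for the real account, hostname or IP, and share name):

        # list the shares the Vista machine exposes
        smbutil view //user@vistahost
        # mount the shared drive manually at ./shared
        mkdir ./shared
        mount_smbfs //user@vistahost/share ./shared

    If the manual mount works by IP but not by name, the problem is discovery/name resolution rather than permissions.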

    Read the article

  • Linux Fiber Channel Host Setup Basic

    - by Jim
    I've been googling for about 4 hours now with no luck. I am trying to set up a Linux server running Oracle Server 6.3 as a Fibre Channel host, and then connect it to a Dell Compellent Fibre Channel target containing a 500GB volume. The Oracle server itself contains two Brocade 815 FC HBAs. I've discovered their WWNs (I think) via:

        cat /sys/class/fc_host/host1/port_name
        0x100000051efc3d85
        cat /sys/class/fc_host/host2/port_name
        0x100000051efc3d9f

    The next part is where I am at a loss. I've used iSCSI before - is FC the same deal, where you have an initiator and a target? If so, where do I specify that in Linux? I'm also new to Fibre Channel as a protocol, so I am unsure what is needed to make a transaction. WWN and port ID, similar to an IP:port combination in the Ethernet world? I've read a lot about using the systool, multipath, and fc_transport commands; however, none of these is recognized as a valid command on Oracle Server 6.3. Appreciate the guidance and assistance.

    I installed scsi-target-utils and can now run rescan-scsi-bus and sg_map -x:

        rescan-scsi-bus.sh -l -w -r
        Host adapter 0 (megaraid_sas) found.
        Host adapter 1 ((null)) found.
        Host adapter 2 ((null)) found.
        Host adapter 3 (ata_piix) found.
        Host adapter 4 (ata_piix) found.
        Scanning SCSI subsystem for new devices and remove devices that have disappeared
        Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15, LUNs 0 1 2 3 4 5 6 7
        Scanning for device 0 2 0 0 ....
        OLD: Host: scsi0 Channel: 02 Id: 00 Lun: 00
             Vendor: DELL     Model: PERC H700       Rev: 2.30
             Type:   Direct-Access                   ANSI SCSI revision: 05
        Scanning for device 0 2 1 0 ...
        OLD: Host: scsi0 Channel: 02 Id: 01 Lun: 00
             Vendor: DELL     Model: PERC H700       Rev: 2.30
             Type:   Direct-Access                   ANSI SCSI revision: 05
        Scanning host 1 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
        Scanning for device 1 0 3 1 ...
        OLD: Host: scsi1 Channel: 00 Id: 03 Lun: 01
             Vendor: COMPELNT Model: Compellent Vol  Rev: 0505
             Type:   Direct-Access                   ANSI SCSI revision: 05
        Scanning host 2 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
        Scanning host 3 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
        Scanning for device 3 0 0 0 ...
        REM: Host: scsi3 Channel: 00 Id: 00 Lun: 00
        DEL: Vendor: TEAC     Model: DVD-ROM DV-28SW Rev: R.2A
             Type:   CD-ROM                          ANSI SCSI revision: 05
        Scanning host 4 channels 0 for SCSI target IDs 0, LUNs 0 1 2 3 4 5 6 7
        0 new device(s) found.
        1 device(s) removed.

        sg_map -x
        /dev/sg0  0 0 32 0  13
        /dev/sg1  0 2 0 0   0  /dev/sda
        /dev/sg2  0 2 1 0   0  /dev/sdb
        /dev/sg4  1 0 3 1   0  /dev/sdc

    I'm not sure what this all means...
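
    For what it's worth, the scan output above already shows the fabric side working: the Compellent volume is visible as a LUN on host1 (the COMPELNT line, which sg_map ties to /dev/sdc), and in FC the HBA acts as the initiator automatically - zoning and volume mapping on the switch/array take the place of iSCSI's discovery step. What's usually left is installing the admin tools and enabling multipathing; a hedged sketch, assuming these package names exist in the Oracle Linux 6.3 repositories:

        # systool comes from sysfsutils; multipath tools from device-mapper-multipath
        yum install sysfsutils device-mapper-multipath
        # show HBA state, port WWNs and link status
        systool -c fc_host -v
        # write a default /etc/multipath.conf and start multipathd
        mpathconf --enable --with_multipathd y
        # list the multipath maps built on top of /dev/sdc etc.
        multipath -ll

    If both HBA ports are zoned to the array, the same volume should appear once per path, and multipath -ll shows them folded into a single device.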

    Read the article

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up the RAID array with redundancy was to protect myself in the event of a drive failure, so that it could serve as an effective backup solution.

    Wait. What if a bolt of lightning finds a way to travel into my house, through my surge protector, into my power supply, and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine, because I generally keep the most important files (music, pictures, videos) stored in multiple places, like on my laptop, my wife's laptop, and an encrypted USB hard drive.

    Wait. What if a giant hedgehog meteor attacks my house from space traveling at mach 3 and all machines and hard disks are blown to smithereens? Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier.

    Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four?

    There's no questioning the need for backups. None. However, there are three questions I'd really like addressed:

    1. To what level should one back up? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines, thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question.
    2. What should I be backing up remotely, and how often? Storage-wise it's no problem to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story.
    3. Worst-case scenario: if I lost all of the configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 locally and see 100MB/s easily, but I'm afraid I'm only going to see 2MB/s at best when transferring up to the cloud.
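
    On that last point, the nightly rsync over a slow uplink is usually less painful than it looks, because after the first full copy only deltas cross the wire; a minimal sketch, assuming an SSH-reachable offsite host (paths and hostname are placeholders):

        # nightly delta sync of the irreplaceable data to an offsite box
        rsync -az --partial --delete /data/important/ backup@offsitehost:/backups/workstation/

    The -z compression and --partial (resume interrupted transfers) flags matter most on a 2MB/s link; the first seed run can even be done locally onto a drive that is then carried offsite.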

    Read the article

  • How can I simulate blocking RTMP over port 80 on Windows?

    - by Christian Nunciato
    It seems like this should be so simple, but since this isn't my area of expertise, I'm having a hell of a time figuring out how to do it. Basically, I have a Flash app and I'm connecting to a Flash Media Server to stream some content. The URL I'm using to do this, for example, looks like this:

        rtmp://someserver.com/some/path/mp3:somefile

    Everything works - but that's sort of the problem. What I'm trying to do is simulate my users attempting to play back my media under more restrictive conditions than the ones I have here (i.e., none) - namely being stuck behind firewalls or proxy servers that block access to RTMP streams. Flash, according to Adobe, is equipped to handle proxy servers and firewalls automatically, like so (from the docs):

        When you do not specify a port number in an RTMP address, Flash will attempt
        to connect to port 1935. If it fails it will then try to connect to port 443;
        if that fails, it will try port 80. [And if that fails, it will attempt to
        connect via RTMPT (i.e., HTTP tunneling) on port 80.] So no coding is required
        to access ports 1935, 443, or port 80 if you do not specify a port in the
        RTMP address.

    The problem I'm having is setting up a reliable environment in which to test that this behavior actually happens. I'm on a Windows machine, for example, so with Windows Firewall I can block certain ports and protocols (1935, 443), but I don't want to block port 80, because the final fallback protocol (RTMPT) is supposed to run on port 80, and Windows Firewall only gives me enough granularity (as far as I know, anyway) to block "all outbound TCP traffic to remote port 80" - that is, I can't, apparently, block "all outbound RTMP traffic to port 80" while leaving RTMPT traffic to port 80 unaffected.

    My understanding thus far is that I'll probably need to set up a proxy server to do this. Is this correct? Or is there a simpler way (on Win 7, at least) to filter out RTMP to 1935, RTMP to 443, and RTMP to 80, but still allow RTMPT to 80 (where all four hostnames are identical)? And if I do have to set up a proxy server, what's the simplest way to go on Windows? I've set up WinProxy, which seems a bit janky but apparently works - but then what I can't figure out is how to tell Windows to force all TCP traffic (including RTMP, RTMPT and HTTP) through this proxy server so I can turn around and reject the requests for RTMP. Any help would be hugely appreciated. This isn't my realm of expertise and I've already spent more time on it than I probably should. :)
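
    For the first two steps of the fallback chain, Windows 7 can get you there without a proxy: outbound block rules for remote ports 1935 and 443 will force Flash down to port 80. A sketch using the built-in netsh advfirewall (rule names are arbitrary):

        netsh advfirewall firewall add rule name="Block RTMP 1935" dir=out action=block protocol=TCP remoteport=1935
        netsh advfirewall firewall add rule name="Block RTMPS 443" dir=out action=block protocol=TCP remoteport=443

    Distinguishing RTMP from RTMPT on port 80 still needs something protocol-aware, since Windows Firewall matches on ports and protocols rather than payload - which is exactly where the proxy idea comes back in.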

    Read the article

  • IE and Google Chrome timeout on an IIS6 hosted SSL page that Firefox handles well.

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too - but only in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser. To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based; they never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.

    My setup is:

    - Server: Win2k3, IIS6, PHP 5.2.9-1.
    - Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages.
    - Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages.

    I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome, for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?

    Read the article

  • Showing Directory Root When Launching Rails App Using Apache2 and Passenger

    - by LightBe Corp
    I have done the following in an attempt to host a Rails 3.2.3 application using Apache 2.2.21 and Passenger 3.0.13:

    - Installed the Passenger gem
    - rvmsudo passenger-install-apache2-module
    - Added the website info in /etc/apache2/extra/httpd-vhosts.conf
    - Added a line to /etc/hosts (not sure if this was needed or not; it's not mentioned in the Passenger documentation)
    - Uncommented the line in /etc/apache2/httpd.conf to Include /etc/apache2/extra/httpd-vhosts.conf
    - Restarted Apache

    When I try to pull up my website, the following displays:

        Index of /
        Name    Last modified    Size    Description
        Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8r DAV/2 PHP/5.3.10 with
        Suhosin-Patch Phusion_Passenger/3.0.13 Server at lightbesandbox2.com Port 443

    Here is the /etc/hosts entry for the website:

        127.0.0.1 www.lightbesandbox2.com

    Here is my /etc/apache2/extra/httpd-vhosts.conf entry for the website:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.lightbesandbox2.com
            ServerAlias lightbesandbox2.com
            PassengerAppRoot /Users/server1/Sites/iktusnetlive_RoR/
            DocumentRoot /Users/server1/Sites/iktusnetlive_RoR/public
            <Directory /Users/server1/Sites/iktusnetlive_RoR/public>
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    When I do rvmsudo passenger-status I get the following output:

        ----------- General information -----------
        max      = 6
        count    = 1
        active   = 0
        inactive = 1
        Waiting on global queue: 0

        ----------- Application groups -----------
        /Users/server1/Sites/iktusnetlive_RoR/:
          App root: /Users/server1/Sites/iktusnetlive_RoR/
          * PID: 8140   Sessions: 0   Processed: 2   Uptime: 20m 51s

    None of my assets are in the public folder in my Rails app. I have written an application using the template presented in Michael Hartl's Ruby on Rails Tutorial, and the home page is in /app/views/static_pages/home.html.erb. I decided to copy an index.html file into the public folder to see if it would display, and it displayed as I had hoped. Is there a way to get Passenger to find my assets without me having to rewrite my application? Any help would be appreciated.

    Update 6/23/2012 10:00 am CDT (GMT-6): I corrected the problems with my file and have successfully executed the rake assets:precompile command. I still get the index page as before. I have made no other changes. I did a passenger-status command and it is still loaded. Restarting Apache did nothing.

    Read the article

  • Complete Active Directory redesign and GPO application

    - by Wolfgang Kuehne
    After much testing, hundreds of tries and hours invested, I decided to consult you experts here.

    Overview: I want to apply a GPO to our users which will add a specific site to the Trusted Sites in the Internet Explorer settings for all users. However, the more I try, the more confusing the results become. The GPO is either applied to one group of users, or to another one. Finally, I came to the conclusion that this weird behavior is caused by the poor organization of the Users and Groups in Active Directory. As such I want to attack the problem at the root: redesign the Active Directory Users and Groups.

    Scenario: there is one domain controller, and we use Terminal Services (so there is a Terminal Server as well). Users usually log on to the Terminal Server using Remote Desktop to perform their daily tasks. I would classify the users in the following way:

    - IT: Admins, Software Development
    - Business: Administration, Management

    The current structure of the Active Directory Users and Groups is a result of the previous IT management. The company used Small Business Server, which created multiple default user groups and containers. Unfortunately, the guys working before me left no documentation at all. Now, as I inherit this structure, I am in no man's land - no idea which direction to head first. As you can see, the Active Directory Users and Groups have become a bit confusing. There is no SBS anymore, but when migrating from SBS to the current Windows Server 2008 R2 environment, the guys before me simply copied the same structure.

    The real question: where should I start cleaning, while ensuring that I won't totally break the current infrastructure? What is a sensible organization for the scenario explained above?

    Possibly useful info on the current structure:

    - Computers folder: contains the Terminal Services Computers user group. Members: the TerminalServer computer, located at Server -> Terminalserver OU. Member of: NONE
    - Foreign Security Principals: EMPTY
    - Managed Service Accounts: EMPTY
    - Microsoft Exchange Security Groups: not sure if needed; our email is administered by an external service provider
    - Distribution Groups: not sure if needed
    - Security Groups: there are a couple of groups which are needed
    - SBS users: contains all the users
    - Terminalserver: contains only the TerminalServer machine
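
    Also, before restructuring, it may be worth confirming exactly which GPOs each confused user actually receives - misapplied Trusted Sites settings usually show up immediately in an RSoP report. A quick sketch with the built-in tool, run as the affected user on the Terminal Server:

        :: summary of applied and filtered-out GPOs for the current user
        gpresult /r
        :: full HTML report, including the IE settings actually delivered
        gpresult /h C:\temp\gpo-report.html

    The "Denied GPOs" section of that output often points straight at the security-group filtering that makes the GPO land on one group but not the other.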

    Read the article

  • Using curl -s in *nix command line not working for some reason

    - by JM4
    I am trying to install Composer (though to be honest I really have no idea how it fully works, and the documentation seems to be quite poor) on my MediaTemple DV machine, using their instructions (http://getcomposer.org/doc/00-intro.md). Trying to install globally using:

        $ curl -s https://getcomposer.org/installer | php

    My command line (again, using PuTTY and logged into my server as root) thinks for a second, then sets up for the next prompt. I run a simple ls -l to check for the file it should have downloaded, with no luck. Any idea what could be causing the issue? I have tested and do in fact have curl installed.

    UPDATE 1: Based on the first answer, the verbose response is:

        $ curl -vs https://getcomposer.org/installer | php
        * About to connect() to getcomposer.org port 443
        *   Trying 37.59.4.156... connected
        * Connected to getcomposer.org (37.59.4.156) port 443
        * successfully set certificate verify locations:
        *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
            CApath: none
        * SSLv2, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server key exchange (12):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DHE-RSA-AES256-SHA
        * Server certificate:
        *   subject: /C=CH/CN=dl.packagist.org/[email protected]
        *   start date: 2012-07-07 23:25:35 GMT
        *   expire date: 2013-07-10 02:55:12 GMT
        * SSL: certificate subject name 'dl.packagist.org' does not match target host name 'getcomposer.org'
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):
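
    The verbose log shows the actual failure: the server presents a certificate for dl.packagist.org, so curl aborts the TLS handshake and php receives nothing on stdin. As a quick hedged test of that diagnosis only (-k disables certificate verification, so don't leave it in regular use):

        $ curl -sk https://getcomposer.org/installer | php

    If that produces composer.phar, the problem is entirely the certificate mismatch on the server side, not anything on the DV machine.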

    Read the article

  • XP Pro product keys

    - by Bill
    I have a very serious problem. After my XP Home OS was trashed by rogue software - a trial of a thing named TuneUp - I did a clean install, including an HDD reformat, of Windows XP Pro from a purchased OEM disk. This was Service Pack 2, subsequently upgraded to SP3. I had conclusively mislaid the product key. I had to access data on the machine VERY urgently, and I did not then know that under some circumstances Microsoft might agree to provide a replacement. I found what I now know must have been a pirate key on the net, which enabled installation but NOT activation. This of course left me functional, but 30 days from meltdown - about 20 days left as I write this.

    Various retailers want around £100 for a retail copy with a matching product key - this would be paying twice over just to continue use on the same computer. I have neither the need nor the intention of installing XP Pro on any other computer. I have tried a number of applications claiming to deal with this problem, but none of them work. A Belarc profile shows that the pirate key has replaced the original one on the system. I have now found two keys, one of which might be the original, but neither works. I am about to upgrade the HDD, and it looks like I will just be passing the problem on when I install XP.

    I have retrieved a key from the disk, but it is seemingly one Microsoft uses in production and does not work. It is 76487-OEM-0015242-71798. The keys I have, one of which might or might not be the original, are CD87T-HFP4G-V7X7H-8VY68-W7D7M and FC8GV-8Y7G7-XKD7P-Y47XF-P829W (or P819W - I believe it to be the latter, but the box will not accept it). The pirate key which has enabled this install, and which is now stored on the system but will not activate, is QQHHK-T4DKG-74KG7-BQB9G-W47KG.

    In these circumstances, is it likely that Microsoft would issue a replacement? Is there any other solution? I am not trying to defraud anyone, just to keep on using the product I legitimately bought. Bill

    Read the article

  • Mac OS X Lion 10.7.2 update breaks SSL

    - by mcandre
    Summary: after updating from 10.7.1 to 10.7.2, neither Safari nor Google Chrome can load GMail. Spinning beachballs all around. The problem isn't GMail; Firefox loads GMail just fine. And the problem isn't limited to Safari or Google Chrome; other applications, such as Gilgamesh, also have trouble with SSL. Any program that uses WebKit (Google Chrome, Safari) or a Cocoa library (Gilgamesh) to access the Internet has trouble loading secure sites. The various forums online suggest a handful of fixes, none of which work.

    Analysis:

    - Fix #1: Open Keychain Access.app and delete the Unknown certificate. The 10.7.2 update also prevents Keychain Access from loading; the Keychain program itself spinning-beachballs.
    - Fix #2: Delete ~/Library/Keychains/login.keychain and /Library/Keychains/System.keychain. This temporarily resolves the issue and lets you load secure sites, but a minute or two after rebooting or hibernating somehow magically undoes the fix, so you have to delete these files over and over.
    - Fix #3: Delete ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob*. There is a rumor that the new MobileMe/iCloud service ubd is causing the issue. This fix does not resolve the issue.
    - Fix #4: Open Keychain Access, open the Preferences, and disable OCSP and CRL. This fix does not resolve the issue.
    - Fix #5: Use the 10.7.0 - 10.7.2 combo installer, rather than the 10.7.1 - 10.7.2 installer. When I run the combo installer, it stays forever at the "Validating Packages..." screen. The combo installer itself is bugged to He||. I force-quit the installer, ran "sudo killall installd" to force-quit the background installer process, and reran the combo installer. Same problem: it stalls at "Validating Packages..."

    Recap: the only fix that works is deleting the keychains, but you have to do this every time you reboot or wake from hibernate. There is some evidence that ubd continually corrupts the keychain files, but the suggested ubd fix of deleting ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob* does not resolve this issue. Evidently, something is corrupting the keychain over and over and over.

    Also posted on the Apple Support Communities.

    Read the article

  • Installing the Newest KeePass for MacOSX from Source

    - by firebush
    I've transitioned our passwords to KeePass. LastPass looks cool, but I prefer a system where we control the database locally rather than it being kept in the cloud. I have a Windows and a Linux system, and both are able to access our KeePass database easily. On my Linux system (Ubuntu), I simply installed KeePass via Synaptic and it just worked.

    So everything was working great, until my wife tried to set things up on her MacBook to access the database. Huge problems. It was so easy on Linux that I didn't expect there to be issues there. In case it's helpful, she's running a fresh install of Mac OS X 10.5.8, Leopard. We simply went to the download site for KeePass (http://keepass.info/download.html), clicked on the link titled "KeePass 2.x for Mac OS X", and retrieved Mono 2.10.5 and KeePass 2.18 from that site (the packages posted there at the time of this writing). Mono installed without problems (at least, none that we saw). She opened the KeePass image and dragged it to the Applications side, unpacking it there. Following the instructions in the KeePass installation notes, she opened a terminal, changed to the directory in /Applications containing KeePass.exe, and ran it through Mono:

        mono KeePass.exe

    No application opens at all - we see a blip for it, but then it immediately goes away, indicating to us that it is crashing. Also disconcerting, I see that people are throwing fits about copy-and-paste not working for KeePass 2.18 on Mac OS X. Judging from the posts, 2.19 addresses the copy/paste issue, and I'm hoping that version will solve all our issues.

    So here's my question: how can I try out 2.19 on her system? It doesn't seem to be packaged like the 2.18 one is. But we're not scared of building it. I see that the source for 2.19 is on the download page (at the bottom). Can I just download that to her machine somewhere and run something to build it? I'm familiar with automake, but not with building .NET source, so please answer gently if this is stupidly easy. :^)

    btw: tomorrow's my wife's birthday, and this is getting her down. If you know how to navigate these issues, it would be a nice birthday gift for her. Thanks in advance!

    Update: I'll post this since it might be helpful to someone else: I got KeePass 2.18 to run by updating Mono to 2.10.9 (rather than the 2.10.5 given by the site above). With the later version of Mono, it runs without crashing. And yet, I do see the copy and paste issue that others see. I can open a database on her machine, but the incorrect data gets copied. So again, can someone help me install KeePass 2.19? Thanks!
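
    Since KeePass 2.x is a plain .NET project, Mono's xbuild can usually compile the source archive directly. A rough sketch under the assumption that the 2.19 source ships its usual Visual Studio solution - the archive, solution and output paths below are guesses, so adjust them to whatever the unzipped source actually contains:

        # unpack the source archive from the download page
        unzip KeePass-2.19-Source.zip -d keepass-2.19
        cd keepass-2.19
        # build with Mono's msbuild clone; the solution name is an assumption
        xbuild /p:Configuration=Release KeePass.sln
        # run whichever KeePass.exe the build produces; this path is an assumption
        mono Build/KeePass/Release/KeePass.exe

    Given that updating Mono to 2.10.9 already fixed the 2.18 crash, building 2.19 against that same Mono seems the natural next step.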

    Read the article

  • Is it possible to configure a CDN so that it will step out of the way for a subset of regional IPs?

    - by rwired
    We have a website which targets customers in China, both expat and local Chinese. We have an ICP license which allows us to host in a datacenter inside China. Internet in China is actually as fast as anywhere else (faster than most places, actually), so long as the content is served up within the boundaries of the Great Firewall. Anything that crosses the wall is horribly slow.

    The problem is that most expats have some sort of VPN installed so that they can access all the blocked stuff. What this means is that when they access our site, the traffic first has to go out of China through the firewall to their VPN, and then back in. The performance is terrible - worse than if we were just hosting outside of China directly (which we used to do before the ICP was issued). So I want to use a global CDN to mirror the site automatically, but I only want to deliver the content via the CDN if the user's request IP address is outside of China. Inside China, I would like the content to be served by our own server.

    I also want to be careful with the domain names. We currently use www.xxx.com and www.xxx.cn for language selection purposes, as these perform well in SEO on Google (which the expats use) and Baidu (which the locals use). If possible I would like to avoid having one domain on the outside and the other on the inside, since not all expats use a VPN, some Chinese speakers also use VPNs, and some of our legitimate customers in both languages are from outside of China. I also don't want to resort to using something like www2.xxx.com/cn for the outside connection if at all possible, since I have worries about duplicate content and canonical URLs ruining our SEO (unless you know of a quick fix for that).

    CDNs I'm considering are Google PageSpeed, CloudFlare, and Amazon CloudFront; none of them have datacenters inside China. I have complete control of the .com DNS zone records, but the .cn zones are under the control of the domain issuing body in China. I'm not sure at this time if they would allow even a CNAME to point to an IP outside of China (although I don't see why not). They no longer allow outside registrars like they used to.
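
    On the DNS side, one common way to express "CDN for the world, local server for China" on the .com zone you control is split-horizon DNS; a hedged sketch in BIND 9 syntax, assuming you run (or can host) your own authoritative DNS and can maintain a list of Chinese client prefixes (the prefixes and file names below are placeholders only):

        acl "china" { 1.80.0.0/13; 14.16.0.0/12; };    // placeholder prefixes

        view "china" {
            match-clients { "china"; };
            // zone file whose A records point at the in-China datacenter
            zone "xxx.com" { type master; file "xxx.com.china"; };
        };

        view "world" {
            match-clients { any; };
            // zone file whose records point (or CNAME) at the CDN edge
            zone "xxx.com" { type master; file "xxx.com.cdn"; };
        };

    The same hostnames then resolve differently on each side of the wall, which sidesteps the www2 duplicate-content worry entirely.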

    Read the article

  • 500 Internal Server Error when setting up Apache on Ubuntu+Django

    - by ApacheQ
    I tried with Apache on Ubuntu 9.04 and get the same error:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to
        complete your request. Please contact the server administrator,
        webmaster@localhost and inform them of the time the error occurred, and
        anything you might have done that may have caused the error. More information
        about this error may be available in the server error log.

    And my apache/error.log is:

        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] ServerName: 'sapint2'
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] DocumentRoot: '/etc/apache2/htdocs'
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] URI: '/'
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] Location: '/'
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] Directory: None
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] Filename: '/etc/apache2/htdocs'
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] PathInfo: '/'
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] Traceback (most recent call last):
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/lib/python2.7/dist-packages/mod_python/importer.py", line 1537, in HandlerDispatch\n    default=default_handler, arg=req, silent=hlist.silent)
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/lib/python2.7/dist-packages/mod_python/importer.py", line 1229, in _process_target\n    result = _execute_target(config, req, object, arg)
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/lib/python2.7/dist-packages/mod_python/importer.py", line 1128, in _execute_target\n    result = object(arg)
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/modpython.py", line 180, in handler\n    return ModPythonHandler()(req)
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/modpython.py", line 142, in __call__\n    self.load_middleware()
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 45, in load_middleware\n    mod = import_module(mw_module)
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/local/lib/python2.7/dist-packages/django/utils/importlib.py", line 35, in import_module\n    __import__(name)
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/local/lib/python2.7/dist-packages/django/contrib/sessions/middleware.py", line 4, in <module>\n    from django.utils.cache import patch_vary_headers
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/local/lib/python2.7/dist-packages/django/utils/cache.py", line 25, in <module>\n    from django.core.cache import get_cache
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/local/lib/python2.7/dist-packages/django/core/cache/__init__.py", line 187, in <module>\n    cache = get_cache(DEFAULT_CACHE_ALIAS)
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/local/lib/python2.7/dist-packages/django/core/cache/__init__.py", line 179, in get_cache\n    cache = backend_cls(location, params)
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10]   File "/usr/local/lib/python2.7/dist-packages/django/core/cache/backends/memcached.py", line 139, in __init__\n    "Memcached cache backend requires either the 'memcache' or 'cmemcache' library"
        [Sat Oct 06 09:32:04 2012] [error] [client 10.0.64.10] InvalidCacheBackendError: Memcached cache backend requires either the 'memcache' or 'cmemcache' library

        [Sat Oct 06 09:51:30 2012] [notice] caught SIGTERM, shutting down
        [Sat Oct 06 09:51:31 2012] [notice] mod_python: Creating 8 session mutexes based on 150 max processes and 0 max threads.
        [Sat Oct 06 09:51:31 2012] [notice] mod_python: using mutex_directory /tmp
        [Sat Oct 06 09:51:31 2012] [notice] Apache/2.2.17 (Ubuntu) PHP/5.3.5-1ubuntu7.11 with Suhosin-Patch mod_python/3.3.1 Python/2.7.1+ mod_wsgi/3.3 configured -- resuming normal operations

    I need some help. Thanks
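
    The last two lines of the traceback are the actual fault: the Django settings use the memcached cache backend, but no memcached client library is importable. A hedged fix, assuming you want to keep that backend rather than switch to another cache:

        # Debian/Ubuntu package providing the 'memcache' module
        sudo apt-get install python-memcache
        # or, via pip
        pip install python-memcached

    After installing the library (and confirming a memcached server is actually running), restart Apache and the 500 on '/' should clear if this was the only problem.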

    Read the article

  • X:\ is not accessible. Insufficient system resources exist to complete the requested service.

    - by Katherine
    I keep getting the error message above on multiple computers that I administer. I wasn't sure if I should be posting this on SuperUser or ServerFault, so my apologies if it should go there. Basically, X:\ is one of our network drives, mapped for users, and I have at least 5 computers of varying ages (some fresh out of the box!) throwing the above error. Most of the time, shutting down the biggest running application will fix the problem, but it's becoming an increasing issue, and I can't keep running around fixing it manually. I have tried to do some research, but most of it just states the obvious without supplying a permanent fix. The machines are all running Win XP SP3, with at least 2GB of RAM.

    Sorry for the delay in getting back to people... a lot of good questions. To respond:

    - It is a Windows 2003 server that houses the file share.
    - We have about 175 users; I cannot say how many are accessing the share at a single moment, but considering that this is our largest file share, I would say probably at least 100+.
    - The files we work with are large, but not that big considering that we do a lot of graphical and video work: ~50MB. That being said, the error occurs simply when trying to gain access to the server itself, not actual files.
    - When I say I close a program, I mean any program. It doesn't matter which. It varies from machine to machine and from day to day; some days it is Firefox, some days it is Outlook, some days it is Excel. There doesn't seem to be a common bond behind which application could be causing the problem.
    - Thank you for the articles and the recommendation on paging files. I will have to look into that.
    - None of our computers are set to hibernate, so I am going to rule that out.
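
    For what it's worth, one knob commonly suggested for this exact message against a Windows file server is raising IRPStackSize on the machine hosting the share; a hedged sketch only (the value 20 is just a typical starting point, and the Server service needs a restart to pick it up):

        :: on the Windows 2003 server hosting X:\
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v IRPStackSize /t REG_DWORD /d 20 /f
        :: restart the Server service to apply the change
        net stop server /y && net start server

    Whether this applies here is uncertain, since the symptom also tracks client-side memory pressure (closing the biggest application fixes it), so treat it as one experiment among several.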

    Read the article

  • System Requirements of a write-heavy applications serving hundreds of requests per second

    - by Rolando Cruz
    NOTE: I am a self-taught PHP developer who has little to no experience managing web and database servers.

    I am about to write a web-based attendance system for a very large userbase. I expect around 1000 to 1500 users logged in at the same time, making at least 1 request every 10 seconds or so, for a span of 30 minutes a day, 3 times a week. So it's more or less 100 requests per second, or at the very worst 1000 requests in a second (an average of 16 concurrent requests? But it could be higher given the short timeframe in which users will make these requests - crosses fingers to avoid 100 concurrent requests).

    I expect two types of transactions, a "local" (not referring to a local network) and a "foreign" transaction:

    - Local transactions basically download user data for the user's own locality and cache it for 1-2 weeks. Attendance requests will probably be two numeric strings only: userid and eventid.
    - Foreign transactions are for attendance of those who do not belong to the current locality. These will pass the following data instead: (numeric) locality_id, (string) full_name.

    Both requests are done in Ajax, so no HTML data is included - only JSON - and both expect at the very least a single numeric response from the server. I think there will be a 50-50 split in the frequency of local and foreign transactions, but there's only a few bytes of difference anyway between the sizes of these transactions. As of this moment, the userid may only reach 6 digits, and eventids are 4 to 5-digit integers.

    I expect my users table to have at least 400k rows, the event table to have as many as 10k rows, a locality table with at least 1500 rows, and my main attendance table to increase by 400k rows (based on the number of users in the users table) a day, 3 days a week (1.2M rows a week). The users table itself will add around 500 to 1k rows a week. For me, this sounds big. But is it really that big? Or can it be handled by a single server (not sure about the server specs yet, since I'll probably get a VPS from ServInt or others)? I tried to read up on multiple-server setups: Heartbeat, DRBD, master-slave configurations. But I wonder if they're really necessary. If this can't be handled by a single server, and I am to choose a MySQL replication topology, what would be the best setup for this case?

    Sorry if I sound vague or the question is too wide. I just don't know what to ask or what you want to know at this point.
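
    For the main write path, a narrow fixed-width row with a single secondary index keeps inserts cheap; a hedged sketch of the attendance table in MySQL (names and exact types are illustrative, not a recommendation):

        CREATE TABLE attendance (
            id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
            user_id   INT UNSIGNED    NOT NULL,  -- 6-digit user ids fit comfortably
            event_id  INT UNSIGNED    NOT NULL,  -- 4-5 digit event ids
            logged_at TIMESTAMP       NOT NULL DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (id),
            KEY idx_event_user (event_id, user_id)
        ) ENGINE=InnoDB;

    Rows this small at roughly 100 inserts per second are generally within reach of a single well-configured MySQL server; replication tends to earn its complexity for read scaling and failover rather than for raw insert throughput at this volume.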

    Read the article

  • Windows 8 ignores more specific route

    - by Lander
    OS: Windows 8. I have a cabled NIC (connected to a router at 192.168.0.1) and a Wi-Fi NIC (connected to a router at 192.168.1.1). I want all traffic to go through the cabled NIC, except that the 192.168.1.0/24 range should use the Wi-Fi NIC. This was working fine in Windows 7, without any manual configuration. In Windows 8, however, it's not. My routing table:

        ===========================================================================
        Interface List
         14...f2 7b cb 13 e7 f0 ......Microsoft Wi-Fi Direct Virtual Adapter
         13...b8 ac 6f 54 d2 5c ......Realtek PCIe FE Family Controller
         12...f0 7b cb 13 e7 f0 ......Dell Wireless 1397 WLAN Mini-Card
          1...........................Software Loopback Interface 1
         15...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter
         16...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0      192.168.1.1    192.168.1.198     30
                  0.0.0.0          0.0.0.0      192.168.0.1    192.168.0.233     20
                127.0.0.0        255.0.0.0         On-link         127.0.0.1    306
                127.0.0.1  255.255.255.255         On-link         127.0.0.1    306
          127.255.255.255  255.255.255.255         On-link         127.0.0.1    306
              192.168.0.0    255.255.255.0         On-link     192.168.0.233    276
            192.168.0.233  255.255.255.255         On-link     192.168.0.233    276
            192.168.0.255  255.255.255.255         On-link     192.168.0.233    276
              192.168.1.0    255.255.255.0      192.168.1.1    192.168.1.198     31
            192.168.1.198  255.255.255.255         On-link     192.168.1.198    286
                224.0.0.0        240.0.0.0         On-link         127.0.0.1    306
                224.0.0.0        240.0.0.0         On-link     192.168.0.233    276
                224.0.0.0        240.0.0.0         On-link     192.168.1.198    286
          255.255.255.255  255.255.255.255         On-link         127.0.0.1    306
          255.255.255.255  255.255.255.255         On-link     192.168.0.233    276
          255.255.255.255  255.255.255.255         On-link     192.168.1.198    286
        ===========================================================================
        Persistent Routes:
          None

    I added the rule for 192.168.1.0. I would think Windows should use this rule for the IP 192.168.1.1, because it's more specific than the default route. However, it's not:

        C:\Windows\system32>tracert 192.168.1.1

        Tracing route to 192.168.1.1 over a maximum of 30 hops

          1    58 ms     4 ms     4 ms  192.168.0.1
          2    68 ms    12 ms    11 ms  ^C

    So... what do I do wrong? And how can I make Windows use the wireless NIC for 192.168.1.0/24?
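
    In case the automatic interface metric is what's deflating the added route's priority (note it landed at metric 31, above both defaults), one hedged thing to try from an elevated prompt is re-adding it persistently with an explicit low metric, bound to the Wi-Fi interface (index 12 in the interface list; confirm with route print first):

        :: drop the existing route, then pin it to interface 12 with metric 1
        route delete 192.168.1.0 mask 255.255.255.0
        route -p add 192.168.1.0 mask 255.255.255.0 192.168.1.1 metric 1 if 12

    If that sticks, the longer-term alternative is unchecking "Automatic metric" in the Wi-Fi adapter's TCP/IPv4 advanced settings and setting the metrics by hand.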

    Read the article

  • User given a login prompt when closing Word documents after viewing them in IE7

    - by Martin Owen
    When using IE7 to view Word documents on our CRM system (an ASP.NET 2.0 application running on Windows Server 2003 and IIS 6, using Windows authentication), I'm finding that a prompt appears when the user closes the document. The Word document is originally opened by clicking a link in the CRM system. Are there permissions that I can set on the folder containing the Word documents to prevent this prompt? I've already tried only allowing the Read permission for the Users group (I've left Administrators with Full Control). If there's another solution to this without using permissions, please let me know.

    UPDATE: I ran Fiddler as suggested by JD, and here is the output from the two responses after the request for the document. The first seems to be a DAV response and the second is the authentication request. How do I prevent the DAV response and just return the .doc from the server?

        OPTIONS / HTTP/1.1
        Translate: f
        User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery
        Host: <REMOVED>
        Content-Length: 0
        Connection: Keep-Alive
        Pragma: no-cache
        X-NovINet: v1.2

        HTTP/1.1 200 OK
        Date: Thu, 18 Feb 2010 13:37:36 GMT
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        MS-Author-Via: DAV
        Content-Length: 0
        Accept-Ranges: none
        DASL: <DAV:sql>
        DAV: 1, 2
        Public: OPTIONS, TRACE, GET, HEAD, DELETE, PUT, POST, COPY, MOVE, MKCOL, PROPFIND, PROPPATCH, LOCK, UNLOCK, SEARCH
        Allow: OPTIONS, TRACE, GET, HEAD, COPY, PROPFIND, SEARCH, LOCK, UNLOCK
        Cache-Control: private

        ------------------------------------------------------------------

        OPTIONS /docs/ZONE%20100-105.doc HTTP/1.1
        Translate: f
        User-Agent: Microsoft Data Access Internet Publishing Provider Protocol Discovery
        Host: <REMOVED>
        Content-Length: 0
        Connection: Keep-Alive
        Pragma: no-cache
        X-NovINet: v1.2

        HTTP/1.1 401 Unauthorized
        Content-Length: 83
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        WWW-Authenticate: Basic realm="<REMOVED>"
        X-Powered-By: ASP.NET
        Date: Thu, 18 Feb 2010 13:37:36 GMT

    UPDATE 2: I found a potential workaround for the problem via this post: http://forums.iis.net/p/1149091/1868317.aspx. I moved all of the documents that are being requested into a folder outside of the web root, and created a virtual directory for it (also outside of the web root). When I followed a link to one of the documents in IE and then closed the document, I wasn't presented with a login prompt. I should point out that I'm not using FPSE, unlike the person in the forum post. Ideally I don't want to have to put the documents in a separate virtual directory, but this is the simplest solution I've found so far.

    Read the article

  • Installing OpenLDAP: ldap_bind: Invalid credentials (49)

    - by Arcturus
    Hello. I've been trying to set up the OpenLDAP installed by default on Fedora 12, very unsuccessfully. My ultimate goal is to use LDAP authentication for user login and Apache, using the OpenLDAP server running on the same machine. The server is running, but the error I always get when I try to use ldapsearch or ldapadd is:

        ldap_bind: Invalid credentials (49)

    I've been following these tutorials, but none of them helped me:

    - http://www.howtoforge.com/openldap_fedora7
    - http://www.redhat.com/docs/manuals/linux/RHL-9-Manual/ref-guide/s1-ldap-quickstart.html
    - http://www.howtoforge.com/linux_ldap_authentication
    - http://docs.fedoraproject.org/deployment-guide/f12/en-US/html/s1-ldap-pam.html
    - http://www.openldap.org/doc/admin24/quickstart.html

    First, some components were already installed, and I installed these with yum:

        yum install openldap-servers openldap-devel

    Then I created a basic slapd.conf file in /etc/openldap:

        database bdb
        suffix "dc=sniejana-sandbox,dc=com"
        rootdn "cn=root,dc=sniejana-sandbox,dc=com"
        rootpw {SSHA}cxdz55ygPu4T3ykg7dgu+L0VRvsFSeom
        directory /var/lib/ldap/sniejana-sandbox.com

    I obtained the rootpw with this command:

        slappasswd -s changeme

    I also created the /var/lib/ldap/sniejana-sandbox.com directory and made sure the entire contents of /var/lib/ldap are owned by the ldap user. I found two ldap.conf files, one in /etc and one in /etc/openldap; I don't know which is the right one. If I understood correctly, this file is for configuring the client. I put this in both:

        HOST localhost
        BASE dc=sniejana-sandbox,dc=com

    I then started the server with:

        service slapd start

    It said OK. Most of the tutorials above say to use a command like ldapsearch -D "cn=Manager,dc=my-domain,dc=com" -W to ensure that everything's working. When I execute this command, a password prompt appears, and after entering the password, I get the error:

        ldapsearch -D "cn=root,dc=sniejana-sandbox,dc=com" -W
        Enter LDAP password:
        ldap_bind: Invalid credentials (49)

    The same thing happens when trying to use ldapadd. I tried with an encrypted and an unencrypted password in slapd.conf; it doesn't change anything. Adding -x for simple authentication doesn't change anything either.

    netstat -ap confirms the server is listening:

        tcp        0      0 *:ldap        *:*        LISTEN      4148/slapd
        tcp        0      0 *:ldap        *:*        LISTEN      4148/slapd

    ps -ef | grep slapd confirms the process is running:

        ldap      4148     1  0 15:22 ?        00:00:00 /usr/sbin/slapd -h ldap:/// -u ldap

    Running slaptest produces:

        config file testing succeeded

    I read somewhere that the command ldapsearch -x -b '' -s base '(objectclass=*)' namingContext can confirm the server is running. It appears to work:

        # extended LDIF
        #
        # LDAPv3
        # base <> with scope baseObject
        # filter: (objectclass=*)
        # requesting: namingContext
        #

        dn:

        # search result
        search: 2
        result: 0 Success

        # numResponses: 2
        # numEntries: 1

    I'm running out of ideas. Am I missing something obvious?
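
    One Fedora-specific possibility worth ruling out: newer OpenLDAP packages prefer the cn=config directory (/etc/openldap/slapd.d) and can ignore slapd.conf entirely when that directory exists, in which case the rootdn/rootpw above are never loaded and every bind fails with error 49. A hedged sketch of regenerating the config directory from the hand-written file, assuming Fedora 12's packaging works this way:

        service slapd stop
        # rebuild the cn=config directory from slapd.conf
        rm -rf /etc/openldap/slapd.d/*
        slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d
        chown -R ldap:ldap /etc/openldap/slapd.d
        service slapd start

    If slapd.d didn't exist on this system, this changes nothing and the problem lies elsewhere (e.g. a stray space or encoding issue around the rootpw line).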

    Read the article

  • Handling site not found and page not found with dynamic mass virtual hosting

    - by Rick Moynihan
    I have recently set up mass virtual hosting in Apache so that all we need to do to create a new vhost is create a directory. We're also using wildcard DNS to map all subdomains to the server running our Apache instance. This works excellently; however, I'm now having trouble configuring it to fail over to an appropriate default/error page when the vhost directory does not exist. The problem is conflated by my desire to handle two distinct error conditions:

    1. vhost not found, i.e. there was no directory found matching the host supplied in the HTTP Host header. I'd like this to display an appropriate "site not found" error page.
    2. The ordinary 404 page-not-found condition within a vhost.

    Additionally, I have a specialised "api" vhost in its own vhost block. I've tried a number of variations and none exhibit the behaviour I want. Here's what I'm working with right now:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot /var/www/site-not-found
            ServerName sitenotfound.mydomain.org
            ErrorDocument 500 /500.html
            ErrorDocument 404 /500.html
        </VirtualHost>

        <VirtualHost *:80>
            ServerName api.mydomain.org
            DocumentRoot /var/www/vhosts/api.mydomain.org/current
            # other directives, e.g. setting up passenger/rails etc...
        </VirtualHost>

        <VirtualHost *:80>
            # get the server name from the Host: header
            UseCanonicalName Off
            VirtualDocumentRoot /var/www/vhosts/%0/current
            # other directives ... e.g. proxy passing to api etc...
            ErrorDocument 404 /404.html
        </VirtualHost>

    My understanding is that the first vhost block is used as the default, so I have it here as my catch-all site. Next I have my API vhost, and then finally my mass-vhost block. So for a domain that doesn't match the first two ServerNames and has no corresponding directory in /var/www/vhosts/, I'd expect it to fall over to the first vhost. However, with this setup, all domains resolve to my default site-not-found vhost. Why is this?

    By putting the mass-vhost block first, I can get the mass vhosts to resolve properly, but not my site-not-found vhost... and in this case I can't seem to find a way to distinguish between a page-level 404 in a vhost and the case where the VirtualDocumentRoot fails to find a vhost directory (this appears to use the 404 also). Any help out of this bind is much appreciated!
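
    One way to separate "no such vhost directory" from an ordinary 404 inside the mass-vhost block is a mod_rewrite test on the directory itself; a hedged sketch, assuming mod_rewrite is loaded, to be placed inside the VirtualDocumentRoot block:

        RewriteEngine On
        # if no directory exists for this Host header, bounce to the not-found site
        RewriteCond /var/www/vhosts/%{HTTP_HOST}/current !-d
        RewriteRule ^ http://sitenotfound.mydomain.org/ [R=302,L]

    With that in place, the mass-vhost block can safely come first (so name-based matching works), its own ErrorDocument 404 keeps handling page-level misses, and only genuinely unknown hosts get redirected to the site-not-found vhost.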

    Read the article

  • NX Client for Windows 7 Opens Remote Desktop in Multiple Windows

    - by Corey Kennedy
    What I'm trying to do: access my Ubuntu desktop remotely via the NX Client on my Windows 7 laptop.

    My environment:

    - Server: Ubuntu 10.10 on an AMD 1GHz/512MB RAM PC, running NX Server 3.4.0 with the xfce4 window manager
    - Client: Windows 7 on a ThinkPad SL510, running the NX Client for Windows

    In my NX Client "Desktop" settings I've selected "Unix" and "Custom" for OS and environment. I've also specified "startxfce4" as the application to launch when NX connects. I am able to authenticate an NX session on my laptop. By this I mean I can start the client on my laptop, enter credentials for my Linux user, and NX establishes a connection to the server and attempts to open a remote desktop window.

    The problem, though, is that this remote desktop is "fragmented" into many windows. One window will display the bulk of my desktop (complete with desktop icons for "Home," "File System," and "Trash"), while another window will contain the taskbar, and another will contain the application strip. I can select each of these windows individually, but I cannot click on any objects within them. I've searched Super User, Ubuntu Forums, NX help, Server Fault, and tried many Google searches - none have turned up another case of this particular problem. I'm stumped. Does anyone have any suggestions for what I might try? I'm guessing the problem has to do with my xfce config files, but I've only just set up this server - it's been a long time since I've used Linux and there's a lot I just don't know.

    What I am NOT trying to do: use desktop sharing from Ubuntu, whereby I VNC into a desktop that I've already established on the server. I am trying to configure this Linux box as a headless server that I can stash someplace out-of-the-way in my house, then interact with through my laptop. I don't want to have a monitor or keyboard connected to the Linux box. Thanks for your help!

    Edit 1/19/2011: Well, this is truly bizarre. To my knowledge I've made no changes to either system - the laptop or the server. But today, after starting up the server for the first time in a few days and making sure that nxserver was running, I was able to connect with the nxclient from my laptop with no problems. I have a full desktop in a single window and I am able to interact with it normally. This is really weird, but the problem seems to be resolved.

    Read the article

  • Certain Programs cannot access internet

    - by Cindy
    Operating System: Windows 7 (x64) Problem: Certain Programs are unable to access the internet. They claim that there is no connection when you already are connected. Hello, before we start. Just letting you know I'm new here, and I'm very new to Windows 7. I installed it two days ago. I just installed Windows 7 on my laptop and I have a few problems. I play World of Warcraft, as well as a variety of games. And when I first attempt to log into the game, I get a windows error message, but it doesn't stop there. I thought World of Warcraft got corrupted during the upgrade. It seems that I am unable to access the internet from other online games as well. Most say in along the lines of "Cannot connect to patch server, try again later." I cannot use a downloader Also, I have internet explorer. The x32 version of the browser cannot connect to the internet, and when I try to enter "google.com", it says the same thing. I'm only accessing this site through Internet Explorer x64, which I would have been fine with is it's compatible with Adobe Flash. The only thing that seems to connect to the internet are Internet Explorer x64 and Windows Live Messenger. Here are the steps I have taken, but none worked. 1.) Disable Windows Firewall 2.) Have Windows Firewall Enabled, but allow the specific programs to access internet. And allowed all incoming access. 3.) Disabled UAC, Ran the programs as an admin, and set compatibility to Vista. 4.) Uninstalled an anti-virus program. (McAffee Security Suite 2010) 5.) Reinstalled the programs 6.) Reinstalled Windows 7 7.) Retaken the steps on the Administrator account. Please assist me in this problem. I need to get back into the game. Thanks so much in advance.

    Read the article

  • Can't connect to shared folders anymore?

    - by HuskyHuskie
    My home server is running Windows Server 2008 R2. I've had it running for almost a year now without any issues with shared folders. This past week I had an issue with my modem which required it to be power cycled and with that I power cycled my router. After that I haven't been able to connect to my shared network folders. I have no idea why that would even cause an issue as I've power cycled my networking equipment in the past without issues and none of my settings appear to have been lost. I am mapping these drives on my Windows 7 Ultimate machine using "Map Network Drive", from there I enter \\SERVER\Storage as I'm trying to connect to my shared folder named Storage. I receive the following error every time I try mapping the drive: Windows cannot access \\Server\Storage Check the spelling of the name. Otherwise there might be a problem with your network. To try to identify and resolve network problems, click Diagnose. Details: Error code: 0x80070035 The network path was not found. When I click Diagnose I get the following: Problems found file and print sharing resource (SERVER) is online but isn't responding to connection attempts. The remote computer isn't responding to connection on port 445, possibly due to firewall or security policy settings, or because it might be temporarily unavailable. Windows couldn't find any problems with the firewall on your computer. I've tried this from multiple computers with the same issue too. To resolve the problems so far I've tried: Disabling the firewall on SERVER Reinstalling File Services Modifying NetBT\Parameters registry values Adding a custom inbound rule for port 445 Adding port forwarding on my router for port 445 Recreating the shared folders Checking and rechecking the shared folder permissions. Resetting my user account password on the server used to access the shared folder. I'm pulling my hair out with this problem mainly because it came out of nowhere. It was working fine the night before and the next day it just stopped working. Any ideas of what I could try next are much appreciated. It should also be noted that this server is used as a web server too and that functionality still works correctly.

    Read the article
