Search Results

Search found 5084 results on 204 pages for 'brute force'.


  • Dell PR03X port replicator and DisplayPort to DVI adapter not detecting second monitor

    - by yothenberg
    I have a Dell M4400 connected to a PR03X port replicator/docking station. I use the DVI port to connect it to a first Dell 2208WFP monitor, and I'm trying to use a DisplayPort-to-DVI adapter to connect a second Dell 2208WFP. The second monitor, connected via the DisplayPort-to-DVI adapter, immediately goes into sleep mode and the laptop doesn't detect it. What is really weird is that it did detect it the first time I plugged it in, but after I unplugged the monitor and plugged it back in, it stopped working. I swapped the monitors round and it detected them both, but after unplugging the monitor connected via the DisplayPort-to-DVI adapter and plugging it in again, it stopped working. Both monitors work if plugged directly into the DVI port. Is there some way to force re-detection? Any ideas?


  • Machine freezes when configuring dual display on Ubuntu 9.10 (karmic)

    - by sa125
    Hi - I'm trying to configure dual displays on an Ubuntu 9.10 machine. When I connect the two screens (one VGA input, the other DVI), I see them as a mirrored display. I opened Display Settings and unchecked the 'mirror screens' box, but when I clicked apply, the machine froze and I had to force-restart it. This happened about 6 times before I gave up. How do I set it up so it boots normally with dual displays working? Thanks. edit: I thought it might be related to the virtual screen size, so I tried editing /etc/X11/xorg.conf to add:

      SubSection "Display"
          Virtual 2560 1024
      EndSubSection

    But that didn't do much. Each screen works fine on its own, and together as a mirrored display.
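
    For what it's worth, a hedged sketch of where that Virtual line has to live: it only takes effect inside the SubSection "Display" whose Depth matches the depth the driver actually runs at, within the active Section "Screen". The identifiers below are placeholders for whatever already appears in your xorg.conf:

      Section "Screen"
          Identifier   "Screen0"
          Device       "Device0"
          DefaultDepth 24
          SubSection "Display"
              Depth   24
              # Room for two 1280x1024 panels side by side
              Virtual 2560 1024
          EndSubSection
      EndSection

    xorg.conf is only read at server start, so a full X restart (not just Apply in the dialog) is needed to test it.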


  • How do I get rid of phantom bookmarks in Google Chrome on Mac OS X 10.6?

    - by Philip
    I'm running Chrome 5.0.375.38 on OS X 10.6 Snow Leopard and although I'm positive that when I installed it I told it NOT to import my Firefox bookmarks, it nevertheless still accessed my OLD Firefox bookmarks (including some that I deleted) when I used the location bar. HOWEVER, when I opened the bookmarks manager, it said that I have no bookmarks whatsoever. Seeking to solve this problem, I installed XMarks on both FF and Chrome, and forced Chrome to download the server bookmarks. Now Chrome lists all my current FF bookmarks, but STILL sees the old, phantom bookmarks from when I first installed Chrome in the location bar, even though when I search for these same bookmarks in the bookmarks manager they don't show up. Aargh! Any ideas? Even if there's some way to force-kill-wipeout-clean-erase ALL my Chrome bookmarks that's fine as long as it kills the phantom ones b/c I can still overwrite with XMarks. Thanks!
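
    If a brute-force wipe is acceptable, the data behind this is plain profile files on disk (a sketch; quit Chrome first, and the paths assume the default profile location on OS X). Note the location bar draws suggestions from browsing history as well as bookmarks, so the phantom entries may well live in the History file:

      # Quit Chrome before touching these. Back up, then remove the
      # files that feed the omnibox suggestions.
      cd ~/Library/Application\ Support/Google/Chrome/Default
      mkdir -p ~/chrome-profile-backup
      cp Bookmarks Bookmarks.bak History ~/chrome-profile-backup/ 2>/dev/null
      rm -f Bookmarks Bookmarks.bak History

    On the next launch Chrome recreates the files empty, and XMarks can then repopulate the bookmarks.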


  • Linux Flash Player with 2 Monitors: always full-screen on primary monitor

    - by CarlF
    My setup at home uses a laptop, with a larger external monitor in addition to the built-in LCD panel, which is primary. I can see the larger monitor from the rest of the room and use it as my TV, for playing DVDs and various types of web video. However, it isn't ideal for Flash video. For instance, if I watch a video from Hulu or any other Flash-based site, I can expand it to full-screen mode. However, no matter which monitor the browser window is on, the full-screen mode is always on the laptop LCD panel, which is both too small and not visible from most of the room. Does anyone know of a way to force the Flash video to play full-screen on the monitor I select instead of the primary? My video chipset is NVidia, using kernel 2.6.31 (Ubuntu). Thanks.


  • Shut Down took way too long because of "Background Programs"

    - by Christopher Chipps
    I tried shutting down my desktop PC (with Windows 7), but after several attempts (like 4 or 5) at Start -- Shut Down, the GUI was still there and it was not shutting down. I didn't think any programs were running when I pressed Shut Down, so I opened Task Manager (Ctrl + Alt + Del) to check the processes. Once I did that, a screen appeared with a message stating that "background programs" were still running, and it gave me an option to "Force Shut Down", which I pressed, and it shut down normally. Does anyone know why this would happen?


  • LVS TCP connection timeouts - lingering connections

    - by Jon Topper
    I'm using keepalived to load-balance connections between a number of TCP servers. I don't expect it matters, but the service in this case is rabbitmq. I'm using NAT type balancing with weighted round-robin. A client connects to the server thus:

      [client] ------a------ [lvs] ------b------ [real server]

    If a client connects to the LVS and remains idle, sending nothing on the socket, the connection eventually times out according to the timeouts set using ipvsadm --set. At this point, connection 'a' above correctly disappears from the output of netstat -anp on the client, and from the output of ipvsadm -L -n -c on the LVS box. Connection 'b', however, remains ESTABLISHED according to netstat -anp on the real server box. Why is this? Can I force LVS to properly reset the connection to the real server?
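
    For reference, a sketch of the knobs involved (as far as I can tell, LVS simply drops an expired entry from its table without sending a RST to either end, which would explain why the real server never notices; the values below are illustrative):

      # On the LVS box: show and set the tcp / tcpfin / udp timeouts (seconds).
      ipvsadm -L --timeout
      ipvsadm --set 900 120 300

      # On the real server: let TCP keepalives reap sockets the LVS has
      # forgotten about. These sysctls only affect sockets that enable
      # SO_KEEPALIVE (rabbitmq can be configured to do so).
      sysctl -w net.ipv4.tcp_keepalive_time=600
      sysctl -w net.ipv4.tcp_keepalive_intvl=30
      sysctl -w net.ipv4.tcp_keepalive_probes=5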


  • Forcing logon to Air Watch server upon joining wifi

    - by DKNUCKLES
    I'm setting up a wireless controller that I would like to leave unsecured. When a user connects to this network, they need to be forwarded to a specific page where they can authenticate with the Air Watch system they have in place. Once authentication takes place, a profile will be downloaded to their device and we can administer the devices accordingly. I'm mulling over how I can force the page onto the user when they connect. The methodology I'm thinking of is creating a NAT rule for that VSC that would forward all port 80 and 443 traffic to the AirWatch server. Once they authenticate, a profile will be downloaded which will connect the devices to a Virtual Access Point whose SSID isn't broadcast. Is this methodology correct, or can someone think of an easier / more efficient way of accomplishing this? The controller is an HP MSM720, for what it's worth.


  • DBAN not working because disk has bad sectors? [migrated]

    - by canadiancreed
    I'm attempting to wipe the drive of a laptop before it's sold, and I normally use DBAN to do so. However, this time it starts and then finishes instantly with the following message:

      DBAN finished with non-fatal errors.
      This is usually caused by disks with bad sectors.

    I have tried multiple flags, such as noverify, to force it to skip this check (the OS scan in Windows doesn't show bad sectors), but the error always comes back. This is the only time I've seen this message; every other drive I've used this software on usually takes 3-5 hours to do its job.
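
    If DBAN keeps aborting, one workaround (a sketch, destructive by design) is to do the overwrite from any Linux live CD with badblocks, which, unlike a straight dd, is built to keep going past bad sectors:

      # THIS DESTROYS ALL DATA on the target disk -- check the device
      # name first. -w does a destructive 4-pattern write test, -s shows
      # progress, -v reports the bad sectors it finds.
      badblocks -wsv /dev/sdX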


  • rsync command deletion error "IO error encountered -- skipping file deletion"

    - by Jam88
    I use an rsync command to take backups of files from one of my Ubuntu servers to another Ubuntu machine. The backup server triggers a script that uses this rsync command:

      rsync -rltvh --partial --stats --exclude=.beagle/ --exclude=.* --delete-after \
          root@live_server:/home/ /home/live_server_backup/home > /tmp/logfile.log 2>&1

    live_server is ssh-able without a password, so this works. Now, the problem is with the --delete-after option. After all files are synced, the deletion step is skipped at the end. The log file shows an error like:

      IO error encountered -- skipping file deletion

    When I looked through the log, there were also some errors during the file sync:

      rsync: send_files failed to open "/home/xyz/Desktop/PPT_session_1_context.pdf": Permission denied (13)

    So my understanding is that because rsync could not read all the files from the source, it skips the file deletion for safety. Is there any way to make --delete-after work even when there is a permission error? I do not want to use forced deletion, as it would be dangerous in some situations.
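
    For what it's worth, rsync has a switch aimed at exactly this case: --ignore-errors makes the delete phase run even when I/O errors occurred. That is the forced deletion the question is wary of, so the sketch below also shows the safer alternative of excluding the unreadable paths so the run finishes cleanly and deletion proceeds on its own:

      # Option 1: force deletion despite I/O errors (use knowingly).
      rsync -rltvh --partial --stats --delete-after --ignore-errors \
          root@live_server:/home/ /home/live_server_backup/home

      # Option 2: exclude the paths that fail with "Permission denied",
      # e.g. the Desktop directory from the log, until the underlying
      # permissions are fixed on the source.
      rsync -rltvh --partial --stats --delete-after \
          --exclude='xyz/Desktop/' \
          root@live_server:/home/ /home/live_server_backup/home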


  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third party applications hard coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0. I've tried rebuilding the /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers. Also, it's not clear if i86pc platforms use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to cause devfsadm to start numbering from 0 again (or force certain devices in the tree to a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and rdsk to point to the correct /devices. I feel like I am going way off path here. Any suggestions? Thanks


  • Exchanged HDD in MacBook Pro - OSX installation disk shows prohibitory sign

    - by Hedge
    I exchanged the HDD in my 2007 MacBook Pro and removed the DVD drive because it was making a terrible noise every time I booted the MacBook. The new HDD is a Corsair Force F120 SATA SSD. Every time I try to launch an OSX Lion installation disk or USB stick, I get the grey prohibitory sign and the machine shuts down after a while. Since I didn't format the SSD beforehand, there is still Windows 7 on it: it shows the white progress bar with the message "Windows is loading files" but never finishes. I don't want Windows on that machine, I just thought this fact may be important. Any ideas what is wrong?


  • Pass quoted argument string to Start-Process in PowerShell

    - by Luke Puplett
    Hello. I'm trying to very simply run an executable and pass a file path in as the first argument. In DOS, this would be the command:

      import.exe "C:\Some Path\With\Spaces.txt"

    By placing the path in quotes, the path is correctly parsed as the first token into the executable. In PowerShell I'm using this:

      $feeds = dir 'T:\Documents\Company\Product Development\Data'
      foreach ($feed in $feeds) {
          start -FilePath import.exe -ArgumentList $feed.FullName
      }

    The problem with the Start-Process cmdlet in PowerShell is that it uses a string array for arguments, so the path is broken up and sent to the executable as separate tokens. Quotes in PowerShell force $feed.FullName to be treated literally, and double quotes "" make PowerShell not see anything in the argument list ("The argument is null or empty." it tells me). I expect that this is a known headache and has had a known workaround from day one. Thanks, Luke
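
    One workaround (a sketch, assuming import.exe just needs the path as a single quoted token) is to embed literal double quotes in the argument string yourself, so Start-Process passes the whole path through as one argument:

      $feeds = Get-ChildItem 'T:\Documents\Company\Product Development\Data'
      foreach ($feed in $feeds) {
          # The -f format operator wraps the path in embedded double
          # quotes, so spaces no longer split it into separate tokens.
          Start-Process -FilePath import.exe -ArgumentList ('"{0}"' -f $feed.FullName)
      }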


  • GNU/Linux: SAS-disk detected as /dev/sg7 - not as /dev/sdb

    - by Ole Tange
    I have just installed a SAS disk into a Debian server. It was detected correctly and everything was fine. Then I moved the SAS disk to a different Debian server, the same hardware model and running the same version of Debian, but here the SAS disk is detected as /dev/sg7 and not /dev/sdb. smartctl -a /dev/sg7 works fine, but fdisk and cat hang. I tried putting the SAS disk in another slot: same problem. How can I force the SAS disk to be detected as /dev/sdb?

      # uname -a
      Linux maxwell 3.2.0-4-amd64 #1 SMP Debian 3.2.41-2+deb7u2 x86_64 GNU/Linux
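
    A generic sketch for this class of problem (not specific to the hardware here): /dev/sgN is just the raw SCSI pass-through node, and a disk only gets an sdX name once the sd driver binds to it, so the question is why sd skipped it on this box. lsscsi shows how the kernel classified the device, and a host rescan sometimes makes sd pick it up:

      # Show SCSI devices with both their sg and block (sdX) names, if any.
      lsscsi -g

      # Force a rescan of every SCSI host; "- - -" means all channels,
      # targets and LUNs. Then watch the kernel log for a new sdX.
      for h in /sys/class/scsi_host/host*/scan; do
          echo "- - -" > "$h"
      done
      dmesg | tail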


  • Utility to unmap a network drive when the screen saver starts

    - by JimR
    I'm looking for a way to unmap network drives when the screen saver turns on. I have a few users that share an external, encrypted drive (Samba share, not windows) and they have a requirement to disconnect the drive mapping when the local machine is idle. I'd also like it to warn them if there are open files on the mapped drive, if possible. There is also a requirement to force the password to be reentered before mapping when the machine comes back from idle. Is there a Windows setting or utility out there in the wild that meets these requirements?
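
    I don't know of a ready-made utility, but as a sketch of the moving parts (drive letter, paths and timing are placeholders): the disconnect itself is a one-line net use, and Task Scheduler can fire it on idle; a plain net use without /y will also stop and warn when files are open on the mapping:

      :: unmap.cmd -- drop the mapping; /y suppresses the open-files
      :: prompt, so leave it off if the warning is wanted.
      net use Z: /delete /y

      :: Register it to run after 10 idle minutes.
      schtasks /create /tn "UnmapOnIdle" /tr C:\scripts\unmap.cmd /sc onidle /i 10

    Forcing the password to be re-entered on re-mapping would then be a matter of mapping with explicit credentials rather than saved ones.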


  • ssh auto-forward?

    - by Jakobud
    I'm not sure if there is a specific name for this type of activity, but anyway: when a user ssh's into machine1, is there some way to force them to automatically ssh to machine2? For example, if user Bob ssh's into server1, can you set something up on server1 so that when Bob successfully logs in, he automatically ssh's to server2, and so he only has access to server2 and isn't really able to do anything on server1? Also, when he exits, it would disconnect him from server2 but also from server1. Does that make sense? Is that possible?
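
    A sketch of two standard OpenSSH ways to do this (assuming server1 runs OpenSSH 4.4 or later, and Bob can authenticate from server1 to server2, e.g. with keys or agent forwarding):

      # Option 1: in /etc/ssh/sshd_config on server1. Every login (and any
      # requested command) from bob is replaced by an interactive ssh to
      # server2; exiting server2 ends the server1 session too.
      Match User bob
          ForceCommand ssh -t server2

      # Option 2: per key, in bob's ~/.ssh/authorized_keys on server1:
      command="ssh -t server2" ssh-rsa AAAA...rest-of-bobs-key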


  • Config xampp never to cache pages from localhost

    - by Michael Mao
    Hello everyone: I am not a webmaster, and that was exactly the reason why I installed XAMPP 1.7.2 on Windows XP SP2 instead of manually configuring Apache, MySQL and PHP to cooperate with each other. Now I am having trouble disabling the caching of pages from localhost. Some suggested just forcing the browser not to cache, using the Firefox Web Developer toolbar or something similar, but I feel it would be better if I could configure the Apache server in XAMPP never to allow pages from localhost to be cached. I guess this is done somewhere in httpd.conf?

      LoadModule cache_module modules/mod_cache.so

    Would this module be helpful in this case? Doc here: mod_cache. I am not very sure this would resolve the problem. Could anyone confirm whether this approach is feasible? I'd like to work it out myself, given the fact that I am on the right track... Many thanks in advance
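
    For what it's worth, mod_cache governs Apache's own server-side cache, not what the browser stores; browser caching is steered by response headers. A minimal sketch using mod_headers instead (bundled with XAMPP's Apache; make sure the "LoadModule headers_module modules/mod_headers.so" line in httpd.conf is uncommented):

      <Directory "C:/xampp/htdocs">
          # Tell browsers and proxies never to reuse a stored copy.
          Header set Cache-Control "no-cache, no-store, must-revalidate"
          Header set Pragma "no-cache"
          Header set Expires "0"
      </Directory>

    Restart Apache from the XAMPP control panel afterwards.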


  • How do you get SharePoint back in sync when you change a user's sAMAccountName?

    - by Kirk Liemohn
    I have observed on SharePoint 2010 that if you change the sAMAccountName of a user after the user has logged into a site collection, the tp_Login field in the UserInfo table does not get updated; it still has the old user ID. While the user can log into SharePoint under the new account, these new logins do not update the table. I have code that reads SPUser.LoginName, and this value appears to be the tp_Login field value, which is now stale. The staleness of this value causes my code to fail. I suspect this behavior is identical in SharePoint 2007. Is there any way to force SharePoint to recognize the new sAMAccountName? I suspect that profile synchronization might help, but I would like my solution to work with WSS 3.0 and SharePoint 2010 Foundation. I considered manually updating the database table, but I would like to stick with supported approaches.
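
    One supported mechanism worth testing (a sketch; confirm against your version's documentation): both WSS 3.0 and SharePoint 2010 Foundation ship a user-migration operation that rewrites the stored login across the farm, which should bring tp_Login back in line:

      stsadm -o migrateuser -oldlogin DOMAIN\oldname -newlogin DOMAIN\newname -ignoresidhistory

    On 2010 the PowerShell equivalent is Move-SPUser with -NewAlias (and -IgnoreSID), operating on the SPUser returned by Get-SPUser.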


  • DPMS, keep screen off when lid shut

    - by Evan Teran
    I have a laptop running Linux. In my xorg configuration, I have DPMS set up so that the screen automatically turns off on several events. In addition to that, I have the following script tied to ACPI lid open/close events:

      #!/bin/sh
      for i in $(pidof X); do
          CMD=$(ps --no-heading $i)
          XAUTH="$(echo $CMD | sed -n 's/.*-auth \(.*\)/\1/p')"
          DISPLAY="$(echo $CMD | sed -n 's/.* \(:[0-9]\) .*/\1/p')"
          # turn the display off or back on
          export XAUTHORITY=$XAUTH
          /usr/bin/xset -display $DISPLAY dpms force $1
      done

    Basically, this script takes one parameter ("on" or "off"), then iterates through all of my running X sessions and either turns the monitor off or back on. Here's my issue: when I close the lid of the laptop, the screen goes off as expected, but if a mouse event occurs (like if something bumps into the table...) the screen turns back on even though the lid is closed (I can see the light through the side of the laptop). Is there a way to prevent the screen from turning on during a mouse event if the lid is closed?
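
    Since DPMS wakes the display on any input by design, one crude but workable sketch is a watchdog kicked off by the lid-close event: while the lid stays closed, it keeps forcing the display back off, so a stray mouse event only wakes the panel for a few seconds. The lid-state path below is the usual one on kernels of this era; adjust as needed:

      #!/bin/sh
      # Run from the lid-close ACPI handler, with DISPLAY/XAUTHORITY
      # set up the same way as in the script above.
      while grep -q closed /proc/acpi/button/lid/*/state; do
          /usr/bin/xset -display "$DISPLAY" dpms force off
          sleep 5
      done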


  • Group Policy Task Schedule deployed to User Configuration not working, works when in Computer Configuration?

    - by user80130
    I added a Scheduled Task on my Windows 2008 R2 domain controller in the Group Policy Manager, under MyDomain Policy > User Configuration > Preferences > Control Panel Settings > Scheduled Tasks: a basic task, like starting Notepad when the user unlocks his workstation. This should show up in the client workstation's Task Scheduler, but it doesn't. No errors or anything like that. If I use "Computer Configuration" instead of "User Configuration", the task appears and I'm able to run it. I've tried gpupdate /force followed by gpresult and checked the report, but it doesn't contain the GPO Scheduled Task I created (again, it does show up when using "Computer Configuration"). The issue is that I have to run the application in the current user's context, and only on a specific Employee OU, thereby limiting this task to employee workstations and not applying the application when the same employee logs on to internal servers and such. The primary domain controller is Windows 2008 R2; workstations are Windows 7 Enterprise. What am I doing wrong?


  • Keep-Alive header not sent from Tomcat 5.5 http connector?

    - by Codek
    Hi, we're currently using a hardware load balancer, which then goes to Apache, and that then goes to Tomcat 5.5 via the AJP connector. We've decided to dump Apache for various reasons; in our current system it doesn't provide any advantage. However, when I look at the headers sent when we do this, the "Keep-Alive: timeout=15 max=96" header doesn't get sent when you use the Tomcat HTTP connector. Interestingly, I can find no documentation on "keepAliveTimeout" for Tomcat 5.5, but I can for Tomcat 6. But neither can I find evidence that Tomcat 5.5 doesn't support this setting. Here's my connector:

      <Connector port="8090" maxHttpHeaderSize="8192"
                 maxThreads="400" minSpareThreads="150" maxSpareThreads="300"
                 enableLookups="false" connectionTimeout="2"
                 maxKeepAliveRequests="400" disableUploadTimeout="true" />

    So: is there any way I can specify the keep-alive timeout if we use the HTTP connector with Tomcat 5.5, and force this header entry to be sent? Thanks, Dan
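
    For comparison, a sketch of the documented Tomcat 6 form: keepAliveTimeout is given in milliseconds and defaults to the connectionTimeout value. Whether 5.5 silently honours the attribute is exactly the open question, so treat this as something to test rather than a confirmed fix (values here are illustrative). Also, as far as I know, the stock Tomcat connector never emits the "Keep-Alive: timeout=... max=..." response header itself; that header was Apache's doing:

      <Connector port="8090" maxHttpHeaderSize="8192"
                 maxThreads="400" enableLookups="false"
                 connectionTimeout="20000"
                 keepAliveTimeout="15000"
                 maxKeepAliveRequests="96"
                 disableUploadTimeout="true" />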


  • Joining an Active Directory domain using netdom

    - by Cheezo
    I have a simple script to join an AD domain and rename the computer. When I execute these commands directly on the CLI, it works fine. When I execute the same via a batch file, I get an error saying "The network path was not found". I am running as Administrator with full privileges. I have googled around the Microsoft forums, but my case is unique because it works from the CLI and not from the batch file.

      netdom join %%computername%% /domain:OPSCODEDEMO.COM /userd:Administrator /passwordd:xxx
      netdom renamecomputer %%computername%% /NewName:%hostname% /Force

    The environment is Windows 2k8 R2 SP1 running on Ninefold Cloud (XenServer).
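
    This looks like a quoting bug rather than a network problem: in a batch file, %% is the escape for a literal percent sign, so netdom is handed the literal string %computername% instead of the machine name, while the same line typed interactively (with single percents) expands properly. A sketch of the corrected script:

      :: Inside a .bat/.cmd file, environment variables still use single
      :: percents; %%x is only for "for"-loop metavariables.
      netdom join %computername% /domain:OPSCODEDEMO.COM /userd:Administrator /passwordd:xxx
      netdom renamecomputer %computername% /NewName:%hostname% /Force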


  • WHS - client does not shut down after backup

    - by justjoshingyou
    I have my WHS/Win 7 laptop configured to do automatic backups through the connector. The backups run just fine every night. I put my computer in hibernate when I go to bed, it wakes and runs the backups, but then it doesn't go back into hibernate or shutdown. I'd like to be able to force it to shutdown after the backups are completed. I don't see a scheduled task in the system for the WHS backups. Any idea how I can get my laptop to shutdown after the backups are complete? Thanks.


  • Google Chrome app file opened is not always in front, receiving focus

    - by Philipp Lenssen
    On Windows Vista, I'm launching files associated with a Google Chrome "Application Shortcut" app by double-clicking them. While they will always open this way, they will sometimes not receive focus -- i.e. they sit in the task bar quietly but don't actually show in front on the screen (until I click on the app in the task bar again). Does anyone know how I can force the just-opened app to receive focus? The app being opened actually contains a web app I can customize, so I also have access to performing JavaScript commands post-startup. For background information, here is the regedit file I originally used to create the file association. Might well be this is a Google Chrome internal issue... apps I open with other editors get their focus alright, AFAIK. Thanks!
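
    Since the web app can run its own JavaScript post-startup, one long-shot sketch is to request focus from inside the page; browsers frequently ignore this (focus-stealing prevention, likely the same mechanism keeping the window in the task bar here), so treat it as an experiment:

      // Ask the browser to focus this window once the app has loaded.
      // A silent no-op wherever focus-stealing prevention blocks it.
      window.addEventListener('load', function () {
        window.focus();
      });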


  • Robocopying DLLs and EXEs

    - by BinaryDeuce
    I've used Robocopy for several years now for backup purposes, and never looked back at any backup application I used in the past. I replicate whole valued directories to a removable HD, then from this HD to an equivalent system. Over time, quite a few DLLs and EXEs have accumulated in these directories, none of which Robocopy seems able to copy to my external HD. Thus, my two computers drift apart slowly... Is there any way, using one of the eighty-nine (89) switches (or one of the 2^89 - 1 = 6.1897002 × 10^26 combinations thereof), to force Robocopy to robotically copy EXEs, DLLs and others of my "access is denied" friends? Thanks
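
    One of those eighty-nine switches is aimed at exactly this (a sketch; run it from an elevated prompt, since backup mode relies on the Backup Files privilege):

      :: /ZB tries a normal copy first and falls back to backup mode,
      :: which reads files regardless of "access is denied" ACLs; /B
      :: forces backup mode outright. /R and /W tame the huge default
      :: retry counts.
      robocopy C:\data X:\backup /E /ZB /R:2 /W:5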


  • HP ProLiant Smart Array "lock up" code 0x11

    - by ewwhite
    I've a ProLiant DL580 G7 server that experienced a storage subsystem failure during production. The system appeared available and responded to pings, but all I/O access stalled (the system load must have been 100+). The ASR did not trigger at the specified watchdog timeout, and I had to force a reboot from the iLO. During POST, I received the following error:

      A controller failure event occurred prior to this power-up.
      (Previous lock up code = 0x11)

    I haven't pulled the ADU report yet, but I'm curious as to what this error actually means. I was not responsible for the installation, but I can see that the firmware is very old. If there's anything else I should know about the error, I'd like to include it in the post-mortem report. edit - I should add that the server had 95 days of uptime prior to the lock-up.

