Search Results

Search found 9335 results on 374 pages for 'random thoughts'.

Page 255/374

  • Why do I get this message from Chrome when navigating to https://www.amazon.com?

    - by Denis
    This is probably not the site you are looking for! You attempted to reach www.amazon.com, but instead you actually reached a server identifying itself as *.voxcdn.com. This may be caused by a misconfiguration on the server or by something more serious. An attacker on your network could be trying to get you to visit a fake (and potentially harmful) version of www.amazon.com. Intermittently, I get a blank page when going to http://www.amazon.com. So I stuck an 's' in the URL, making it https://www.amazon.com, and got that message above (with the nice red screen) from Chrome, indicating there might be some monkey business going on. After hammering on the URL a bunch of times and pulling it up in Chrome's developer tools to look at the network traffic, the URL (without the s) started behaving. The URL with the s just hangs, but the red screen no longer comes up. Some specs: I've got a MacBook Pro, Snow Leopard, and Time Warner cable. I've had enough strange stuff happening over the past couple of months (google.com, youtube.com, and amazon.com not coming up, or loading strange error messages with random reference numbers) that I finally decided to switch to OpenDNS. Still having problems, though.
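
    One way to narrow this down (not from the original post) is to compare what different resolvers return for www.amazon.com: if the answer from the resolver the laptop is actually using points at a *.voxcdn.com host while other resolvers return Amazon addresses, the problem is upstream DNS rather than the machine itself. A rough sketch, assuming dig is available (it ships with OS X); the address in the reverse lookup is a placeholder for whatever IP Chrome's network panel shows:

        # What does the currently configured resolver say?
        dig +short www.amazon.com
        # Ask OpenDNS and Google Public DNS directly for comparison
        dig +short www.amazon.com @208.67.222.222
        dig +short www.amazon.com @8.8.8.8
        # Reverse-lookup the address the browser actually reached (placeholder IP)
        dig +short -x 203.0.113.10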

    Read the article

  • Cannot Delete Item "Could Not Find This Item" issue

    - by aronchick
    A friend sent along a file (a .rar) he wanted me to check out for him before he installed it. I downloaded it and unrared it with no problems, but it was full of .exe's instead of the intended contents (fonts), so I advised him to delete it immediately and not use it. I then proceeded to do the same, but the folder simply will not delete. Oddly, the files deleted fine, and I never ran anything, but this is what I'm seeing: "Could not find this item. This is no longer located in C:\Users\This_User\Desktop. Verify the item's location and try again." I've tried the following things with no help: using "Unlocker" to unlock and delete; using move-on-reboot and rebooting; using PendMoves (from Sysinternals) and rebooting; elevating a cmd line, doing a dir /x to get the short name of the folder, and then del 'shortna~1'; moving the folder to a new folder and then trying to delete the parent folder. I'm on Windows 7 RTM, a very fresh install. Any thoughts? Update: Just to confirm, I've run HijackThis and half a dozen other malware detectors, and everything came back clean (no extra processes, no other obvious badness). Rebooting in safe mode didn't help either.
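
    One trick that sometimes clears phantom folders like this (not mentioned in the post) is deleting through the \\?\ extended-length path prefix, which bypasses the normal Win32 name parsing that chokes on trailing spaces or dots. A sketch from an elevated command prompt; the folder name is a placeholder and should be copied exactly as dir /x shows it:

        rem Placeholder path -- substitute the real folder name, quoted exactly as it appears
        rd /s /q "\\?\C:\Users\This_User\Desktop\Bad_Folder"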

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveler software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count the last time was up at about 330, and when I killed it and restarted Domino my process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and in the Domino log it keeps telling me "stpolicy.exe" has terminated. I've googled for that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • What causes Windows Media Player on Windows 8 to not play the entire library?

    - by somequixotic
    Behavior 1: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button (double right angle brackets). A random song from anywhere in the Library is chosen and played. When observing the Playlist (clicking the "Play" tab), the entire contents of the Library appears in the Playlist.

    Behavior 2: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button (double right angle brackets). The button visually depresses like it has registered the click, but nothing happens. Absolutely nothing. Moreover, the "Previous" button is grayed out. When observing the Playlist, only the one song that was double-clicked appears in the Playlist.

    What causes Behavior 2? I cannot correlate any specific action I've taken with Behavior 2, and Behavior 1 has been the case as long as I can remember, all the way back to Windows XP. Even earlier during my usage of Windows 8, I recall Behavior 1 working correctly. But suddenly, inexplicably, without changing any settings in WMP, Behavior 2 kicked in, and it persists after reboots. I've tried sfc /scannow in an administrator prompt; all system files are in order. I've downloaded all Windows Updates and driver updates. I've attempted to alter WMP options and playback settings to no avail. So... what is causing Behavior 2? Is this an intended, valid behavior, or is something malfunctioning? How would I know what that "something" is? How would I go about fixing it without just reinstalling Windows 8 fresh?

    Read the article

  • File sharing for small, distributed, non-technical, non-profit organization?

    - by mnmldave
    Problem: I've started volunteering for a small non-profit with fewer than five non-technical Windows users who need to share 20-30GB of files (Office documents, images, PDFs, etc.) amongst themselves online. Background: The users are accustomed to a Windows network share on a machine that backed up their data locally. An on-site "disaster" has forced them to work from their homes for a while and to re-evaluate their file sharing needs (the office was located in an old building with obvious electrical issues, etc.). Access to time from volunteers with IT experience seems to be difficult. Demonstrably minimizing energy consumption is a nice-to-have. I'm currently considering Jungle Disk (a Desktop account shared amongst the handful of employees, since their TOS and my inquiries to their helpdesk seem to indicate this is permissible). It appears easy to use, inexpensive, and secure, has backup functionality, and can scale to accommodate more data when needed. I've not used it myself, though (I have only used Dropbox for personal use), and systems isn't my area of expertise, so I'm worried I might be jumping on a bandwagon. That said, any suggestions, thoughts or similar experiences would be really appreciated.

    Read the article

  • Apache randomly loses permission to see files

    - by arbales
    I have a server (Leopard Server, not my choice) running Apache and MySQL. Several months ago, the server began to raise "Forbidden" errors at random intervals, preventing access to a PHP application. This behavior randomly ceased. Now, several days ago, I installed Passenger and deployed a Sinatra/Rack application. The application runs as a user acarneg (for example) from /Library/WebServer/Documents/presto/current/public; acarneg owns the entire structure. The _www user has access to the directory via ACL, chmod +a "_www allow read,write,...". Everything works great! But after a randomish interval, often ~12 or ~24 hours, Passenger throws an error that also prevents the PHP application from running: Passenger Error #2. Cannot stat file config.ru. Permission denied. But the permissions haven't changed (confirmed), and all one has to do to resolve the error is sudo apachectl graceful. If the permissions aren't changing and Apache doesn't seem to have a legit problem, what is causing this mess? Why did it stop before, and why has it resumed!?!?!? Thanks for the help!
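
    To rule out ACL drift between restarts, it can help to dump the ACL Apache depends on right after the error appears and, if the entry is missing, reapply it over the whole deploy tree. A minimal sketch using the paths from the question; the exact permission list is an assumption and may need adjusting:

        # Does the _www ACL entry still exist on the file Passenger cannot stat?
        ls -le /Library/WebServer/Documents/presto/current/public/config.ru
        # Reapply it recursively if it has gone missing
        sudo chmod -R +a "_www allow read,list,search,execute" /Library/WebServer/Documents/presto/current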

    Read the article

  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc. But when I put it behind Nginx, git commands stop working, giving no errors except: "fatal: The remote end hung up unexpectedly". I know that it's Nginx that's causing the trouble, because if I open the port that Tornado is running on and try my git commands through that (i.e. "git pull \http://mysite.com:8000/myrepository master" instead of "git pull \http://mysite.com/myrepository master" [backslashes added because Server Fault says I have too many links]), everything works as expected. The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure that it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx. My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behavior I need to suppress, I'm just not sure which. Any thoughts on how to fix this? Since it works when not going through Nginx, I'm considering just redirecting git requests to the Tornado port itself, but this feels like a hack rather than a clean solution...
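
    The usual suspects for git Smart HTTP behind Nginx are response buffering, the default 1 MB request body limit (which kills pushes), and short proxy timeouts. A sketch of a proxy block under those assumptions, with the backend port taken from the question; the directive values are illustrative, not tested against this exact setup:

        location / {
            proxy_pass           http://127.0.0.1:8000;
            proxy_set_header     Host $host;
            proxy_set_header     X-Real-IP $remote_addr;
            proxy_buffering      off;    # stream git's responses instead of buffering them
            client_max_body_size 0;      # pushes can be large; don't cap the request body
            proxy_read_timeout   300;    # long clones/pushes need a generous timeout
        }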

    Read the article

  • Linux Port 80 to redirect to a Windows box

    - by Richard Staehler
    I have 2 servers here at work. One is a Windows 2008 Server R2 box (for safety's sake, let's use 192.168.1.100) and the other is a Fedora 14 box (192.168.1.101). Currently, when you hit our subdomain, x.test.com, our routers tell it to go to our Fedora box, and since Apache is installed and listening on port 80, it displays the Fedora Apache test page. It's obvious that I don't use port 80 for this machine; however, I do use Nagios on it, and it's always nice to be able to access that from anywhere in the world. So when I want to access it, I just type x.test.com/nagios. Now here comes the dilemma... On the Windows R2 box, we recently installed a program that requires us to set up a web server using IIS7. Because of this application, I'm going to be creating a new subdomain called y.test.com, but since we only have 1 WAN/router, it will still get pointed to our Fedora box. That being said, it wants to use port 80 as well (or whatever port I damn well wish to assign it). So my question is: since our router is pointing to the Fedora 14 box (.101), and I want to make sure I can access Nagios from anywhere in the world, how do I tell Apache (httpd) to redirect port 80 to the other server (.100)? If that's not possible, what are my other options? I have rinetd installed on Fedora and have even tried the option "192.168.1.101 80 192.168.1.100 80", and it didn't seem to work "because port 80 was already bound". Thoughts? And thanks!
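
    Since both subdomains arrive on the same public IP, one option (not from the post) is to leave Apache on port 80 and let it reverse-proxy y.test.com to the Windows box by name, which keeps x.test.com/nagios reachable. A sketch, assuming mod_proxy/mod_proxy_http are loaded and using the addresses from the question:

        <VirtualHost *:80>
            ServerName y.test.com
            ProxyPreserveHost On
            # hand everything for y.test.com to IIS on the Windows box
            ProxyPass        / http://192.168.1.100/
            ProxyPassReverse / http://192.168.1.100/
        </VirtualHost>

    x.test.com keeps its existing VirtualHost (and /nagios), so nothing on the Fedora box has to give up port 80.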

    Read the article

  • Make router forward HTTP and HTTPS traffic to external App

    - by cOsticla
    I use a Linksys WRT54GL router with DD-WRT v24-sp2 (10/10/09) std (SVN revision 13064), which I am trying to make forward all HTTP and HTTPS traffic to an external app called Fiddler (used as a proxy) on port 8888. After a lot of digging on this site, the dd-wrt forum, dd-wrt.com and the WWW, I am stuck with the following piece of code that works (thanks to the guys from dd-wrt support for this info), but only for forwarding HTTP traffic (port 80):

        #!/bin/sh
        PROXY_IP=1234567890
        PROXY_PORT=8888
        LAN_IP=`nvram get lan_ipaddr`
        LAN_NET=$LAN_IP/`nvram get lan_netmask`
        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp --dport 80 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp --dport 80 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    I tried to edit the code from above and came up with the following, but it still forwards only HTTP traffic, not HTTPS:

        #!/bin/sh
        PROXY_IP=1234567890
        PROXY_PORT=8888
        LAN_IP=`nvram get lan_ipaddr`
        LAN_NET=$LAN_IP/`nvram get lan_netmask`
        iptables -t nat -A PREROUTING -i br0 -s $LAN_NET -d $LAN_NET -p tcp -m multiport --dports 80,443 -j ACCEPT
        iptables -t nat -A PREROUTING -i br0 -s ! $PROXY_IP -p tcp -m multiport --dports 80,443 -j DNAT --to $PROXY_IP:$PROXY_PORT
        iptables -t nat -I POSTROUTING -o br0 -s $LAN_NET -d $PROXY_IP -p tcp -j SNAT --to $LAN_IP
        iptables -I FORWARD -i br0 -o br0 -s $LAN_NET -d $PROXY_IP -p tcp --dport $PROXY_PORT -j ACCEPT

    I am not sure if it is possible to forward HTTPS traffic anymore by just using a router, so I'd appreciate it if somebody would share their thoughts and/or examples regarding this subject here. Thanks!

    Read the article

  • SharePoint solution package not deploying to all front-ends

    - by Alex
    I have a WSP that contains a web part. It's being built using WSPBuilder. Most of the time, the WSP deploys perfectly. However, in two of our test environments (and sadly, in production, too) the WSP doesn't deploy properly to all the web front ends. The assemblies make it into the GAC, and the .webpart files get provisioned. The problem is that a tool part that the web part relies on for configuration simply fails to appear. I've determined that every time this has happened, it has been isolated to a single web front end. I've been able to resolve the issue by doing an stsadm -o deploysolution to re-deploy the solution, and in one instance it was resolved by the end user deactivating/reactivating the feature. Unfortunately, though, this has made it impossible to determine if the control isn't being deployed properly, or if it's some other issue. Any thoughts on this? Could it be a problem with the WSP, or is it likely to be environmental?
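
    When a WSP lands on only some front-ends, the timer service not running on the affected WFE is a common culprit, so a re-deploy sequence along these lines is often suggested. A sketch only; the solution name is a placeholder, and the timer service name depends on the version (SPTimerV3 for WSS 3.0/MOSS 2007, SPTimerV4 for 2010):

        rem On the affected front-end: make sure the timer service is running
        net start SPTimerV3
        rem From any farm server: force a redeploy and flush the pending timer jobs
        stsadm -o deploysolution -name MyWebPart.wsp -immediate -allowgacdeployment -force
        stsadm -o execadmsvcjobs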

    Read the article

  • Acronis Disk Director AFTER Clone Disk error: PXE-E61: Media test failure, check cable

    - by Kairan
    I used Acronis Disk Director on my desktop, plugged in the laptop drive, a 240GB SSD (USB), and the new hard drive, a 500GB SSD (USB), and the copy seemed to be fine. I didn't see any error messages, but I didn't stare at it for 3 hours either. The clone of course included the Toshiba hidden restore partition, the primary partition (the C drive) and the active (boot?) partition, and yes, I did check the box to copy the NT signature. The computer boots up fine most of the time, but it seems that when the computer goes to sleep (I believe it's sleep, hard to do much testing during school), hibernates or reboots, it will sometimes display this message: Intel(R) Boot Agent GE v1.3.52 Copyright (C) 1997-2010, Intel Corporation PXE-E61: Media test failure, check cable PXE-M0F: Exiting Intel Boot Agent Insert system disk in drive. Press any key when ready... Of course any key does nothing but bring up the same message again. However, if I press the power button on the laptop (Toshiba Portege R705, Win 7 Pro 64-bit) it puts the computer into hibernate. After hibernating I press the power button again and it comes out of hibernation without any of the odd messages or problems described above... so apparently that is my TEMP fix. Another recent issue I noticed is that on occasion, when creating a new folder or modifying something in the system variables or other random areas, I will get the message "The Stub received bad data"; I simply retry the task and it works. Perhaps these two issues are linked.

    Read the article

  • Windows AD DNS: Event ID 5504

    - by Chris_K
    Two of my AD controllers (both running the DNS service) appear to be having a similar issue. Both are logging lots of DNS events that look like this: Event Type: Information Event Source: DNS Event Category: None Event ID: 5504 Date: 5/24/2010 Time: 11:51:38 AM User: N/A Computer: ALPHA Description: The DNS server encountered an invalid domain name in a packet from 76.74.137.6. The packet will be rejected. The event data contains the DNS packet. That will come with the same event, at the same time, with a packet from 76.74.137.7 as well. I know this is "Information", not an error, but since it is new and different it bothers me (yes, I fear unexplained change!). Both machines are running Windows 2003 R2 SP2. The DNS servers are not exposed to the internet. Both DNS servers are configured to use OpenDNS for forwarders. For both servers, this started about a week ago. Any thoughts on: 1) should I be concerned? 2) how can I stop/fix this? To keep it interesting, I have a 3rd AD/DNS box. Same domain, different Active Directory site. Same forwarders, yet it doesn't have this issue.

    Read the article

  • Wireless router blocking some sites while using ethernet is fine

    - by Micke
    I'm using Windows 7 and my router is a wireless Apple AirPort Express that is approximately two years old. Suddenly I can't access some sites (for example www.sthlm.friskissvettis.se, www.vegetarian-shoes.co.uk, some streamed TV shows on svtplay.se, and a number of other random sites) when connecting to the internet with my router. It worked well until recently, and I'm fairly sure this problem emerged when my ISP upgraded from 10/10 Mbit to 100/10 Mbit speed. Most other sites, like Facebook and Google, work fine. When using my network cable to connect to the internet, everything works fine and I can access these sites. The firmware is current and I've tried resetting the router to factory defaults. I've tried different browsers, and I can't ping the "blocked" sites either. Tracert www.sthlm.friskissvettis.se starts with 10.0.0.1 and continues through a number of long addresses until it says timeout. The last working address before the timeout was sth-tcy-ipcore01-ge-0-2-0.neq.dgcsystems.net [83.241.252.13], if it matters. Tracert www.vegetarian-shoes.co.uk also eventually gives me a timeout. When the network cable is plugged in, I still get a timeout on tracert www.sthlm.friskissvettis.se, even though I can access the site in Chrome. Weird. www.vegetarian-shoes.co.uk doesn't give me a tracert timeout when the cable is plugged in, and I can access the site as usual. I've tried changing DNS servers to use the OpenDNS servers instead, but to no avail. I've tried pinging these two sites with a lower MTU packet size (with this method: http://www.richard-slater.co.uk/archives/2009/10/23/change-your-mtu-under-vista-or-windows-7/), but still can't access them through ping... I don't know what to do anymore... any suggestions?
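
    If the suspicion is an MTU problem introduced by the speed upgrade, probing with don't-fragment pings from the Windows box shows the largest payload that gets through (only meaningful against a host that answers ICMP at all, so the router or a reachable site may be a better target than the blocked ones). A sketch; 1472 bytes of payload plus 28 bytes of headers corresponds to a 1500-byte MTU:

        ping www.sthlm.friskissvettis.se -f -l 1472
        rem If that reports "Packet needs to be fragmented but DF set", step the size down
        ping www.sthlm.friskissvettis.se -f -l 1400
        ping www.sthlm.friskissvettis.se -f -l 1350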

    Read the article

  • IIS running but not serving content

    - by Kyle
    I have an internal dev server running Windows 2k8 R2 with the Web and FTP Server roles set up which won't serve any content at all. Trying to connect from another host via telnet yields 'connection failed': c:\>telnet devserver 80 Connecting To devserver...Could not open connection to the host, on port 80: Connect failed. Using netstat -an | find "80" on the dev server returns no connections on port 80 (a few on 1801, etc.); TCPView confirms this, listing no open connections on port 80. The following services related to the Web role are running: World Wide Web Publishing Service, Application Host Helper Service, Microsoft FTP Service (FTP connections to port 21 are granted), and Windows Process Activation Service. The default website bindings are: http on port 80, IP address *; net.tcp on 808:*; net.pipe on *; net.msmq on localhost; msmq.formatname on localhost. When setting up a new application under the default site, the test function passes both connection and authorisation only if the 'connect as' user is a local admin; otherwise the test errors with 'invalid application path'. At no point is the W3SVC service PID bound to port 80 (it is running and bound to 21 for FTP). There is no W3SVC log directory at c:\inetpub\logs\LogFiles\ (only FTPSVC2), and no HTTPERR directory at c:\windows\system32\ or c:\windows\system32\logfiles\. There do not appear to be any related errors in the event logs. I'd really appreciate any thoughts on a good place to dig into what's (not) going on here!
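
    A few checks that help separate an http.sys problem from a W3SVC/binding problem (not from the post, just standard IIS triage on 2008 R2):

        rem Is anything listening on 80 at all, and which PID owns it?
        netstat -ano | findstr :80
        rem What has http.sys actually registered? A healthy site shows its URL groups here
        netsh http show servicestate
        rem URL reservations that could be blocking the binding
        netsh http show urlacl
        rem Bounce the whole web stack after checking
        net stop was /y
        net start w3svc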

    Read the article

  • OS X 10.6 Apply ipfw rules at startup

    - by Michael Irey
    I have a couple of firewall rules I would like to apply at startup. I have followed the instructions from http://images.apple.com/support/security/guides/docs/SnowLeopard_Security_Config_v10.6.pdf on page 192. However, the rules do not get applied at startup. I am running 10.6.8, NON Server Edition. I can, however, run the following, which applies the rules correctly:

        sudo ipfw /etc/ipfw.conf

    which results in:

        00100 fwd 127.0.0.1,8080 tcp from any to any dst-port 80 in
        00200 fwd 127.0.0.1,8443 tcp from any to any dst-port 443 in
        65535 allow ip from any to any

    Here is my /etc/ipfw.conf:

        # To get real 80 and 443 while loading vagrant vbox
        add fwd localhost,8080 tcp from any to any 80 in
        add fwd localhost,8443 tcp from any to any 443 in

    Here is my /Library/LaunchDaemons/ipfw.plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>ipfw</string>
            <key>Program</key>
            <string>/sbin/ipfw</string>
            <key>ProgramArguments</key>
            <array>
                <string>/sbin/ipfw</string>
                <string>/etc/ipfw.conf</string>
            </array>
            <key>RunAtLoad</key>
            <true />
        </dict>
        </plist>

    The permissions of all the files seem to be appropriate:

        -rw-rw-r-- 1 root wheel 151 Oct 11 14:11 /etc/ipfw.conf
        -rw-rw-r-- 1 root wheel 438 Oct 11 14:09 /Library/LaunchDaemons/ipfw.plist

    Any thoughts or ideas on what could be wrong would be very helpful!
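
    One thing worth checking (not in the original post): launchd refuses daemon plists that are group- or world-writable, and the listing above shows the plist as -rw-rw-r--. Tightening the permissions and loading the job by hand shows immediately whether it runs. A sketch:

        # LaunchDaemon plists must be owned by root:wheel and not group/world-writable
        sudo chown root:wheel /Library/LaunchDaemons/ipfw.plist
        sudo chmod 644 /Library/LaunchDaemons/ipfw.plist
        # Load the job once by hand and verify the rules took effect
        sudo launchctl load -w /Library/LaunchDaemons/ipfw.plist
        sudo launchctl list | grep ipfw
        sudo ipfw list    # the two fwd rules should now be listed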

    Read the article

  • How to add URL's to wiki (MediaWiki) powered documentation?

    - by Ian Boyd
    We have an internal company wiki. The wiki engine being used is MediaWiki, the wiki engine that runs Wikipedia. Some of it contains IT stuff. One of the things I want to have are hyperlinks to the various virtual machines. An example of a command, as it needs to run, is: vmrc://solo.avatopia.com:5901/Windows 2000 Server My first thought was to convert the URL into a link: [vmrc://solo.avatopia.com:5901/Windows 2000 Server] But the content renders literally as above, with the square brackets and all. Testing with other URL protocols: [http://solo.avatopia.com] [ftp://solo.avatopia.com] [ldap://solo.avatopia.com] [vmrc://solo.avatopia.com] Only the first two work and are converted to hyperlinks. The other two remain as literal text. How can I add URLs to MediaWiki-powered documentation? Original Question: We have an internal company wiki. The wiki engine being used is MediaWiki, the wiki engine that runs Wikipedia. Some of it contains IT stuff. One of the things I want to have are hyperlinks to the various virtual machines. An example of a command, as it needs to run, is: \\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/Windows 2000 Server If launching from a command prompt, you have to quote the spaces: C:\>"\\solo\VMRC Client\vmrc.exe" solo.avatopia.com:5901/"Windows 2000 Server" My first thought in converting the above for use on our wiki site was to simply HTML-ify it: file://\\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/&quot;Windows 2000 Server&quot; but MediaWiki only converts file://\solo\VMRC to a hyperlink; the remainder is text. I've tried other random things, including enclosing the URL in square brackets. What is the correct answer? I don't want to just randomly stumble on some format that happens to work today and breaks in the future.
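
    MediaWiki only turns bracketed links into hyperlinks for schemes listed in $wgUrlProtocols, so one approach is to register the extra schemes in LocalSettings.php. A sketch (the ldap entry is included only because it also failed in the test above):

        // In LocalSettings.php: register additional URL schemes
        $wgUrlProtocols[] = "vmrc://";
        $wgUrlProtocols[] = "ldap://";

    Note that a space still ends the URL part of an external link, so the space in the path needs to be written as %20, with the readable title after it, e.g. [vmrc://solo.avatopia.com:5901/Windows%202000%20Server Windows 2000 Server].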

    Read the article

  • Windows 7 BSOD Crashes

    - by Shane Andrade
    I recently upgraded to Windows 7 64 doing a clean install from Vista 64 and ever since I keep getting random blue screen crashes. I have the feeling it's caused by my video card but everything has the most up-to-date drivers for Windows 7 64 bit. Here is the memory dump from my most recent crash: MEMORY_MANAGEMENT (1a) # Any other values for parameter 1 must be individually examined. Arguments: Arg1: 0000000000041790, The subtype of the bugcheck. Arg2: fffffa8001990b90 Arg3: 000000000000ffff Arg4: 0000000000000000 Debugging Details: ------------------ PEB is paged out (Peb.Ldr = 000007ff`fffd9018). Type ".hh dbgerr001" for details PEB is paged out (Peb.Ldr = 000007ff`fffd9018). Type ".hh dbgerr001" for details BUGCHECK_STR: 0x1a_41790 DEFAULT_BUCKET_ID: VISTA_DRIVER_FAULT PROCESS_NAME: csrss.exe CURRENT_IRQL: 0 LAST_CONTROL_TRANSFER: from fffff80002cff26e to fffff80002c8cf00 STACK_TEXT: fffff880`0299ae38 fffff800`02cff26e : 00000000`0000001a 00000000`00041790 fffffa80`01990b90 00000000`0000ffff : nt!KeBugCheckEx fffff880`0299ae40 fffff800`02cc05d9 : fffffa80`00000000 00000000`01e73fff 00000000`00000000 fffff960`0023653f : nt! ?? ::FNODOBFM::`string'+0x339d6 fffff880`0299b000 fffff800`02fa2e50 : fffffa80`09140c90 0007ffff`00000000 00000000`00000000 00000000`00000000 : nt!MiRemoveMappedView+0xd9 fffff880`0299b120 fffff960`002e381b : fffff900`00000000 fffffa80`07c85d10 00000000`00000001 fffff900`c1e56cd0 : nt!MiUnmapViewOfSection+0x1b0 fffff880`0299b1e0 fffff960`002b4fc1 : 00000000`00000000 fffff900`00000000 fffff900`c1e56cd0 00000000`00000000 : win32k!SURFACE::bUnMapImmediate+0x5b fffff880`0299b210 fffff960`002b527b : fffff900`c07fdd10 fffff8a0`00000000 00000000`00000000 00000000`00000000 : win32k!bMigrateSurfaceForConversion+0x5ad fffff880`0299b340 fffff960`002dc3e3 : fffff900`00000000 fffff900`c1e5c010 00000000`00000000 00000000`00000000 : win32k!pConvertDfbSurfaceToDibInternal+0x1cb fffff880`0299b420 fffff960`002b5319 : fffffa80`07c7f470 00000000`00000001 00000000`00000000 00000000`00000282 : win32k!MulConvertChildRedirectionDfbSurfaceToDib+0x53 fffff880`0299b460 fffff960`002b1267 : fffff900`c0132010 fffff900`c0132010 00000000`00000000 00000000`00000000 : win32k!pConvertDfbSurfaceToDib+0x41 fffff880`0299b490 fffff960`002b1b1f : fffff900`c0132010 00000000`00000001 fffff900`c24cc280 fffff900`c0132010 : win32k!bDynamicRemoveAllDriverRealizations+0x4f fffff880`0299b4c0 fffff960`00273bb9 : 00000000`00000000 fffff900`00000000 fffff900`00000000 00000000`00000000 : win32k!bDynamicModeChange+0x1d7 fffff880`0299b5a0 fffff960`000baa2d : 00000000`00000000 00000000`00000000 00000000`00000000 07cd8220`00000003 : win32k!DrvInternalChangeDisplaySettings+0xc7d fffff880`0299b7e0 fffff960`001a2c41 : 00000000`00000040 fffff900`c00bf010 00000000`00000000 07cd8220`00000003 : win32k!DrvChangeDisplaySettings+0x62d fffff880`0299b9c0 fffff960`001a2e9e : fffffa80`07cd8220 00000000`00000000 00000000`00000000 fffff800`02f6fec3 : win32k!xxxInternalUserChangeDisplaySettings+0x329 fffff880`0299ba80 fffff960`001a033a : 00000000`00000000 00000000`00000000 00998b21`81a100b6 00000000`00000040 : win32k!xxxUserChangeDisplaySettings+0x92 fffff880`0299bb70 fffff960`001a053a : 00000000`00000000 00000000`00000001 00000000`00000000 00000000`00000000 : win32k!xxxRemoteSetDisconnectDisplayMode+0x42 fffff880`0299bbb0 fffff960`00183ea6 : 00000000`00000000 fffffa80`06efeb60 fffff880`0299bca0 00000000`00000005 : win32k!xxxRemoteDisconnect+0x1c2 fffff880`0299bbf0 fffff800`02c8c153 : fffffa80`06efeb60 00000000`00000005 
00000000`00000020 00000000`00000000 : win32k!NtUserCallNoParam+0x36 fffff880`0299bc20 000007fe`fd6b3d3a : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiSystemServiceCopyEnd+0x13 00000000`027cf798 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : 0x7fe`fd6b3d3a STACK_COMMAND: kb FOLLOWUP_IP: win32k!SURFACE::bUnMapImmediate+5b fffff960`002e381b f6477401 test byte ptr [rdi+74h],1 SYMBOL_STACK_INDEX: 4 SYMBOL_NAME: win32k!SURFACE::bUnMapImmediate+5b FOLLOWUP_NAME: MachineOwner MODULE_NAME: win32k IMAGE_NAME: win32k.sys DEBUG_FLR_IMAGE_TIMESTAMP: 4a5bc5e0 FAILURE_BUCKET_ID: X64_0x1a_41790_win32k!SURFACE::bUnMapImmediate+5b BUCKET_ID: X64_0x1a_41790_win32k!SURFACE::bUnMapImmediate+5b Followup: MachineOwner

    Read the article

  • Windows 8 & Hyper-V Can't Bridge Wifi Connection

    - by xinunix
    So I have an odd issue that I can't quite figure out... I am running Windows 8 Enterprise on a Dell 6420 laptop. I have a Broadcom 802.11n wireless adapter. I am connected to a home router (Netgear WNDR3700) that is connected to the internet. It is a very simple home network setup. I am trying to stand up a few VMs in Hyper-V and want the VMs to be able to access the internet over my wireless connection. I have found numerous examples of how to set this up using both External and Internal virtual switches, but have yet to be able to get it to work on my machine. I have narrowed the issue down to the fact that my host machine always loses its internet connection when I bridge my wifi connection (both when it is bridged automatically by Windows when I set up an external virtual switch bound to the wifi adapter, and when I do it manually by creating an internal virtual switch, right-clicking on it and my wifi network, and selecting "Bridge Connections"). In both cases, after the bridge is established, my host machine can no longer connect to the internet. I am not sure where to start with troubleshooting this problem. After the bridge is set up, ipconfig shows all network devices on the machine as "Media Disconnected". I do know that the wireless adapter is connected to the router because it shows the connection as active and full-strength. The only thing I can possibly think of is that this machine also has the Cisco VPN client installed on it, which installs a Cisco virtual network adapter. Is it possible that this Cisco virtual adapter is causing me issues when I try to bridge? I saw that some people had a similar issue with a VirtualBox virtual adapter when trying to share via Hyper-V. Any thoughts or suggestions on how to troubleshoot?
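
    If the half-created bridge and any leftover vEthernet adapters are removed first (and the Cisco virtual adapter temporarily disabled as a cheap test), the external switch can be recreated from PowerShell, which sometimes succeeds where the wizard's automatic bridging fails. A sketch, run as Administrator; the adapter name is whatever Get-NetAdapter reports for the Broadcom card:

        # List adapters to get the exact wireless adapter name
        Get-NetAdapter
        # Recreate the external switch bound to the wireless NIC, keeping host connectivity
        New-VMSwitch -Name "WiFi External" -NetAdapterName "Wi-Fi" -AllowManagementOS $true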

    Read the article

  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: Changes will be made to the REPO directory, and this should get updated to the WC (working copy), as opposed to the normal direction of WC to REPO. Scenario: My svn repo: /var/www/svn/drupal. My checkout dir / working copy: /var/www/html/drupalsite. So I've done: edited the post-commit hook to contain "/usr/bin/svn update /var/www/html/drupalsite". I won't make any change to the svn WC. I'll make changes to the svn REPO, /var/www/svn/drupal. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook. This in turn will run "svn update /var/www/svn/drupal" and thus my WC will get updated with the changes in the REPO. Query: a. Would the above steps 1-3 help achieve my 'Requirement'? b. I'd need advice on how to test whether the above setup works successfully or not. I'm at a loss about the success of steps 1-3, which is why query (a) is present. This is a bit more of a concern for me. NB: I'm new to Subversion. Whatever I've configured till now has been done by reading articles online. The reason for query (b) is that I'm not into development. It seems to be a PHP Drupal website and I happen to be setting it up. So I'm not aware of how to make a "PROPER" change in the REPO so that it gets reflected in the WC. If reflected, my configs are right and the team can start on development. I manually put a random file/folder into the REPO dir hoping to see a change in the WC, and ran steps 1-3, but it was of no avail, and I later learned that that was NOT the way to make a change to a REPO. Please advise. Thanks
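
    For reference, a minimal post-commit hook along the lines of step 1 might look like the sketch below (the log path is an example). The hook lives in the repository's hooks directory and must be executable:

        #!/bin/sh
        # /var/www/svn/drupal/hooks/post-commit  (chmod +x this file)
        REPOS="$1"
        REV="$2"
        # Refresh the server's working copy after every commit
        /usr/bin/svn update /var/www/html/drupalsite --non-interactive >> /var/log/svn-postcommit.log 2>&1

    To test it, commit any small change from any working copy of the repository (a scratch checkout is fine) and then confirm the change and a new log line show up under /var/www/html/drupalsite; note that changes always enter the REPO through a commit from some working copy, never by editing /var/www/svn/drupal directly.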

    Read the article

  • Solaris SMF to Upstart on RHEL6

    - by aaa90210
    I am planning a migration from Solaris/x86 to RHEL6. Part of this migration will be migrating services from SMF to the RHEL6 equivalent, which appears to be Upstart. While init.d scripts still seem to be supported, I want to take advantage of a more sophisticated init daemon, especially for features like job supervision (restarting, etc.). I would like to gather some thoughts on a few points: 1) Is Upstart an adequate job supervisor, i.e. does it preclude the need for stand-alone managers like daemontools/supervise? 2) Upstart scripts seem very bare-bones compared to a typical init.d script. If I was porting an init.d script to Upstart, is it OK to just "exec /etc/init.d/myjob start"? This includes RHEL-installed programs like httpd. 3) Does Upstart do anything in regards to pid files, and what are its expectations regarding the forking model of the process? 4) Are there any straightforward guides to the process management aspect of Upstart... and by that I mean the conditions around controlling restarting? e.g. how many times to restart the process before it goes into a maintenance state, or to ignore errors/core dumps in child processes of the supervised process. Any other relevant ideas or guides would be appreciated. TIA
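
    For a concrete picture of points 2-4, here is a minimal job definition for the Upstart shipped with RHEL 6 (0.6.5); the names and paths are examples, not a recommendation for any particular service:

        # /etc/init/myapp.conf
        description "my application"

        start on runlevel [2345]
        stop on runlevel [016]

        # supervision: restart the process if it dies, but give up if it
        # respawns 10 times within 60 seconds
        respawn
        respawn limit 10 60

        # run the binary in the foreground; if it daemonizes itself instead,
        # add "expect fork" or "expect daemon" so Upstart tracks the right PID
        # (no pid file is needed either way)
        exec /usr/local/bin/myapp --foreground

    Wrapping an old script with "exec /etc/init.d/myjob start" does start the service, but the init script exits immediately, so Upstart ends up supervising the wrong (short-lived) process and respawn tracking is lost.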

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times and am just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Big difference in time between i7 and Xeon? etc, etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (pre-compiled headers, ccache, local gigabit connections between them, tons of RAM, etc.), so please just give advice on the best processor to use. And money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay EDIT: Although any advice IS welcome, please refrain from "Do this first" posts, as we're not planning on skimping on things like SSDs or maxed-out RAM. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% a majority of the time, which makes me assume it is a bottleneck, even if you made everything else faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this; however, every discussion I could find was primarily about gaming machines, which is obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin fast. (hopefully)
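
    Assuming "DistGCC" here means distcc, the host list and job count matter as much as the CPUs: the -j value is normally set to roughly the total number of remote plus local slots. A sketch with placeholder host names:

        # On each build host: run the distcc daemon, allowing the build LAN (subnet is an example)
        distccd --daemon --allow 192.168.1.0/24 --jobs 8

        # On the machine driving the build: list helpers with their slot counts, then overcommit -j
        export DISTCC_HOSTS="localhost/4 buildbox1/8 buildbox2/8"
        make -j20 CC="distcc gcc" CXX="distcc g++"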

    Read the article

  • Request bursting from web application Load Tests

    - by MaseBase
    I'm migrating our web and database hosting to a new environment on all new machines. I recently performed a load test using WAPT to generate load from multiple distributed clients. The server has plenty of room to handle the traffic load, but I'm seeing an odd pattern of incoming traffic during the load tests. Here is the gist of our setup: firewall server running MS Forefront TMG 2010 on a Win 2k8 server; request routing done by IIS Application Request Routing on the firewall machine; the web server is a Hyper-V VM on the database server (which is the host OS); these machines are hefty, with dual CPUs of six cores each (12 total procs); the web server runs IIS 7.5; the web applications are built in ASP.NET 2.0, with 1 ISAPI filter (URL Rewrite) in front. What I'm seeing during the load tests is that the requests all come through in bursts. Even though I have 7 different distributed clients sending traffic loads, the requests come through about 300-500 requests at a time. The performance monitor shows nearly all of the counters moving through this pattern: when a burst of requests comes in, the req/sec jumps to 70, the queued requests jump to 500, the current requests jump up, the CPU jumps up, everything. Then once it has handled that group of requests, it has a lull for nearly 10 seconds where nearly nothing is happening: 0-5 req/sec, 0 queued requests, minimal CPU usage. Then after 10 seconds of inactivity, another burst comes through, spiking all of the counters once again. What I can't figure out is why the requests are coming through in bursts when I know that the load being generated is not sent that way, especially considering the various load-generating clients are sending traffic at different intervals with random think times between each request. Is there something in the layers between Hyper-V or perhaps in the hardware which might cause this coalescing of requests? Here is what I'm looking at: the highlighted metric is Requests/sec, but the other critical counters go with it, notably Requests Queued (which I'd obviously like to keep as close to 0 as possible). Any ideas on this?

    Read the article

  • needing storage integrity (write/read) test - for BASH

    - by Mr. Bash
    I'm in need of shell scripts / bash commands to verify the data integrity of local hard drives, USB drives, etc., like the famous www.heise.de/download/h2testw, or something that is at least common within the repositories. (h2testw writes a specific data string over and over onto the medium, then reads it again to verify it was written correctly and displays the write/read time/speed.) Please, no "dd if=/dev/random of=/dev/sdx bs=1k && dd if=/dev/sdx of=/dev/null bs=1k", since it won't verify that everything was written correctly; it only tests whether reading and writing to the device succeed. So far, I'm not too happy with "badblocks -w -v /dev/sdx1" either, since it seems rather slow and I don't know exactly what it writes, or whether it considers wear-levelling on flash media. There is also a program named F3, http://oss.digirati.com.br/f3/, that needs to be compiled. Designed after h2testw, the concept sounds interesting; I'd just rather have it as a ready-to-go bash script.
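
    A minimal h2testw-style sketch in bash: fill a mounted filesystem (not the raw device) with pseudo-random chunks, record a checksum per chunk, then re-read and verify. It is slower than h2testw and does not defeat wear-levelling, but unlike plain dd it checks content, not just that the writes succeeded; the script name and chunk size are arbitrary:

        #!/bin/bash
        # Usage: ./fillcheck.sh /media/usbstick   (delete test_data/ afterwards to reclaim the space)
        set -e
        TARGET="$1/test_data"
        SUMS=/tmp/fillcheck-sums.txt
        mkdir -p "$TARGET"
        : > "$SUMS"
        i=0
        # Fill the medium with 64 MB chunks of pseudo-random data, recording a checksum per chunk
        while dd if=/dev/urandom of="$TARGET/chunk_$i" bs=1M count=64 2>/dev/null; do
            sha256sum "$TARGET/chunk_$i" >> "$SUMS"
            i=$((i+1))
        done
        sync
        echo "wrote $i full chunks"
        # For an honest read test, unmount and remount the medium first so nothing
        # is served from the page cache, then verify every chunk
        sha256sum -c "$SUMS"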

    Read the article
