Search Results

Search found 33468 results on 1339 pages for 'behaviour change'.

  • Windows 7 BSOD when changing power plan

    - by dd5
    I have a strange problem. When I want to change the power plan on my laptop from High performance to Balanced, Windows freezes and I get a BSOD. The power plan settings are all default. Laptop specs: Intel Core i3 330M/350M, Intel® HM55 Express chipset, 8GB DDR3 1066 MHz SDRAM, ATI Mobility™ Radeon HD5730 with 1GB DDR3 VRAM, Intel SSD 330 128GB, Windows 7 Home Premium. I've searched the internet but couldn't find a similar issue. The BSOD first started when I installed this SSD, stopped when I updated the chipset controller driver, then started again yesterday when I wanted to change the power plan. Minidump file here. Any help with this weird issue is appreciated, thanks. Edit: I've run the Memory Diagnostic tool and the Intel SSD diagnostics, and updated the firmware to 3.2.1. None of these steps showed any errors, but I still get a BSOD when changing the power plan settings. After analyzing the dump file via osronline.com, here are the first few lines: CRITICAL_OBJECT_TERMINATION (f4) A process or thread crucial to system operation has unexpectedly exited or been terminated. Several processes and threads are necessary for the operation of the system; when they are terminated (for any reason), the system can no longer function. Arguments: Arg1: 0000000000000003, Process Arg2: fffffa8008661b30, Terminating object Arg3: fffffa8008661e10, Process image file name Arg4: fffff800033de270, Explanatory message (ascii) -- Solution -- Provided by Vinayak: after installing Intel Rapid Storage Technology from MajorGeeks, I haven't experienced a BSOD since, thank you :)

    Read the article

  • vim coloring for git

    - by kelloti
    I'm on Windows and Vim loads with a terrible colorscheme. The message text is blue on black (so I can't see what I'm typing). I need to change the colorscheme, but :colorscheme slate doesn't do anything. :version reports: VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 27 2010 17:51:38) MS-Windows 32-bit console version Included patches: 1-46 Compiled by Bram@KIBAALE Big version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +gettext/dyn -hangul_input +iconv/dyn +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl +persistent_undo -postscript +printer -profile -python -python3 +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl -tgetent -termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -xfontset -xim -xterm_save -xpm_w32 system vimrc file: "$VIM\vimrc" user vimrc file: "$HOME\_vimrc" 2nd user vimrc file: "$VIM\_vimrc" user exrc file: "$HOME\_exrc" 2nd user exrc file: "$VIM\_exrc" compilation: cl -c /w3 /nologo -i. -iproto -dhave_pathdef -dwin32 -dfeat_cscope -dwinver=0x0400 -d_win32_winnt=0x0400 /fo.\objc/ /ox /gl -dndebug /zl /mt -ddynamic_iconv -ddynamic_gettext -dfeat_big /fd.\objc/ /zi linking: link /release /nologo /subsystem:console /ltcg:status oldnames.lib kernel32.lib advapi32.lib shell32.lib gdi32.lib comdlg32.lib ole32.lib uuid.lib /machine:i386 /nodefaultlib libcmt.lib user32.lib /pdb:vim.pdb -debug My $HOME\_vimrc looks like: colorscheme slate syn on set shiftwidth=2 set tabstop=2 and my $VIM\vimrc is the stock vimrc that comes with the Windows Vim distribution. How do I change my console Vim colorscheme, especially for Git commits?
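
    In case it helps, a minimal $HOME\_vimrc sketch (an assumption, not a verified fix) that accounts for the Windows console offering only 16 colors, which makes many GUI-oriented schemes like slate render badly there:

        " Console Vim on Windows is limited to the 16 console colors,
        " so use a cterm-friendly scheme unless the GUI is running.
        syntax on
        if has('gui_running')
          colorscheme slate
        else
          colorscheme desert   " any scheme with cterm colors works
        endif
        set shiftwidth=2 tabstop=2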

    Read the article

  • Outlook 2007 message formatting - pasted images

    - by Jack
    When you cut and paste an image into a message window while composing a new email, the image displays as you would expect, and formatting the image appears straightforward. However, the pain happens when you click send. The recipient notices that the image resizes with the size of their Outlook window; the original image size is ignored and no scrollbars appear. How do you stop this behaviour? When said image is pasted, say you want to place a graphic on top of the image, such as an arrow. Using the ribbon, you select the Insert tab, choose Shapes, select the arrow shape and plonk it on top of the image, just where you want it, give it a nice colour and then send the email. As the recipient resizes their Outlook message window, the image resizes but the shape remains where it was; now who wants that, Micros*a*ft! So, how do you a) make the shape resize with the image, so the shape stays where I put it in relation to the image, and b) stop the image resizing in the first place?

    Read the article

  • MacOS creates a new mount on AFP path calls

    - by jAndy
    Hi folks, the following scenario: in my webapp, my customers use Firefox as the target browser. They need to open afp:// folders via JavaScript. To make a long story short, this really works: you set up Firefox via about:config and set the value network.protocol-handler.external.afp to true. What happens then is that the operating system (OSX) takes care of the path and correctly opens a Finder window. The problem: OSX creates a new mount every time. It cannot distinguish between afp://host/path/111 and afp://host/path/222, for instance. Furthermore, even if the AFP path is 100% identical, a new mount is created. It looks like this is the default behavior of OSX, regardless of Firefox. So, is there any chance I can tell OSX not to create a new mount for some subdirectories which should be accessed over afp://? Update: it looks like there are OSX applications which can change the default handler for network protocols, so you can change "somewhere" which application OSX should call for a protocol. If that is true, wouldn't it be possible to create a script which just opens the local path without an afp:// prefix? The question here is, where is that configuration to tell OSX which application to use for a specific protocol? Any help welcome!
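
    To illustrate the "open the local path instead" idea, a rough shell sketch (share and mount-point names are hypothetical, and macOS appends suffixes like -1 to duplicate mounts, which this ignores):

        #!/bin/sh
        # If the AFP share is already mounted, open the existing mount
        # point in Finder instead of triggering a second mount.
        SHARE="host/path"   # hypothetical share to look for
        MP=$(mount | awk -v s="$SHARE" '$1 ~ s && $3 ~ /^\/Volumes\// { print $3; exit }')
        if [ -n "$MP" ]; then
            open "$MP"
        else
            open "afp://$SHARE"
        fi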

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 * 2 TB disks. The disk array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an xfs and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful: writing inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, then the next 50 are written (according to the console display); when I try to cancel the operation with Control+C, it hangs for half a minute before it is really cancelled. The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel, the values are only reduced by about 10 MB. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details: Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux, mdadm v2.6.7.1. Does anyone have a suggestion on how to debug this further?
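
    One thing worth trying, though it addresses alignment rather than a proven cause of the stall: tell mke2fs the RAID geometry so metadata writes line up with the stripes. For a 4-disk RAID 5 with a 64k chunk and 4k blocks, stride = 64/4 = 16 and stripe-width = 16 * 3 data disks = 48 (assuming the array is /dev/md0):

        # align ext3 metadata to the md stripe (4-disk RAID 5, 64k chunk)
        mkfs.ext3 -b 4096 -E stride=16,stripe-width=48 /dev/md0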

    Read the article

  • Setup proxy with Apache 2.4 on Mac 10.8

    - by Aptos
    I have one application (Java) running on my local machine (localhost:9000). I want to set up Apache as a front-end proxy, so I used the following configuration in httpd.conf: <Directory /> #Options FollowSymLinks Options Indexes FollowSymLinks Includes ExecCGI AllowOverride All Order deny,allow Allow from all </Directory> Listen 57173 LoadModule proxy_module modules/mod_proxy.so <VirtualHost *:9999> ProxyPreserveHost On ServerName project.play ProxyPass / http://127.0.0.1:9000/Login ProxyPassReverse / http://127.0.0.1:9000/Login LogLevel debug </VirtualHost> ServerName localhost:57173 I changed /private/etc/hosts to: ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost 127.0.0.1:9999 project.play and ran dscacheutil -flushcache. The problem is that I can only access localhost:57173; when I try accessing http://project.play:9999, Chrome returns "Oops! Google Chrome could not find project.play:9999". Can somebody show me where I went wrong? Thank you very much. P.S.: when accessing localhost:9999 it returns "The server made a boo boo".
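
    A sketch of the two pieces most likely at fault (hosts files map names to addresses only, so an entry carrying a port is invalid; and Apache only binds ports named in a Listen directive, while proxying http:// URLs also needs mod_proxy_http):

        # /private/etc/hosts -- no port numbers allowed here
        127.0.0.1    project.play

        # httpd.conf
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so
        Listen 9999
        <VirtualHost *:9999>
            ServerName project.play
            ProxyPreserveHost On
            ProxyPass / http://127.0.0.1:9000/Login
            ProxyPassReverse / http://127.0.0.1:9000/Login
        </VirtualHost>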

    Read the article

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows is prompting about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you! Samba config: [global] workgroup = WORKGROUP server string = %h server include = /etc/samba/dhcp.conf dns proxy = no log level = 2 syslog = 2 log file = /var/log/samba/log.%m max log size = 1000 syslog only = yes panic action = /usr/share/samba/panic-action %d encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = no passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes socket options = TCP_NODELAY IPTOS_LOWDELAY guest account = nobody load printers = no disable spoolss = yes printing = bsd printcap name = /dev/null unix extensions = yes wide links = no create mask = 0777 directory mask = 0777 use sendfile = no null passwords = no local master = yes time server = yes wins support = yes ea support = yes store dos attributes = yes Note: I found this related question, but it explains the loss due to the user trying to transfer from NTFS to FAT32.
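
    A hedged guess at the missing piece: the Windows prompt usually concerns NTFS alternate data streams, which ea support alone does not expose; Samba's streams_xattr VFS module maps them onto extended attributes. A sketch for the share definition (share name and path are placeholders, and the module must be present in the Samba build):

        [media]
            path = /export/media
            vfs objects = streams_xattr
            ea support = yes
            store dos attributes = yes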

    Read the article

  • Anonymous FTP upload on CentOS 5.2

    - by Craig
    I need to allow users to upload files to an FTP server anonymously. They should not be able to see any other files, or download files. It is a CentOS 5.2 server. I have a separate partition for the upload area (mounted at /ftp). I have tried to set up vsftpd, following all the instructions/advice I could find, but when a user logs in and tries to transfer a file it throws a "553 Could not create file." error. If I do a 'pwd' it shows the directory as "/" rather than the anon_root of "/ftp/anonymous". Any attempt to change the remote directory ends with "550 Failed to change directory.". I have a subdirectory "/ftp/anonymous/incoming" that is writable for the uploads. SELinux is in permissive mode. I am running version 2.0.5 release 16.el5 of vsftpd. Here is the vsftpd.conf file: anonymous_enable=YES local_enable=YES write_enable=YES local_umask=002 anon_umask=007 file_open_mode=0666 anon_upload_enable=YES anon_mkdir_write_enable=NO dirmessage_enable=YES xferlog_enable=YES connect_from_port_20=YES chown_uploads=YES chown_username=inftpadm xferlog_std_format=YES nopriv_user=nobody listen=YES pam_service_name=vsftpd userlist_enable=YES tcp_wrappers=YES ftp_username=inftpadm anon_root=/ftp/anonymous anon_other_write_enable=NO anon_mkdir_write_enable=NO anon_world_readable_only=NO dirlist_enable=YES Can anyone help?
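
    For what it's worth, a 553 on anonymous upload is often plain directory permissions: vsftpd wants the anonymous root not writable by the FTP user, while the upload directory must be writable. A sketch matching the config above (which maps the anonymous user to inftpadm):

        # anon root: readable but not writable by the ftp user
        chown root:root /ftp/anonymous
        chmod 755 /ftp/anonymous
        # incoming: writable so anonymous uploads can create files
        chown inftpadm /ftp/anonymous/incoming
        chmod 733 /ftp/anonymous/incoming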

    Read the article

  • add_header directives in location overwriting add_header directives in server

    - by user64204
    Using nginx 1.2.1 I am able to add multiple headers using add_header as follows: server { listen 80; server_name localhost; root /var/www; add_header Name1 Value1; <=== HERE add_header Name2 Value2; <=== HERE location / { echo "Nginx localhost site"; } } GET / HTTP/1.1 200 OK Name1: Value1 Name2: Value2 However, as soon as I use the add_header directive inside location, the add_header directives under server are ignored: server { listen 80; server_name localhost; root /var/www; add_header Name1 Value1; <=== HERE add_header Name2 Value2; <=== HERE location / { add_header Name3 Value3; <=== HERE add_header Name4 Value4; <=== HERE echo "Nginx localhost site"; } } GET / HTTP/1.1 200 OK Name3: Value3 Name4: Value4 The documentation says that both server and location are valid contexts and doesn't state that using add_header in one prevents using it in the other. Q1: Do you know if this is a bug or the intended behaviour, and why? Q2: Do you see other options to get this fixed besides using the HttpHeadersMoreModule module?
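
    This matches nginx's documented inheritance rule: add_header directives are inherited from the enclosing level only if the current level defines none of its own. So the usual workaround, short of HttpHeadersMoreModule, is to repeat the server-wide headers in any location that adds headers:

        location / {
            add_header Name1 Value1;   # repeated from server level
            add_header Name2 Value2;   # repeated from server level
            add_header Name3 Value3;
            add_header Name4 Value4;
            echo "Nginx localhost site";
        }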

    Read the article

  • What kernel modules are required for wi-fi to work?

    - by Leonid Shevtsov
    My custom-built 2.6.32 kernel cannot connect to any WPA-protected network. The kernel includes (probably?) everything that should be needed for Wi-Fi, including IPv4 network support (IPv6 is disabled), the ath5k wireless driver (which is used in the generic Ubuntu 2.6.31 kernel) and all crypto APIs. The card is being detected; however, iwlist scan returns wlan0 Failed to read scan data : Network is down and the network-manager log says <info> (wlan0): driver supports SSID scans (scan_capa 0x01). <info> (wlan0): new 802.11 WiFi device (driver: 'ath5k') <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/1 <info> (wlan0): now managed <info> (wlan0): device state change: 1 -> 2 (reason 2) <info> (wlan0): bringing up device. <info> (wlan0): preparing device. <info> (wlan0): deactivating device (reason: 2). supplicant_interface_acquire: assertion `mgr_state == NM_SUPPLICANT_MANAGER_STATE_IDLE' failed <info> modem-manager is now available <WARN> default_adapter_cb(): bluez error getting default adapter: The name org.bluez was not provided by any .service files <info> Trying to start the supplicant... <info> (wlan0): supplicant manager state: down -> idle <info> (wlan0): device state change: 2 -> 3 (reason 0) <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface. The exact same configuration works with the generic kernel. Is anything needed for Wi-Fi to work other than the wireless driver and the crypto API?
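
    For comparison, a sketch of the options WPA typically needs with ath5k around 2.6.32 (symbol names from that era; exact dependencies may differ in your tree). CONFIG_CFG80211_WEXT matters because wpa_supplicant's default wext driver cannot "grab" the interface without it:

        CONFIG_CFG80211=m            # wireless configuration API
        CONFIG_CFG80211_WEXT=y       # wireless-extensions compatibility
        CONFIG_MAC80211=m            # software MAC, required by ath5k
        CONFIG_ATH5K=m
        CONFIG_CRYPTO_AES=y          # WPA2 (CCMP)
        CONFIG_CRYPTO_ARC4=y         # WPA (TKIP)
        CONFIG_CRYPTO_MICHAEL_MIC=y  # TKIP integrity check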

    Read the article

  • Upstart can't determine my process' pid

    - by sirlark
    I'm writing an upstart script for a small service I've written for my colleagues. My upstart job can start the service, but when it does it only outputs queryqueue start/running; note the lack of a pid, as given for other services. #/etc/init/queryqueue.conf description "Query Queue Daemon" author "---" start on started mysql stop on stopping mysql expect fork env DEFAULTFILE=/etc/default/queryqueue umask 007 kill timeout 30 pre-start script #if [ -f "$DEFAULTFILE" ]; then # . "$DEFAULTFILE" #fi [ ! -f /var/run/queryqueue.sock ] || rm -rf /var/run/queryqueue.sock #exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN end script script #Originally this stanza didn't exist at all if [ -f "$DEFAULTFILE" ]; then . "$DEFAULTFILE" fi exec /usr/local/sbin/queryqueue -s /var/run/queryqueue.sock -d -l /tmp/upstart.log -p $PIDFILE -n $NUM_WORKERS $CLEANCACHE $FLUSHCACHE $CACHECONN end script post-start script for i in `seq 1 5` ; do [ -S /var/run/queryqueue.sock ] && exit 0 sleep 1 done exit 1 end script The service in question is a Python script which, when run without error, forks using the code below right after checking command line options and basic environmental sanity, so I tell upstart to expect fork. pid = os.fork() if pid != 0: sys.exit(0) The script is executable and has a Python shebang. I can send the TERM signal to the process manually, and it quits gracefully. But running stop queryqueue claims queryqueue stop/waiting, yet the process is still alive and well. Also, its logs indicate it never received the kill signal. I'm guessing this is because upstart doesn't know which pid it has. I've also tried expect daemon and leaving the expect clause out entirely, but there's no change in behaviour. How can I get upstart to determine the pid of the exec'd process?
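
    For reference, upstart tracks a fixed fork count: expect fork waits for exactly one fork, expect daemon for exactly two. A conventional double-fork daemonizer that pairs with expect daemon would look roughly like this (a sketch, not the poster's actual service code):

        import os, sys

        def daemonize():
            # first fork: parent exits, upstart counts fork #1
            if os.fork() != 0:
                os._exit(0)
            os.setsid()  # new session, detach from controlling terminal
            # second fork: upstart counts fork #2; the process can no
            # longer reacquire a controlling terminal
            if os.fork() != 0:
                os._exit(0)
            sys.stdout.flush()
            sys.stderr.flush()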

    Read the article

  • Windows 7 host does not answer to ping

    - by gencha
    Today I tried printing on a shared printer on one of our homegroup members. Sadly it did not work (the printer was marked as offline). Shortly after, I noticed I can't even ping the machine that owns the printer (I also cannot remotely access it in any other way I've tried). Currently I'm trying to ping the machine from the router both computers are connected to, and the machine in question doesn't answer. I do receive the echo requests (as verified with Wireshark). I also added a rule in the Windows Firewall to specifically allow ICMP echo requests, but that didn't change anything. I also tried netsh firewall set icmpsetting 8 enable, but that didn't change anything either. Completely disabling the Windows Firewall has no effect on the issue either. One has to wonder: where does Windows log when and why it ignored any incoming packets? How can I get to the bottom of this? Here are some ways I found to dig deeper into the issue: enabling logging on the Windows Firewall, and enabling Windows Filtering Platform auditing. Both methods at least give more insight into the issue. The plain log file is full of entries like this: 2011-11-11 14:35:27 DROP ICMP 192.168.133.1 192.168.133.128 - - 84 - - - - 8 0 - RECEIVE So the ICMP packets are being dropped as if that was intended. The Event Viewer now gives a little bit more detail: The Windows Filtering Platform has blocked a packet. Application Information: Process ID: 4 Application Name: System Network Information: Direction: Inbound Source Address: 192.168.133.1 Source Port: 0 Destination Address: 192.168.133.128 Destination Port: 8 Protocol: 1 Filter Information: Filter Run-Time ID: 214517 Layer Name: Receive/Accept Layer Run-Time ID: 44 This same entry is always repeated, with 2 pieces of information changing: Process ID: 420 Application Name: \device\harddiskvolume2\windows\system32\svchost.exe The service host with PID 420 is the host for the following services: Windows Audio, DHCP Client, Windows Event Log, HomeGroup Provider, TCP/IP NetBIOS Helper, Security Center. Additionally, there is currently this problem with the same machine: even though my network is set to be a "Home network", I am unable to create a new homegroup.
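
    To chase the Filter Run-Time ID from those audit events, Windows can dump the active Windows Filtering Platform state; the blocking filter should show up in the generated XML (run from an elevated prompt):

        netsh wfp show state
        netsh wfp show filters
        :: then search the generated XML files for the ID from the event, 214517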

    Read the article

  • Connect to Apache times out randomly

    - by Amadan
    We are trying to set up an Apache server on a remote machine, but we experience strange behaviour. Checking with telnet remote.machine 80, one of these things happen randomly: Connect and serve content normally (no delay) Connect after a long pause Connect normally, then time out without response Timeout on connect Once connected, the request seems to be processed normally. These things do not occur if I connect from that machine directly to localhost 80. The Apache is dedicated, as is the server it runs on (runs only this one application, no-one else is using it for anything else). I am not an administrator of the remote site, and I do not know the network architecture over there, but apparently it's firewalled: (HTTP port is open, SSH port is IP-restricted, most others are closed). If there was any one pattern, I might have some ideas, but this variety of symptoms baffles me. Any ideas as to what could be causing this? Apache is 2.2; Server version is: Linux version 2.6.9-22.ELsmp ([email protected]) (gcc version 3.4.4 20050721 (Red Hat 3.4.4-2)) #1 SMP Mon Sep 19 18:32:14 EDT 2005
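
    A quick way to split the problem between the network and Apache is to capture on the server while running the telnet test (eth0 and the client address are placeholders):

        tcpdump -n -i eth0 'tcp port 80 and host CLIENT_IP'

    If the SYNs never show up during a timeout, the problem is upstream (firewall or routing); if they arrive but go unanswered, it is on the server itself.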

    Read the article

  • IIS7 ASP.NET application - 2 identical apps in 2 identical app pools, 1 is responsive and 1 is not

    - by Ben
    I have an ASP.NET (v4.0) web app that is installed in a virtual directory (as an application) and is hosted in its own app pool. This is repeated for each instance of the app (i.e. per customer). The app pools run in integrated (not classic) mode and LoadUserProfile is set to true; otherwise, default settings. Each instance currently has its own copy of the code/config, and its own data folder (basic file reads/writes). One instance of this app runs well (the operation used for comparison takes ~4 seconds). Every other instance runs slowly (from 10-25 seconds for the same operation). If I move a slow instance to the "fastest" app pool, that instance springs to life. If I move the faster instance into a slower app pool, that instance slows to a crawl. The app pools were created in the same way initially - manually. I later used the PowerShell copy routine to ensure an exact copy of the faster app pool, and still the same behaviour. Comparing the apppool.config files shows they are identical barring the virtual directory assignments. There are no shared resources that are being blocked, so far as I can tell, and I tested that by shutting down the performant app pool and restarting... slow is still slow, and then when I restart that app pool (so it's loaded last) it's still faster...
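
    One low-effort check is to dump both pools' and apps' effective settings as IIS sees them and diff the output (pool and app names here are placeholders; appcmd ships with IIS7):

        %windir%\system32\inetsrv\appcmd list apppool "FastPool" /text:* > fast.txt
        %windir%\system32\inetsrv\appcmd list apppool "SlowPool" /text:* > slow.txt
        %windir%\system32\inetsrv\appcmd list app "Site/FastApp" /text:*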

    Read the article

  • FWBuilder DNS Object Run Time - when exactly does it resolve the DNS name?

    - by Jakobud
    In Firewall Builder, when you use the DNS Object and set it to run time, when exactly does the firewall (iptables in our case) actually resolve the DNS name? Is it whenever a call is made to that DNS name in the firewall, so the firewall would resolve the name on the fly whenever someone or something tries to access that DNS name? Or is it when you execute the fw script to load the rules into iptables? In that case, it would resolve the DNS name that one time and then hard-code the resulting IP address into the iptables rules. From what I've read, I think it's #1, but it's just not 100% clear to me. We have two servers for a certain function on our network, one primary and one backup: alpha0.domain.com and alpha1.domain.com. In DNS we have this: alpha.domain.com -> alpha0.domain.com. If the primary server goes down and we need to switch to the backup, I just change our local DNS record to point to alpha1.domain.com instead. So back to the firewall: if I just put in a Domain Object as alpha.domain.com, do I have to reload the firewall rules every time we switch to the backup alpha server and change the DNS record? Or will the firewall automatically resolve to the correct address even after the switch?
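
    With iptables specifically, there is no per-packet resolver: a hostname in a rule is resolved once, when the rule is added, and the resolved address is what gets stored. That is easy to see directly (the printed address is illustrative):

        iptables -A INPUT -s alpha.domain.com -j ACCEPT
        iptables -S INPUT | tail -1
        # prints something like "-A INPUT -s 10.0.0.5/32 -j ACCEPT", so a
        # later DNS change has no effect until the rules are reloaded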

    Read the article

  • Apache Server not working in MAMP

    - by jasonaburton
    Here's what I did before the problem started: I was creating a database for a site that I am working on in phpMyAdmin. I wrote some code to try to connect to the database I had just created, and I couldn't connect. I assumed it might be because I needed a password to connect to the database, so I created a password for it. Immediately after I created the password, phpMyAdmin kicked me out, saying: "#1045 - Access denied for user 'root'@'localhost' (using password: YES)" "phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server." I found the php.ini file and searched for where I could change the password to match the one I had just made, but couldn't find where I needed to change it. So I decided to scrap the database and uninstall MAMP from my computer and reinstall it, hoping it would just reset all the defaults and I could go on my merry way. But now, after reinstalling MAMP and trying to run the servers, Apache won't start up and I have no idea why. One problem after another... Any advice or helpful ideas?
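
    For anyone hitting #1045 in this situation: phpMyAdmin's credentials live in its own config.inc.php (in MAMP typically under /Applications/MAMP/bin/phpMyAdmin/, though the path may vary by version), not in php.ini. The relevant lines look roughly like:

        <?php
        // must match the password you set for MySQL's root account
        $cfg['Servers'][$i]['user']     = 'root';
        $cfg['Servers'][$i]['password'] = 'your-new-password';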

    Read the article

  • How to fix IE starting when I try to start Opera?

    - by Scott Leis
    I have a problem that started today, where trying to run any of the 3 versions of Opera I have installed causes Internet Explorer to start instead. The Opera versions I have installed are 9.64, 10.10, and 10.63. My OS is Windows Vista with the latest critical/important updates. The behaviour is the same regardless of whether I double-click a shortcut to Opera, double-click on Opera.exe in one of the install directories, or run Opera.exe with the full path from Start-Run. The only work-around I've found is to right-click Opera.exe or a shortcut, and click "Run as administrator". This starts Opera, and it appears to work normally. Opera seems to be the only program so affected. I've checked Firefox and a few other (non-browser) programs, and they work normally. When IE starts instead of Opera, there are always two instances of iexplore.exe started, and they both crash before I can do anything else. IE also crashes if I try to start it from its own shortcut, but I don't know if that only started today, since I hadn't used IE for a few weeks. Does anyone know what might cause this and how to fix it? It's possible my PC has a virus, but BitDefender (for which I have automatic updates enabled) hasn't detected anything.
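
    The run-as-administrator workaround fits one classic hijack worth ruling out: malware sometimes plants a Debugger value under Image File Execution Options so that launching opera.exe starts another program instead. A diagnostic check (the value may simply not exist):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\opera.exe" /v Debugger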

    Read the article

  • How to clone a HDD and then use the clone with VMware (so that Windows works!)?

    - by Ahmad
    I have a system on which Windows 7 is installed, and I am trying to make a clone of its HDD image, which I then want to use with VMware on my main PC, so that I can boot Windows 7 off the cloned HDD. I used Ultimate Boot CD v5.1.1 with the system whose HDD I wanted to clone, and I cloned it using EaseUS Disk Copy, which comes with Ultimate Boot CD. The source HDD was 250 GB in size and had 3 partitions, while the USB HDD I attached to the system, which was supposed to be the destination/clone HDD, was 320 GB in size. I chose to create an exact replica, and so 250 GB worth of data (partitions, etc.) was copied exactly, and the rest of the space was left unallocated. I then connected this USB HDD to my main PC, fired up VMware Workstation 8, defined a new virtual machine, and chose to boot off the USB HDD. The result is that while Windows is booting (from the cloned HDD inside VMware), I get a blue screen error before I reach the login screen. How can I change my methodology so that Windows boots from the clone? I can change any tools I use, etc.
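
    If the blue screen is STOP 0x0000007B (INACCESSIBLE_BOOT_DEVICE), the usual cause is that the cloned Windows has no boot-start driver enabled for VMware's virtual disk controller. One commonly suggested tweak, applied to the clone's registry before booting it in the VM (standard Windows 7 service names, but treat this as a sketch):

        Windows Registry Editor Version 5.00

        ; force the generic IDE and LSI SAS drivers to boot-start
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\intelide]
        "Start"=dword:00000000

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\LSI_SAS]
        "Start"=dword:00000000

    Alternatively, VMware's vCenter Converter performs this kind of driver fix-up as part of a physical-to-virtual conversion.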

    Read the article

  • Running Visual Studio 2010 in a University Campus

    - by Woondows
    We have just installed Windows 7 Enterprise x64 in one of our computer labs used by students for programming. However, when we installed Visual Studio 2010 Ultimate on the machines, we found that even launching the application (devenv.exe) required the student to enter the administrator password (the usual UAC prompt). Of course, we could just turn off UAC, but that would defeat the purpose of having it in Windows 7. On the other hand, we cannot really give the students local administrator privileges, as we are concerned that they would do malicious stuff on the computers. Previously, when we used Windows XP Professional running Visual Studio 2005, we had no problems. Kindly advise if there's any workaround for this. EDIT: Thanks for the answers, guys. Mayank, your links may work for Visual Studio .NET, but they don't seem to work for Visual Studio 2010. Ryan, Tieson, I'm intrigued that you guys managed to get it working easily. FYI, I don't manage the Group Policies, but I can get them changed if necessary. Any particular GP that I should be looking at? Suggestions on how to troubleshoot further why UAC is being invoked? At least now I know for sure that this is not supposed to be the default behaviour for Visual Studio 2010, so I'm going to keep digging for a solution. Will try running Procmon and see if I can find something...
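
    Since Visual Studio 2010 does not request elevation by default, one thing to rule out is a stray compatibility flag on devenv.exe forcing it. A quick check in both hives (the value, if present, would contain RUNASADMIN):

        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"
        reg query "HKCU\Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers"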

    Read the article

  • Sudden and frequent hangs on desktop computer: mobo or CPU fault?

    - by djechelon
    I have a desktop computer equipped with an ASUS Crosshair II Formula and a Phenom X6 3.2GHz CPU. My problem is that the computer will often hang all of a sudden, completely ceasing to respond. When that occurs, the reset key is inoperative, and the power button turns the computer off but is unable to turn it back on; I have to physically disconnect the power cable. The problem can occur anytime: when I'm booting Windows, when I'm logging in, when I'm listening to a song, when I'm browsing the Internet, etc. It always occurs within a few minutes of 3D gameplay. I thought it was a video card fault; I had three 8800GTX cards, so I could try all combinations of them: that didn't fix it. I thought it was a RAM problem: I tried running with only a subset of my DDR2 banks, but that didn't fix it either. Almost every time, I have to reset and reconfigure the BIOS (without AHCI, Win7 won't boot, so I need to restore a few things). If I enable AMD Live!, Cool'n'Quiet or other things from the CPU configuration menu, I can be sure that the computer won't reach the Windows desktop in 99% of cases (it randomly hangs somewhere in the boot process or even in the BIOS POST). Another interesting thing is that during POST the computer always takes an unusually long time detecting USB devices (the LCD POSTer shows USB INIT); I've also tried disconnecting all USB devices, but POST didn't take any less time. The BIOS revision is 2702, the latest. Today I saw a different behaviour once: during the boot screen I got a BSOD with error Stop 0x00000101, "A clock interrupt was not received on a secondary processor within the allocated time interval", which is usually related to overclocking, but I have never overclocked my CPU. Judging from the description of my problem, hoping someone has had the same one and fixed it, and since I don't have a spare CPU or motherboard for replacement, I'd like to ask whether you think this is a problem with a faulty CPU or a faulty motherboard, and whether there are additional tests I can perform (software tests, given my lack of spare components) to identify the component to replace.

    Read the article

  • Windows Server 2008 R2 running at a snail's pace

    - by Django Reinhardt
    Really weird problem here. Our main web server has started running at a snail's pace, for no reason we can discern. Even after restarting the machine, when there's little or no RAM usage and CPU usage is fluctuating between 0 and 30%, simple tasks, like opening Internet Explorer or waiting for My Computer to open, take forever. There are no processes hogging system resources that we can see... the machine itself is just exhibiting extremely slow behaviour. I've never seen a machine do this. A lot of security updates had built up, so we decided to let Windows install them. When we looked through the history upon restarting, though, they had failed with error code 800706BA. I don't know if this could be related or not. Any help in this matter would be greatly appreciated. As mentioned in the title, we're running a Windows Server 2008 R2 machine. It's also running SQL Server and IIS. It has 16GB of RAM and a decent quad-core processor. It's also been fine until now, and we haven't changed a thing.

    Read the article

  • Prevent Amazon EC2 Time zone from reverting back on yum update

    - by D.Tate
    I use an Amazon EC2 server instance that runs a distro called Amazon Linux AMI (I've read that it is based on CentOS/Red Hat). My specific version is the 2012.09 release. Anyway, I was able to change the time zone about a week ago from the default UTC to America/New_York (which is EST/EDT). The command I used to change it was: ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime ...thanks to this other Server Fault question. At that point, I was able to run date from the command line, and it correctly displayed the EDT time. And even after EDT "fell back" to EST this past Sunday, I was pleased to find that running date still produced the correct local time. So that was great. However, after running a yum update yesterday, it seems that my time zone got reverted back to plain ol' UTC. I even checked the last modified time of the /etc/localtime file, and indeed it confirmed that it had been modified around the same time I had updated. Is there any way to prevent this from happening again, or will I be stuck resetting the time zone every time I do a yum update?
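
    On Amazon Linux the persistent setting lives in /etc/sysconfig/clock; if only the /etc/localtime symlink is changed, a later glibc/tzdata update can quietly restore UTC. The two-step change that survives updates:

        # /etc/sysconfig/clock
        ZONE="America/New_York"
        UTC=true

        # then re-point the symlink as before
        ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime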

    Read the article

  • Changing open-files-limit in mysql 5.5

    - by davidv
    I'm having an issue with MySQL 5.5 running on Ubuntu 12.04 with the open-files-limit parameter. I recently noticed some problems due to the 1024 limit; the main system limit was indeed set to 1024, so I modified /etc/security/limits.conf with the following: * soft nofile 32000 * hard nofile 32000 root soft nofile 32000 root hard nofile 32000 After that I checked the ulimit value for root and for the mysql user; both returned the new value, 32000, so I assume the change has been applied. I also changed the value in the my.cnf file, setting open-files-limit to 24000, like this: open-files-limit = 24000 Now comes the odd part: when I restart the mysql service and check the open_files_limit variable, it returns that it's still set to 1024, so I'm having the same problems as before (obviously). I tried using open-files-limit instead of open_files_limit in the my.cnf config file; same result. BUT if I bypass the service command and start the server using only mysqld (no additional parameters), the service starts, and when I check the parameter it returns 32000... I don't know where it's taking that value from, as it's not set in my.cnf and it's not being given on the command line, at least not by me. Any ideas why the change isn't taking effect, and how to solve it the normal way (launching it through service)?
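
    A likely explanation: on Ubuntu 12.04 MySQL is started by upstart, which does not read /etc/security/limits.conf (that file applies to PAM logins, which is why your own shell, and mysqld launched from it, see 32000). The upstart-native equivalent is a limit stanza in the job definition:

        # /etc/init/mysql.conf
        limit nofile 32000 32000

    followed by a restart of the mysql job.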

    Read the article

  • Self-connecting printers

    - by Martin Cerny
    Hello, I work as an administrator in a small company using XP Professional on all computers and two servers with Windows 2003 Server. Recently a very unusual problem occurred: one of the computers keeps connecting to all the printers on the network. It doesn't matter if it's an administrator or a domain user; as soon as somebody logs in, the computer connects all the printers. The printers are either installed on local computers or on the server and shared. There is no log-on script connecting the printers; I install them manually, and none of the other computers shows such behaviour. We have a printer which is installed on two computers, and both of them share it (I'm moving it to the server from a small PC which shared it up to now, but some computers still use the old connection), meaning this specific computer connects to that printer twice and can't use either of the connections. How do I prevent this self-connecting to all printers (none of the other computers has this problem)? I solved the smaller problem: the computer is now capable of printing on all of the printers (it seems there were some registry issues); after cleaning the registry and reinstalling the printer, it seems to work just fine. But the second problem remains: the computer connects to all the printers on the network (when I remove one or more, they are reconnected right after the next log-in by any user).

    Read the article
