Search Results

Search found 5464 results on 219 pages for 'effect'.


  • Microsoft Mouse and Keyboard Center - Slow response for App-specific shortcuts

    - by Darrel Hoffman
    So a few months ago, I bought a new MS mouse, and was surprised that they'd discontinued Intellipoint in favor of this Microsoft Mouse and Keyboard Center. It seems to have the same functionality underneath all the bloat, but there's a very serious drawback - when I set up application-specific functions for the extra buttons on the mouse, they work, but sometimes with a very long delay, like up to a minute or more. For example, I often set up the left side button as an "Undo" in various programs for convenience. But sometimes, when I try to use that Undo button, nothing happens, so I'm forced to use the standard Ctrl-Z or whatever. But then, a minute or so later, it suddenly remembers that I hit that button a while back, and calls the Undo unexpectedly on something entirely different. It's infuriating. No modern computer function should be this slow. It's not the software or the computer itself, because doing an Undo via Ctrl-Z or the menu still works instantly. It's very definitely a side-effect of delayed response to the mouse button. Usually after it delays the first time, it'll work quickly after that, but if you haven't used a given shortcut in several minutes, it "forgets" again and you get another inexplicably long delay. Intellipoint never had this problem, but it's not supported any more, and not compatible with the newer mice. Has anyone else noticed slow-downs with MS M&K C and app-specific shortcuts? Any ideas how to get around this? I use these shortcuts extensively in my workflow and it's just entirely unacceptable to have such a long delay in what should be a pretty basic feature.


  • ssh many users to one home

    - by filippo
    Hiya, I want to allow some trusted users to scp files into my server (to a specific user), but I do not want to give these users a home of their own, nor an ssh login. I'm having problems understanding the correct user/group settings needed to allow this. An example. Given MyUser@MyServer, where MyUser belongs to the group MyGroup and MyUser's home is, let's say, /home/MyUser, plus SFTPGuy1@OtherBox1 and SFTPGuy2@OtherBox2: they give me their id_dsa.pub's and I add them to my authorized_keys. I reckon I'd then do something like useradd -d /home/MyUser -s /bin/false SFTPGuy1 on my server (and the same for the other), and lastly useradd -G MyGroup SFTPGuy1 (again, for the other guy too). I'd expect the SFTP guys then to be able to sftp -o IdentityFile=id_dsa MyServer and to be taken to MyUser's home... Well, this is not the case: sftp just keeps asking me for a password. Could someone point out what I am missing? Thanks a mil, f. [EDIT: Messa on StackOverflow asked me whether the authorized_keys file was readable to the other users (members of MyGroup). It's an interesting point, and this was my answer: it wasn't (it was 700), but I then changed the permissions of the .ssh dir and the auth file to 750, though still to no effect. It's probably worth mentioning that my home dir (/home/MyUser) is also readable by the group; most dirs are 750, and the specific folder where they'd drop files is 770. Nevertheless, about the auth file: I reckon the authentication is performed by the local user on MyServer, isn't it? If so, I don't understand the need for other users to read it... just wondering.]
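
    In case it helps someone with the same symptom, here is a sketch of one common way to wire this up (names from the example above; OpenSSH with internal-sftp assumed). Two things in the setup described tend to cause exactly this endless password prompt: a /bin/false shell also blocks the sftp-server subsystem, which is launched through the login shell, and sshd's StrictModes check silently falls back to password auth when the home directory or key file is not owned by the logging-in user (or root) or is group-writable, something a shared, group-writable home cannot satisfy. Moving the key files out of the shared home sidesteps that, and the in-process internal-sftp needs no shell:

      # Untested sketch. Note: AuthorizedKeysFile changes the key location
      # for every account on the box, so move existing keys there too.
      cat >> /etc/ssh/sshd_config <<'EOF'
      AuthorizedKeysFile /etc/ssh/authorized_keys/%u
      Match Group MyGroup
          ForceCommand internal-sftp
          AllowTcpForwarding no
      EOF
      mkdir -p /etc/ssh/authorized_keys
      cp SFTPGuy1_id_dsa.pub /etc/ssh/authorized_keys/SFTPGuy1
      useradd -d /home/MyUser -s /bin/false -G MyGroup SFTPGuy1
      # /var/log/auth.log (or a one-off `sshd -d`) names the failing check.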


  • External HDD incorrectly detected as internal - how to change this to enable hot swap/eject?

    - by Sam
    I have Win 7 x64 Home Premium. The HDD is a Seagate Barracuda 7200.7, ST3120827AS, 3.5", serial 3ms006n6, firmware 3.42 (no further updates), in a NexStar CX external case (drivers installed). I have three drives:

      WD320 with the OS installed (internal)
      WD750 for data storage (internal)
      Seagate 120 (external), connected via an eSATA board wired to a SATA port on the motherboard (MSI P43 Neo)

    Tried uninstalling the HDD in Device Manager, to no effect. Also, the internal WD750 is detected as an external drive, and the Windows taskbar icon allows it to be ejected (unlike the Seagate). All drives are configured Online, Simple, Basic, NTFS, Active, Primary Partition (except the C drive). The Seagate was previously used as a primary disk with an XP operating system, so I deleted the volume and created/reformatted it (not quick). The HDD is no longer "Active", but that did not fix the problem. Background: originally, I installed Win 7 with the BIOS set to IDE and forgot to install the chipset drivers. Then I changed Win 7 to install the AHCI drivers, changed the BIOS to AHCI and rebooted. Win 7 loaded the drivers, but the WD HDD gave problems/crashed. I installed the chipset drivers and the latest Intel storage matrix software thingie (in safe mode). Everything worked fine after that, except for the problem of not correctly detecting the external drive. I have noticed that under the driver properties (and similarly in the registry) the two drives are configured differently (e.g. in the driver details, the capabilities value for the WD is set to 0000006, CM_DEVCAP_REMOVABLE & EJECTSUPPORTED, whereas the Seagate shows 0000080, CM_DEVCAP_SURPRISEREMOVALOK). Any easy way to configure things? I tried physically swapping the SATA connections on the mainboard, without success. So far I have found that a solution to my problem might be to perform some registry changes: How do I remove the option to eject SATA drives from the Windows 7 tray icon?


  • Configuring two nearby WLANs: should I use the same ssid?

    - by Rory
    I'm configuring a home network for basic internet use (i.e. I don't really need connectivity between workstations on the network). My brick walls mean a single wireless router doesn't provide good coverage throughout the house, so I have purchased two powerline adapters: the incoming modem/wireless router at one end of the house is plugged into one powerline adapter, and at the other end of the house the other powerline adapter is plugged into a second wireless router. Currently the two wireless networks have different SSIDs. (The powerline adapters only do power-to-Ethernet; they're not wireless access points themselves.) This works well, except when I move between rooms and would ideally like my devices (iPad, phones, laptops) to switch from the weak to the strong signal. Sometimes there's enough signal that they hold on to the weak connection instead of switching to the strong one. Should I give the two networks the same SSID, and if so, what is the actual effect? Do the signals get confused, is the bandwidth affected, will this help my devices seamlessly move from one to the other, or is the SSID just a cosmetic thing that actually doesn't have any impact on this situation? Are there any other settings I should configure to make my setup optimal?


  • Configuring nginx to check for hard files in only a few directories

    - by Evan Carroll
    For a node.js project I'm doing, I have a tree like this:

      +-- public
      |   +-- components
      |   +-- css
      |   +-- img
      +-- routes
      +-- views

    Essentially, I have the root set to public. I want all requests destined for /components/, /css/ and /img/ to check whether their destinations exist on disk. However, I don't want requests to other directories to even run an IO operation; /foo/asdf, /bar and /baz/index.html should none of them result in the disk being touched. I have a stanza that does the proxy to node.js:

      location @proxy {
          internal;
          proxy_set_header Host $http_host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-NginX-Proxy true;
          proxy_pass http://localhost:3030;
          proxy_redirect off;
      }

    I just would like to know how to arrange this. My problem would be easily solved if try_files took a single argument, but it always wants a file first:

      location /components/ { try_files $uri @proxy; }
      location /css/        { try_files $uri @proxy; }
      location /img/        { try_files $uri @proxy; }

    However, there is nothing I can find that will give me:

      location / { try_files @proxy; }

    How do I get the effect I want?
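
    A hedged sketch of one way out (paths and the port are taken from the question; the root /srv/app/public is a stand-in): share the proxy directives between a plain location / and the named fallback, so unknown URIs are proxied without any stat() call and only the three asset prefixes ever touch the disk.

      # Untested sketch; file locations are illustrative.
      cat > /etc/nginx/conf.d/node-app.conf <<'EOF'
      server {
          listen 80;
          root /srv/app/public;

          location /components/ { try_files $uri @proxy; }
          location /css/        { try_files $uri @proxy; }
          location /img/        { try_files $uri @proxy; }

          # No try_files here: everything else goes straight upstream,
          # so the disk is never touched for unknown paths.
          location / {
              include /etc/nginx/snippets/node-proxy.conf;
          }

          location @proxy {
              include /etc/nginx/snippets/node-proxy.conf;
          }
      }
      EOF
      cat > /etc/nginx/snippets/node-proxy.conf <<'EOF'
      proxy_set_header Host $http_host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-NginX-Proxy true;
      proxy_pass http://localhost:3030;
      proxy_redirect off;
      EOF
      nginx -t && nginx -s reload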


  • How do I lower the hardware volume? (volume too high)

    - by Zom-B
    I have a 4-year-old Dell laptop with Windows XP Pro (modern ones unfortunately don't have a physical volume knob), and lately I'm using my Apple earphones, because they have much better low-frequency response than my $10 earphones. They also have the side effect of being much louder. To give an indication of my agony: for most tasks (movies, music, games) I have my main volume at 3 ticks (drag to 0 with the mouse and press the up key 3 times; the handle does not even rise 1 pixel) and my wave volume at 50%. I notice that when I do this, I get a lot of digital noise, because I'm using just a tiny fraction of the 16-bit space. If I drag the Wave slider down until I barely hear the audio, it becomes really distorted and noisy, indicating that this is digital volume (in the DirectSound driver or something) and not hardware volume. I experimented in Audition. When I make a 1000 Hz tone at -50 dB (all Windows volumes at max), the volume is just below my pain threshold. When I zoom in to see how high the sample values reach, I see that just 8 of the 16 bits are used (about -100 to 100). When I generate such a tone at -80 dB (the minimum I can specify), I can still clearly hear the tone, although it is really noisy. When I zoom in, I see that just 3 out of 16 bits are used. I created a square-wave tone that is just 1 bit high, and I can still hear it! For most uses this is not a big problem (audiophiles will disagree!), as I just have more noise than usual (about the same as old 8-bit hardware), but I'm also in the process of programming a hearing-test program, in which case this problem is a death blow, as the test subjects will hear even tones at the bottom of the theoretical range (lowering the Windows volume is futile, see above). (I cannot update drivers, as Dell has discontinued XP support for my model.)
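
    For what it's worth, the bit-depth arithmetic above checks out: a sine at -50 dBFS peaks at 10^(-50/20) of full scale. A quick sanity check, using awk as a calculator:

      awk 'BEGIN { printf "%.0f\n", 32767 * 10^(-50/20) }'   # ~104: fits in 8 signed bits
      awk 'BEGIN { printf "%.0f\n", 32767 * 10^(-80/20) }'   # ~3:   fits in 3 signed bits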


  • Spotlight has stopped indexing/returning anything in /Applications

    - by pra
    After a recent kernel panic and restart, Spotlight no longer seems to know anything about the files under my /Applications folder. I used to launch Safari.app, Opera.app, TextEdit.app, etc. via Spotlight as a matter of routine. Now I get "No results found" for all of them (except TextEdit.app, which launches a demo text editor from a Qt installation). The programs are still there and still launch directly from Finder. I've already run Disk Utility and verified the disk: no issues. I repaired disk permissions, which made several changes, but to no effect. Is there anything else I can do, short of re-installing Mac OS? Update: I already verified that "Applications" was still checked in my Spotlight preferences. Spotlight was still returning applications located elsewhere (the Qt TextEdit sample app), so that shouldn't have been the issue. A few hours later it resolved itself; I guess there's a background process running on some interval.
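
    For later readers: a commonly suggested repair when Spotlight loses a whole folder is to force a rebuild of the volume's index with mdutil, which ships with Mac OS X:

      sudo mdutil -i on /   # make sure indexing is enabled on the volume
      sudo mdutil -E /      # erase the index and rebuild it from scratch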


  • SQL Server Read Locking behavior

    - by Charles Bretana
    When SQL Server Books Online says that "Shared (S) locks on a resource are released as soon as the read operation completes, unless the transaction isolation level is set to repeatable read or higher, or a locking hint is used to retain the shared (S) locks for the duration of the transaction," and assuming we're talking about a row-level lock, with no explicit transaction, at the default isolation level (Read Committed), what does "read operation" refer to? The reading of a single row of data? The reading of a single 8 KB I/O page? Or the point when the complete SELECT statement in which the lock was created has finished executing, no matter how many other rows are involved? NOTE: The reason I need to know this is that we have a several-second read-only SELECT statement, generated by a data-layer web service, which creates page-level shared read locks, generating a deadlock due to conflict with row-level exclusive update locks from a replication process that keeps the server updated. The SELECT statement is fairly large, with many sub-selects, and one DBA is proposing that we rewrite it to break it up into multiple smaller statements (shorter-running pieces), "to cut down on how long the locks are held". As this assumes that the shared read locks are held until the complete SELECT statement has finished, if that is wrong (if locks are released when the row, or the page, is read) then that approach would have no effect whatsoever.


  • In APC+PHP, how much RAM is too much? Is it okay to set apc.shm_size to many GB?

    - by Jeremy Clarke
    On our server we have a LOT of RAM for our traffic levels (16GB). The HTTP processes regularly eat up all CPU and need to be restarted without even getting close to using swap memory, so I'm looking for ways to spend RAM to ease the load on Apache (and/or help the separate MySQL server, which may be what is breaking Apache). I have many WordPress installs on the HTTPD instance, so APC sometimes uses as much as 900MB of RAM (according to the apc.php charts). Just in case, I have apc.shm_size set to 1600MB, which is more than it needs but not more than I can spare. This means there is usually lots of extra RAM available to APC, but also very little turnover, and fragmentation is never more than 1%. Is this dangerous? Should I be slimming down APC to less than 1GB just on principle? Should I be expecting some turnover within APC in the name of bringing its overall footprint down? Having so much memory devoted to APC means that in top/htop every single httpd process shows ~1.9GB in the VIRT memory column. Obviously this is shared memory and not used per-process, but could it be hurting our server? NOTE: The problem with the server remains unclear, but the effect is that about 60 times a day all 8 CPUs fill up to 100% and everything stops working until Monit sees that Apache is broken and restarts it (Monit also saves the MySQL server). I'm not sure if APC is even part of the problem, but I'm trying to optimize everything just in case.
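
    For reference, a sketch of the knobs being discussed (the ini path varies by distro, the values are the ones from this question rather than recommendations, and the M-suffix syntax assumes APC 3.1+):

      cat > /etc/php.d/apc.ini <<'EOF'
      extension = apc.so
      apc.shm_size = 1600M   ; the setting discussed above, well over the ~900MB in use
      apc.ttl = 7200         ; let stale entries expire instead of pinning them forever
      apc.stat = 1           ; re-stat files so WordPress updates are noticed
      EOF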


  • Sending e-mail on behalf of our customer(s), with Postfix

    - by NathanE
    We send e-mail on behalf of our customers, via our own SMTP services. It's always been a problem for us because our "spoofing" of their source address usually results in the mails being caught in spam traps. This hasn't been a problem in the past due to the small volume and low importance of the mails we sent, but this requirement has recently changed and we need to fix the issue. We realise that fundamentally our application is sending e-mail incorrectly, as per this post: Send email on behalf of clients. However, we would like to resolve the problem at the SMTP server level. We have deployed a server running Postfix. Is it possible to have Postfix automatically adjust the mail headers so that we get this "sent on behalf of" behaviour? I figure it should just be a case of Postfix noticing that the FROM address is spoofed (i.e. a domain that is not mentioned in its config anywhere) and injecting/replacing the appropriate headers to get the desired effect. Thanks.
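
    One possibility at the Postfix level, sketched under the assumption that what's wanted is a Sender: header naming your own domain (most clients render a From:/Sender: pair as "on behalf of"). The regexp and addresses are illustrative, PREPEND needs Postfix 2.1+, and on a box that also receives mail the check should be scoped more carefully:

      cat >> /etc/postfix/main.cf <<'EOF'
      header_checks = regexp:/etc/postfix/header_checks
      EOF
      cat > /etc/postfix/header_checks <<'EOF'
      /^From:.*@(customer1\.example|customer2\.example)/ PREPEND Sender: mailer@ourdomain.example
      EOF
      postfix reload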


  • Want to send my neighbors to a certain website via DNS, but don't have a clue how. [closed]

    - by Akku
    My neighbors have an unsecured WiFi router, and I could log in to its administration web UI, as no password was set. I don't know which of my neighbors these are, and I'd like to configure their router so that they land on my website instead of Google and Facebook, where I've set up a warning in German. It's this page: http://www.abelssoft.de/liebenachbarn/ Basically, I just want to see if and how this is possible. I'm aware that I could just set the WiFi password and have them call their network provider to reset the thing, but I really want to see if this could work, because it would be a way cooler effect :-). This router interface doesn't allow custom redirects, only filters. BUT I can set the DNS server that is used, so I thought there might be a possibility to set up a custom DNS on a server, set it as the router's main DNS, and redirect Google and Facebook lookups to the URL above. Is this possible? If so, please sketch the steps I'd have to go through to achieve this. Note that I'm not a super-Linux-skilled person; I have a DynDNS account and a Windows machine it points to, as well as Apache+Tomcat, if that helps. I could also set up virtual machines on the Windows server and redirect to those using a different port. Or is there maybe a web service that provides such custom DNS?
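
    On the "is this possible" part: any resolver you control can answer chosen names with your own address. A dnsmasq sketch, assuming a Linux box or VM to run it on (203.0.113.10 stands in for the address your DynDNS name points at):

      # dnsmasq answers the listed domains with your server's address and
      # forwards everything else upstream; your web server then has to
      # answer for whatever Host header it receives.
      cat > /etc/dnsmasq.d/redirect.conf <<'EOF'
      address=/google.com/203.0.113.10
      address=/facebook.com/203.0.113.10
      server=8.8.8.8
      EOF
      service dnsmasq restart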


  • Small maximum number of connections on a Linux router

    - by Eugene
    I have a Linux box acting as a router, with no iptables or other firewall and no networking applications running on it, just a pure router. I've put it in a test environment that generates many TCP connections, each having a unique source and destination IP, and those connections go through this router. I'm observing that the number of connections successfully created rises to approximately 500, then no more connections can be created for several minutes, then another 100 connections can be created, then there is another pause, and so on. If 10 connections are created for each source-destination pair, the maximums go up about 10 times, so the problem is probably with the many connections from different IPs. As the traffic is simply routed, it doesn't have to do with the number of file descriptors, iptables connection tracking, or the other things often proposed to check in similar cases. The box has plenty of free RAM and CPU, and both NICs are gigabit. The kernel is 2.6.32. I've already tried increasing net.core.*mem_max, net.core.netdev_max_backlog and the txqueuelen on both NICs, with no effect at all. What else should I check? Is there some rate limit in the kernel itself?
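
    One kernel-side limit that fits a ceiling of a few hundred distinct peers is the ARP/neighbour cache rather than anything TCP-related. This is a guess worth ruling out, not a confirmed diagnosis:

      # Defaults are gc_thresh1/2/3 = 128/512/1024 entries; overflow shows
      # up as stalled new flows and "neighbour table overflow" in dmesg.
      dmesg | grep -i 'neighbour table overflow'
      sysctl net.ipv4.neigh.default.gc_thresh1 \
             net.ipv4.neigh.default.gc_thresh2 \
             net.ipv4.neigh.default.gc_thresh3
      # If the numbers line up with the ceiling you see, try raising them:
      sysctl -w net.ipv4.neigh.default.gc_thresh1=4096
      sysctl -w net.ipv4.neigh.default.gc_thresh2=8192
      sysctl -w net.ipv4.neigh.default.gc_thresh3=16384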


  • nVidia performance with newer X and newer driver abysmal with Compiz

    - by Nakedible
    I recently upgraded Debian to Xorg 2.9.4 and installed nvidia-glx from experimental, version 260.19.21. This was somewhat of an uphill battle, as the dependencies of the experimental nvidia-glx package are still somewhat broken, but I got it to work without forcing the installation of any packages and without modifying them. However, since the upgrade, compiz performance has been abysmal. I am using the desktop wall plugin, and switching viewports is really slow: it takes a few seconds for each switch. In addition, every effect that compiz does, such as zoom animations for icons when launching applications, takes seconds. The viewport switching speed changes relative to the number of windows on that virtual screen: empty screens switch almost at normal speed, single browser windows work almost decently, but just 4 rxvt terminals slow the switches down to a crawl. My compiz configuration should be pretty basic. Xorg is likewise configured without anything special; the only "custom" configuration is forcing the driver name to be "nvidia". I've fiddled around with nvidia-settings and compizconfig, trying different VSync settings, but none of those helped. My graphics card is: NVIDIA GPU NVS 3100M (GT218) at PCI:1:0:0 (GPU-0). This is a laptop GPU from the GeForce GTX 200 series. Graphics card performance should naturally be no problem. EDIT: In the end, nothing really worked, and I got really annoyed with the state of compiz and its support in Debian. Many nVidia driver revisions have passed and I am using Gnome 3 now, so I am accepting the best answers to this question even though the issue was not resolved.
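
    For the archive, the tweak most often cited for exactly this symptom on that generation of driver (harmless to try, and it resets on restart unless saved to the config):

      nvidia-settings -a InitialPixmapPlacement=2 -a GlyphCache=1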


  • Conditionally set an Apache environment variable

    - by Tom McCarthy
    I would like to conditionally set the value of an Apache2 environment variable, and assign a default value if none of the conditions is met. This example is a simplification of what I'm trying to do, but in effect: if the subdomain portion of the host name is hr, finance or marketing, I want to set an environment variable named REQUEST_TYPE to 2, 3 or 4 respectively; otherwise it should be 1. I tried the following configuration in httpd.conf:

      <VirtualHost *:80>
          ServerName foo.com
          ServerAlias *.foo.com
          DocumentRoot /var/www/html
          SetEnv REQUEST_TYPE 1
          SetEnvIfNoCase Host ^hr\. REQUEST_TYPE=2
          SetEnvIfNoCase Host ^finance\. REQUEST_TYPE=3
          SetEnvIfNoCase Host ^marketing\. REQUEST_TYPE=4
      </VirtualHost>

    However, the variable is always assigned a value of 1. The only way I have so far been able to get it to work is to replace

      SetEnv REQUEST_TYPE 1

    with a regular expression containing a negative lookahead:

      SetEnvIfNoCase Host ^(?!hr\.|finance\.|marketing\.) REQUEST_TYPE=1

    Is there a better way to assign the default value of 1? As I add more subdomain conditions, the regular expression could get ugly. Also, if I want to allow another request attribute to affect the REQUEST_TYPE (e.g. if Remote_Addr = 192.168.1.[100-150] then REQUEST_TYPE = 5), then my current method of assigning a default value (i.e. using the regular expression with a negative lookahead) probably won't work.
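
    A likely explanation, worth checking against the mod_env docs: SetEnv is applied late in request processing, after mod_setenvif has run, so the plain SetEnv default overwrites whatever the SetEnvIf lines matched. Keeping the default in mod_setenvif avoids both that and the negative lookahead, because SetEnvIf directives apply in file order. A sketch (the Remote_Addr line shows how another attribute slots in):

      # These lines belong inside the <VirtualHost> block; '^' matches any
      # Host value, so the first rule is the default and later rules override.
      cat <<'EOF'
      SetEnvIfNoCase Host ^               REQUEST_TYPE=1
      SetEnvIfNoCase Host ^hr\.           REQUEST_TYPE=2
      SetEnvIfNoCase Host ^finance\.      REQUEST_TYPE=3
      SetEnvIfNoCase Host ^marketing\.    REQUEST_TYPE=4
      SetEnvIf Remote_Addr ^192\.168\.1\.(1[0-4][0-9]|150)$ REQUEST_TYPE=5
      EOF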


  • Nginx Multiple If Statements Cause Memory Usage to Jump

    - by Justin Kulesza
    We need to block a large number of requests by IP address with nginx. The requests are proxied by a CDN, so we cannot block using the actual client IP address at the connection level (it would be the IP address of the CDN, not the actual client); instead we have $http_x_forwarded_for, which contains the IP we need to block for a given request. Similarly, we cannot use iptables, as blocking the IP address of the proxied client would have no effect. We need nginx to block the request based on the value of $http_x_forwarded_for. Initially, we tried multiple simple if statements: http://pastie.org/5110910 However, this caused our nginx memory usage to jump considerably: we went from somewhere around a 40MB resident size to over a 200MB resident size. If we changed things up and created one large regex that matched the necessary IP addresses, memory usage was fairly normal: http://pastie.org/5110923 Keep in mind that we're trying to block many more than 3 or 4 IP addresses, more like 50 to 100, which may be included in several (20+) nginx server configuration blocks. Thoughts? Suggestions? I'm interested both in why memory usage would spike so greatly with multiple if blocks, and in whether there are better ways to achieve our goal.
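
    A hedged alternative to both variants: the geo module builds a single lookup table at config-load time, so the list stays flat and cheap no matter how long it gets. geo can key off $http_x_forwarded_for instead of the connection address (reliable only while the header holds a single IP), and the addresses below are placeholders:

      cat > /etc/nginx/conf.d/banned.conf <<'EOF'
      geo $http_x_forwarded_for $banned {
          default           0;
          192.0.2.15        1;
          198.51.100.0/24   1;
          # or keep the list in its own file, one "address value;" per line:
          # include /etc/nginx/banned.list;
      }
      EOF
      # then, inside each server block that needs it:
      #     if ($banned) { return 403; }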


  • Shortcuts located in "D:\Program Data\..." not working even though they're pointing to the right targets (Windows 7)

    - by Kevin
    I just made a fresh install of Windows 7 Home Premium on my laptop using its recovery disks (HP Pavilion dv6-2151cl), with minimal settings. After the install, I moved "Program Data" and "Users" to my D partition to save space, changing the folder locations in the registry. Then I updated Windows (including W7 SP1) and installed all my other programs. After installing them, I noticed that the icons of all the new programs (those not included in the Windows install) in "All Programs" show a blank sheet as their icon, and the shortcuts don't do anything. Looking at "D:\Program Data\Microsoft\Windows\Start Menu\Programs" in Windows Explorer, the same is true there. All the shortcuts in C: and "D:\Users..." work, both in Windows Explorer and in "All Programs". I also noticed that the shortcuts do display the right icons inside "Open" dialog boxes, and if I copy the shortcuts from "D:\Program Data..." to the desktop, they also work as expected. I checked the file association for .lnk and it was OK, but I also tried the registry fixers for this file association and they had no effect. There are no missing programs that I can tell in the "All Programs" menu; they just don't do anything if they live in "D:\Program Data...". Any thoughts on how to make Windows 7 treat shortcuts in "D:\Program Data..." as it should?


  • Mapped network drive missing from My Computer and Explorer

    - by matt wilkie
    On a Windows XP Pro SP3 machine, one network drive refuses to show up in My Computer or Explorer. The missing drive letter is G:, if that matters. Other mappings work fine, and other profiles on the same machine have no problem mapping G:. I can access G: just fine by typing it into the address bar or in a CMD shell. I've used TweakUI to toggle hide/show for G: with no difference; TweakUI says G: should be visible. I've logged off and on between toggles to make sure the settings take effect. I've looked at the reg key [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer] and made sure it's zeroed. [insert ref link here] We've limped along with this broken setup for some time, just working around it, but some applications do not allow typing in a path when choosing a place to save files, and it's reached the point where it's intolerable. So, does anyone have any idea why XP won't show this drive letter, or how to fix it?


  • Redirect from folder containing website

    - by Sam
    I have a website reached from this URL: http://www.mysite.com/cms/index.php being served from this directory: public_html/cms/index.php In public_html I have this .htaccess:

      RewriteRule (.*) cms/$1 [L]

    which lets me get to the site like this: http://www.mysite.com/index.php But now, if the 'old' address is referenced, I'd like to redirect to the rewritten address with a permanent redirect code. For example, http://www.mysite.com/cms/?q=node/1 should be redirected to http://www.mysite.com/?q=node/1 How can I make this happen? EDIT: The .htaccess file supplied with Drupal (the CMS) also contains the following. I've tried enabling it, but it doesn't seem to have any effect:

      # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
      # VirtualDocumentRoot and the rewrite rules are not working properly.
      # For example if your site is at http://example.com/drupal uncomment and
      # modify the following line:
      # RewriteBase /drupal

    EDIT: Including more of my .htaccess file, as it seems relevant:

      # Block access to "hidden" directories whose names begin with a period.
      RewriteRule "(^|/)\." - [F]

      #Strip cms folder from url
      RewriteRule (.*) cms/$1
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_URI} !=/favicon.ico
      RewriteRule ^ index.php [L]

      # Rules to correctly serve gzip compressed CSS and JS files.
      # Requires both mod_rewrite and mod_headers to be enabled.
      <IfModule mod_headers.c>
          # Serve gzip compressed CSS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.css $1\.css\.gz [QSA]

          # Serve gzip compressed JS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.js $1\.js\.gz [QSA]

          # Serve correct content types, and prevent mod_deflate double gzip.
          RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
          RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]

          <FilesMatch "(\.js\.gz|\.css\.gz)$">
              # Serve correct encoding type.
              Header append Content-Encoding gzip
              # Force proxies to cache gzipped & non-gzipped css/js files separately.
              Header append Vary Accept-Encoding
          </FilesMatch>
      </IfModule>
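
    A hedged sketch of one way to get the 301: keying on THE_REQUEST (the raw request line) distinguishes URLs the client actually sent from ones produced by the internal cms/ rewrite, so the redirect cannot loop. The pair belongs above the "Strip cms folder from url" rule:

      # Untested sketch; printed here rather than installed.
      cat <<'EOF'
      RewriteCond %{THE_REQUEST} ^[A-Z]+\ /cms/
      RewriteRule ^cms/(.*)$ /$1 [R=301,L]
      EOF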


  • Recover badly recorded DVDs

    - by CesarGon
    A few years ago (2003-2005) I bought a Sony USB external DVD recorder for my Dell laptop and I used it to burn a lot of discs. Much later, when I tried to use one of these discs, I realised that I could not read it. The disc behaved as if it was scratched or dirty. I tried on a couple of different DVD drives but got the same effect. Sadly, all the discs that I burnt with that recorder suffer from the same problem. Edit. When I read one of these discs with ImgBurn, I get lots of unrecovered read errors in multiple sectors, even at 1x speed. The sectors that cause read errors seem to be quite random; it's not always the same one. I have no idea what could be wrong with the discs. I doubt that they are scratched or dirty; it would be too much of a coincidence that all the discs that I burnt with that recorder got damaged at the same time. Also, they don't show any physical defects. Is there any way to diagnose what the problem is and, hopefully, recover the contents of the discs? Many thanks.
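
    Worth knowing for discs like these: GNU ddrescue (run from a Linux machine or live CD) reads everything it can in one pass, records the bad spots in a map file, and lets later passes, possibly on a different drive, fill in the gaps. Device names below are the usual defaults:

      ddrescue -b 2048 -r3 /dev/sr0 disc.iso disc.map   # first pass, 3 retries on bad sectors
      ddrescue -b 2048 -r3 /dev/sr1 disc.iso disc.map   # a second drive resumes from the same map
      mount -o loop,ro disc.iso /mnt                    # inspect whatever was recovered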


  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200rpm drives with 32MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128MB/s read rate and the other a 150MB/s read rate, as well as various other minor differences) actually cause any notable performance loss, i.e. potentially worse than two matched 128MB/s drives? Or does RAID not really care, giving you an essentially optimal solution (e.g. up to 278MB/s total read speed for RAID 0 and 1), and similarly for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also, I couldn't find much info on how this differs between RAID setups, e.g. RAID 0 vs RAID 1, software vs hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?


  • Truecrypt files corrupted after moving PC into another case

    - by Dygerati
    I recently bought a new PC case and transferred all of my PC hardware into it. The only hardware modification was the addition of two identical RAM modules. The entire process went smoothly, and everything worked and booted as before. The only side effect appeared when I accessed one of my file-based hidden TrueCrypt volumes shortly thereafter: some of the files in the volume (NOT all) seem to be entirely corrupted. The directory and file names are garbled characters, but a few of the directories in the same volume appear and function normally. Also, all files in the non-hidden TC volume are still intact. Is this not weird? The only other real change I can think of is that the hard drives were connected to different SATA ports on the mobo. I really don't know how the TrueCrypt encryption works well enough to know what could cause this... and the fact that not all the files were corrupted makes it more bizarre still. So, first off (and I'm not too hopeful on this point), would it be possible to restore these files? I had a backup of most, but not all, of the files involved. Other than that, I'm just curious how this happened and how I can prevent it next time. Thanks!


  • How to change password on RAR archive w/o modifying arch. files attributes (modified/created)?

    - by Larry78
    How do I change the password of a RAR archive without changing the date/time attributes of the files in the archive? Unfortunately you can't directly change the password of the archive with WinRAR; you have to extract the files and then make a new archive with the new password, so the created/modified attributes of the files in the archive get changed. I know you can manually change the attributes of a file with available utilities, but there are hundreds of files in the archive, each with unique attributes, so it would take a very long time to "fix" each file before re-archiving it. I'm using WinRAR 3.51, the last free version, on Windows XP Pro SP3. Update: I don't care if the output is a .RAR file or a ZIP file. IZArc 4.1 will convert the RAR to a ZIP, and it keeps the dates. The problem is that it compresses the files; there isn't a "store" option, and setting the default to store in the main configuration doesn't affect conversions. The RAR contains uncompressed files. None of the other archiving programs will even do a conversion. A couple claim to, or try to, but the errors returned indicate a very lousy application. So far I've tried PeaZip, 7-Zip, FilZip, TugZip, SimplyZipSE, QuickZip, and WinShrink (from downloads.cnet.com). WinRAR gives the error "skipping encryped archive" when I try the conversion. It asks for the password first, and I know it's right, as I opened the archive and I can read/view all the files in it. It works on non-encrypted files.
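
    If the command-line rar/unrar tools are an option, the round trip below preserves each file's modification time, since RAR stores mtimes in the archive and restores them on extraction; creation times live only in the file system and will still change. Names and passwords are placeholders:

      unrar x -p"oldpass" archive.rar work/
      cd work
      rar a -p"newpass" -m0 -r ../archive-new.rar *   # -m0 = store, matching the uncompressed contents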


  • How to secure a group of Amazon EC2 instances

    - by ks78
    I have several Amazon EC2 instances running Ubuntu 10.04 and I've recently started using Amazon's Route 53 as my DNS. The purpose of doing that was to allow the instances to refer to each other by name rather than private IP (which can change). I've pointed my domain name (via GoDaddy) to Amazon's name servers, allowing me to access my EC2 webservers. However, I noticed I can now access the EC2 instances which I don't want to be public, such as the dedicated MySQL Server. I was thinking Amazon's Security Groups would still be in effect when using Route 53, but that doesn't seem to be the case. Before I started using Route 53, I was thinking of having one instance run a reverse proxy, which would help protect the web servers behind it. Then IP-restrict all the other instances. I know IP restricting can be done using the firewall within each instance, but should I ever need to access them from another IP address, I'd need a way in. Amazon's control panel made it a breeze to open a port when necessary. Does anyone have any suggestions for keeping EC2 instances secure, but also accessible to their administrator? Also, what's the best topology for a group of EC2 instances, consisting of web servers and a dedicated database server, from a security perspective? Does having a reverse proxy server even make sense?
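
    On the security-group point: groups keep applying no matter how a name resolves, so if the MySQL box is reachable, its group's rules are worth re-checking. A sketch in the EC2 tooling (group names assumed; the same idea works with the classic ec2-authorize command). Referencing one group from another keeps MySQL reachable only from the web servers, and admin access stays a single rule:

      aws ec2 authorize-security-group-ingress --group-name mysql-sg \
          --protocol tcp --port 3306 --source-group web-sg
      aws ec2 authorize-security-group-ingress --group-name mysql-sg \
          --protocol tcp --port 22 --cidr 203.0.113.7/32   # admin access from one address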


  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping to troubleshoot. In SQL Server 2000, the database is about 120 gig. Something caused the transaction log to grow MUCH larger than normal, to over 100 gig: some hung transaction didn't commit or roll back for a few days. That has been resolved, and the log now stays around 1% full or less, thanks to its hourly transaction log backups. It IS my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size: although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in that post I can't tell whether their log was full or empty, and there are no replies to it. So I am guessing it is not a problem, but does anyone know for sure?
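
    On the size question itself: log space is reused circularly, so a mostly-empty 100-gig file mostly wastes disk rather than speed, but a log that grew in many small increments can carry thousands of virtual log files, which slows recovery and log-heavy operations. A hedged sketch for checking and reclaiming the space on SQL Server 2000 (osql ships with it; database and file names are assumed, and the shrink should follow a log backup):

      osql -E -d TheDatabase -Q "DBCC LOGINFO"   # one row per virtual log file
      osql -E -Q "BACKUP LOG TheDatabase TO DISK='D:\Backups\TheDatabase_log.trn'"
      osql -E -d TheDatabase -Q "DBCC SHRINKFILE (TheDatabase_Log, 8192)"   # target size in MB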


  • Without an internet connection my computer loads programs slowly. With an internet connection they load quickly

    - by Peter pete
    G'day. I've a laptop with 8GB of RAM and a Samsung 830 SSD with about 2/5 of its 256GB free. Win7 64-bit. The laptop is a Toshiba T130. The other day I noticed that it took a long time to load a program, for example the Python interpreter, at a time when the computer didn't have access to the home network. In both cases the computer boots quickly, in about 40 seconds. Without an internet connection, opening the Python interpreter, or notepad2.exe, or any program individually takes around 10 seconds. With an internet connection (through WiFi), opening the same programs takes about 2 seconds. In both cases, it becomes SSD-instant-quick to open a program again after the first launch following a cold boot. I don't have any network-mapped drives. I've tried with AV off (Avast antivirus), with the same effect. I've installed all MS updates. I've googled and googled and have found nothing of help. I've run the Samsung SSD Magician to "optimise" the SSD, and that didn't help. What could I do to determine and fix this issue?

