Search Results

Search found 20592 results on 824 pages for 'anything'.


  • How to crash a program

    - by user2949019
    I have a program called BlueCoat Proxy installed on my school-issued laptop that blocks every second website on the Internet, including Stack Exchange, YouTube and Yahoo Answers. I do not have administrator rights, nor can I delete anything in Program Files; I have tried every method I could find of obtaining admin rights. The program is not accessible in Task Manager (it doesn't even appear there). I tried to close it from the Windows command prompt with commands like 'taskkill', but that returns 'Access is denied' (I'm only denied access for that one program). Does anyone know a method of crashing a program with a batch file or VB program? I was thinking of something like the ping command, but for a program; maybe automating 1000 meaningless requests to it? Your input on the subject matter is appreciated; telling me that this is wrong or illegal is not.

    Read the article

  • Custom resolution - Windows 7 - no drivers

    - by Vytautas Butkus
    I'm having some problems with my VGA card. When I enable my VGA drivers and boot Windows, they crash and I get a message that the problem occurred with the atikmdag.sys driver, or something like that. I get a BSOD every time unless I go to the Control Panel and disable the drivers manually. I can live with disabled drivers just fine (I've been used to it for quite some time), but the annoying problem is my display resolution: I cannot set it to anything higher than 1024x768. I was wondering if it's possible to change the resolution to my display's native resolution without drivers. I've been trying to find something that would work without drivers, but with no luck (I've found many solutions for doing it with working drivers, but as you know, I cannot enable mine). So please, could someone help me and tell me how to set a custom resolution?

    Read the article

  • Hudson... another Continuous Integration tool

    - by Narendra Tiwari
    In my previous posts I discussed CruiseControl.NET and its support for .NET development. Hudson is yet another continuous integration tool. Like CCNet, Hudson is free, and it is built in Java.

    - CCNet is oriented toward .NET applications, whereas Hudson can easily be configured for both environments (.NET and Java).
    - One of the major differences between CCNet and Hudson is Hudson's richer GUI, which provides interactive screens for project configuration; in CCNet you have to work with a handful of XML configuration files.

    Both tools are capable of providing the basic features of continuous integration, e.g.:

    - Source control configuration
    - Code compilation/build
    - Ad hoc plugin tools configured to run along with compilation

    Support for ad hoc tools seems broader with CCNet; for example, there is a source control plugin for CCNet for almost every system, whereas Hudson supports a more limited set of source control servers. An interesting point is that the whole CI system has two major parts: the build tool and everything else. The build tool takes care of all the ad hoc plugin tools, so it does not matter if the CI tool lacks a plugin for a given tool; if the tool provides command-line support, it can be configured in the build tool, and the build tool in turn is configured in the CI tool. For example, if I have a build script configured in MSBuild, CCNet can easily be switched to Hudson: nothing needs to change in the build script, we just configure MSBuild on Hudson, pass it the path of the script file, and that's it... everything else stays the same (a minimal sketch follows at the end of this post).

    Hudson resources:
    - https://hudson.dev.java.net/
    - http://wiki.hudson-ci.org/display/HUDSON/Meet+Hudson
    - http://wiki.hudson-ci.org/display/HUDSON/Plugins
    - http://callport.blogspot.com/2009/02/hudson-for-net-projects.html

    Java support on CCNet:
    - http://confluence.public.thoughtworks.org/display/CC/Getting+Started+With+CruiseControl?focusedCommentId=19988484#comment-19988484

    Please share your thoughts...
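
    To make the MSBuild point concrete, here is a minimal sketch. The project file name and target below are hypothetical; the point is that whichever CI tool triggers the build ends up running the same command, so the build logic never changes when you switch tools:

        msbuild Build.proj /t:Build /p:Configuration=Release

    Because everything specific to the project lives in Build.proj, the CI tool only needs to know this one command and where the script file lives.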

    Read the article

  • How can I tell if a KB, or a newer one superseding it, has been installed on Windows?

    - by IguyKing
    I have a Windows system that I need to audit. The requirement is that (for example) KB2160329 has been installed on the system. I know from a lot of digging that KB2731847, which we have installed in the environment, superseded the earlier KB. MSkbfiles.com works if you know the file name, such as TCPIP.SYS, but it doesn't help if you are just looking for KB hotfixes. How can I, say, feed KB2160329 to a script and have it check for superseding patches? Or is there a website somewhere that I'm missing? [Edited 7 May 2014 8:54am CST] I'm looking for a way to show that KB2731847, which is on the system, fixes the same issue (plus more) as KB2160329, which is not in the list of patches installed on the system.
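
    A minimal C# sketch of the script idea (the supersedence map here is hand-maintained, and the KB numbers are just the ones from the question; WMI only reports what is installed and knows nothing about supersedence, so that mapping has to come from Microsoft's bulletins or your patch tool). It queries Win32_QuickFixEngineering and reports whether the wanted KB, or any KB known to supersede it, is present:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Management;  // add a reference to System.Management.dll

        class KbAudit
        {
            static void Main()
            {
                // Hand-maintained map: KB -> KBs known to supersede it.
                var supersededBy = new Dictionary<string, string[]>
                {
                    { "KB2160329", new[] { "KB2731847" } }
                };

                // Collect the HotFixIDs of everything installed on this machine.
                var installed = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
                var searcher = new ManagementObjectSearcher(
                    "SELECT HotFixID FROM Win32_QuickFixEngineering");
                foreach (ManagementObject qfe in searcher.Get())
                    installed.Add((string)qfe["HotFixID"]);

                string wanted = "KB2160329";
                bool covered = installed.Contains(wanted)
                    || supersededBy[wanted].Any(installed.Contains);
                Console.WriteLine(covered
                    ? wanted + " is covered, directly or by a superseding patch."
                    : wanted + " is NOT covered.");
            }
        }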

    Read the article

  • YouTube videos not working

    - by André
    I have a problem watching YouTube videos. It says: "An error occurred, please try again later." I've tried loading different videos, and that's what it says for every video I try to watch. I've tried using another browser, clearing cache and cookies, etc., but none of that worked. My operating system is Windows 7 Home Premium, and I use Google Chrome as my browser. YouTube videos played fine earlier. I suspect it has something to do with this PC, since I've had YouTube working on my laptop before; I'm not sure if it still works on the laptop, though. I hope I've given enough information for you to help me out with this problem. Feel free to ask if there's anything else you need to know. Hope you can/will help me out. :)

    Read the article

  • Evil DRY

    - by StefanSteinegger
    DRY (Don't Repeat Yourself) is a basic software design and coding principle. But there is no silver bullet. While DRY should increase maintainability by avoiding common design mistakes, it can lead to huge maintenance problems when misunderstood. The root of the problem is most probably that many developers believe DRY means that any piece of code written more than once should be made reusable. But the principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." The important word here is "knowledge". Nobody ever said "every piece of code". I will try to give some examples of misusing the DRY principle.

    Code Repetitions by Coincidence

    There is code that is repeated by pure coincidence. It is not the same code because it is based on the same piece of knowledge; it is just the same by coincidence. It's hard to give an example of such a case. Just think of some lines of code about which the developer thinks, "I already wrote something similar." Then he takes the original code, puts it into a public method, or even worse into a base class where none had been before, adds some weird arguments and some if or switch statements to support all the special cases, and calls this "increasing maintainability based on the DRY principle". The resulting "reusable method" is usually something the developer cannot even give a meaningful name, because its contents aren't anything specific; it is just a bunch of code. For the same reason, nobody will really understand this piece of code. Typically the method only makes sense to call after some other method has been called. All the symptoms of really bad design are evident. The fact is, writing this kind of "reusable method" is worse than copy-pasting! Believe me. What will happen when you change this weird piece of code? You can't say, because you can't understand what the code is actually doing. So you'd better not touch it anymore. Maintainability just died. Of course this problem exists with any badly designed code. But because the developer tried to make this method as reusable as possible, large parts of the system become dependent on it. Completely independent parts get tightly coupled by this common piece of code. A change in the single common place will have effects everywhere in the system, a typical symptom of too-tight coupling. Without trying to dogmatically (and wrongly) apply the DRY principle, you would just have a system with a weak design. Now you have a system which can't be maintained anymore.

    So what can you do against it? When making code reusable, always identify the genuinely reusable parts. Find the reason why the code is repeated; find the common "piece of knowledge". If you have to search too far, it's probably not really there. Explain it to a colleague: if you can't explain it, or the explanation is too complicated, it's probably not worth reusing. If you do identify a piece of knowledge, don't forget to carefully find the place where it should be implemented. Reusing code is never worth giving up a clean design. Methods always need to do something specific. If you can't give a method a simple and explanatory name, you probably did something weird. If you can't find the common piece of knowledge, try to make the code simpler. For instance, if you have some complicated string or collection operations within this code, write some general-purpose operations into a helper class. If your code gets simple enough, it's not so bad if it can't be reused. If you are not able to find anything simple and reasonable, copy-paste it. Put a comment into the code to reference the other copies. You may find a solution later.

    Requirements Repetitions by Coincidence

    Let's assume that you need to implement complex tax calculations for many countries. It's possible that some countries have very similar tax rules. These rules are still completely independent from each other, since every country can change its rules on its own. (Assume that this similarity is actually by coincidence and not by political membership; there might be basic rules applying to all European countries, etc.) Let's assume that there are similarities between an Asian country and an African country. Moving the common part to a central place will cause problems. What happens if one of the countries changes its rules? Or, more likely, what happens if users from one country complain about an error in the calculation? If there is shared code, it is very risky to change it, even for a bugfix. It is hard to tell when requirements are repeated only by coincidence, and then there is not much you can do against the repetition of the code. What you really should do is make coding the rules as simple as possible. This independent knowledge, "tax rules in Timbuktu" or wherever, should be as pure as possible, without much overhead and stuff that does not belong to it. Then you can write every independent requirement short and clean.

    DRYing try-catch and using Blocks

    This is a technical issue. Blocks like try-catch or using (e.g. in C#) are very hard to DRY. Imagine complex exception handling, including several catch blocks. When the contents of the try block as well as the contents of the individual catch blocks are trivial, but the whole structure is repeated in many places in the code, there is almost no reasonable way to DRY it.

        try
        {
            // trivial code here
            using (Thingy thing = new Thingy())
            {
                // trivial, but always different, line of code
            }
        }
        catch (FooException foo)
        {
            // trivial foo handling
        }
        catch (BarException bar)
        {
            // trivial bar handling
        }
        catch
        {
            // trivial common handling
        }
        finally
        {
            // trivial finally block
        }

    The key here is that every block is trivial, so there is nothing to move into a separate method. The only part that differs from case to case is the line of code in the body of the using block (or any other block). The situation is especially interesting if the many occurrences of this structure are completely independent: they appear in classes with no common base class, they don't aggregate each other, and so on. Let's assume that this is a common pattern in service methods throughout the whole system. Examples of evil DRYing in this situation:

    - Put an if or switch statement into the method to choose the line of code to execute. There are several reasons why this is not a good idea: it creates the tightest coupling between the formerly independent implementations, it hurts the readability of the code, and it uses a parameter to control the logic.
    - Put everything into a method which takes a delegate as an argument to call. The caller just passes his "specific line of code" to this method. The code will be very unreadable, and the same maintainability problems apply as in any "code repetition by coincidence" situation (a short sketch of this option follows the conclusion below).
    - Enforce a base class on all the classes where this pattern appears and use the template method pattern. This has the same readability and maintainability problems as above, but is additionally complex and tightly coupled because of the base class. I would call this "inheritance by coincidence", which will not lead to great software design.

    What can you do against it? Ideally, the individual line of code is a call to a class or interface, which could be made individual by inheritance. If that were the case, it wouldn't be a problem at all; I assume it is not such a trivial case. Consider refactoring the error concept to make error handling easier. The last but not worst option is to keep the repetitions: some patterns of code must simply be maintained consistently, there is nothing we can do against it, and that is no reason to make them unreadable.

    Conclusion

    The DRY principle is an important and basic principle every software developer should master. The key is to identify the "pieces of knowledge". There is code which can't be reused easily for technical reasons, and it takes quite a bit of flexibility and creativity to keep such code simple and maintainable. This is not a problem with the principle; it is a problem of blindly applying a principle without understanding the problem it should solve. The result is usually much worse than ignoring the principle.
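
    To make the delegate option above concrete, here is a minimal C# sketch (Guarded.Run is a hypothetical name, and the exception types are the placeholders from the example; nothing here comes from a real codebase). Note how the call site no longer shows what happens on failure, which is exactly the readability cost described:

        using System;

        class FooException : Exception { }  // placeholder types from the example above
        class BarException : Exception { }

        static class Guarded
        {
            // The "DRYed" wrapper: the whole try/catch/finally skeleton lives here.
            public static void Run(Action body)
            {
                try
                {
                    body();  // the one line that differs between call sites
                }
                catch (FooException) { /* trivial foo handling */ }
                catch (BarException) { /* trivial bar handling */ }
                catch { /* trivial common handling */ }
                finally { /* trivial finally block */ }
            }
        }

    A call site then reads Guarded.Run(() => service.ProcessOrder(order)); it is compact, but the reader has to open Guarded.Run to learn how failures are handled, and every caller is now coupled to a single error-handling policy.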

    Read the article

  • Samsung printer (ML-1640) margin problem

    - by Rytis
    I just got a new laser printer, a Samsung ML-1640, and it has once again confirmed my experience that Samsung printers suck badly. The printer prints fine, except that the top margin is set seemingly at random, usually very small, cutting off text at the top of the page. I tried updating the firmware and getting new drivers, but nothing helped. I tried changing paper types, and it seemed OK with the paper type set to "plain", but I just tried printing again after letting the printer rest for the night, and wasted 6 pages until I managed to print 2 boarding passes... Is there anything else I could try to resolve the issue?

    Read the article

  • tar incremental backup backs everything up every time when used on the Dropbox directory

    - by Cyclic
    I made an incremental backup about 10 months ago (on Jan 27, 2013), creating a .snar metadata file. Now, when I try to make an incremental backup using

        tar --create --file=dropbox_incremental_1.tar --listed-incremental=dropbox_0.snar Dropbox

    the command just re-backs up everything. I'm not an expert on Unix timestamps, but I noticed that virtually all of my directory timestamps are way more recent than the last time they changed. For my actual files, they look like this:

        Access: 2013-03-12 19:04:51.000000000 -0500
        Modify: 2012-09-30 15:10:47.000000000 -0500
        Change: 2013-03-12 19:04:51.306209672 -0500

    The 'Modify' timestamp seems correct, but the files were definitely not changed (at least not by anything that I know of) at the time they say they were. These files still go into the incremental archive. What's happening here? Is there a way to tell tar to look at the 'Modify' timestamp? Isn't that what it's supposed to be doing?

    Read the article

  • solr administration

    - by devrick0
    Does anyone have any notes for a sysadmin supporting Solr? I'm looking for anything that might be useful for monitoring and metrics, as well as troubleshooting. Some useful links I have found are /solr/admin/stats.jsp and /solr/admin/analysis.jsp. In the logs I have noticed, besides the query itself, "hits", "status" and "QTime" values. The documentation on what these mean is sparse, at least based on the 100+ websites I have checked. QTime appears to be the query response time in milliseconds. Hits is some count of results, but I'm not sure exactly what makes it up, and I'm not sure about status. Typically I see status come back as "0", but I have seen other numbers such as "5", so my theories that it could be either HTTP status codes or a 0-or-1 (good or bad) flag aren't accurate. All of the documentation I have come across is intended for developers. Any sysadmin-centric documentation would be a big help.

    Read the article

  • Couldn't get Angry Birds to work on Wine

    - by Ashfame
    I can run Notepad++ easily, but I fail to run the Angry Birds exe. Whenever I open the exe, one of my screens flickers a bit (as lines, not the whole screen) and nothing happens. Any ideas? Edit: output of wine angrybirds.exe:

        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC80.CRT" (8.0.50727.4053)
        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC90.CRT" (9.0.21022.8)
        err:module:import_dll Library MSVCP90.dll (which is needed by L"C:\\windows\\system32\\AppUpWrapper.dll") not found
        err:module:import_dll Library AppUpWrapper.dll (which is needed by L"C:\\windows\\system32\\angrybirds.exe") not found
        err:module:LdrInitializeThunk Main exe initialization for L"C:\\windows\\system32\\angrybirds.exe" failed, status c0000135

    I think it didn't even install. I manually dropped those files into the folder, but still no gain. Edit: progress. I dropped in the file MSVCP90.dll manually, and now this is what I get in the output:

        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC80.CRT" (8.0.50727.4053)
        fixme:actctx:parse_depend_manifests Could not find dependent assembly L"Microsoft.VC90.CRT" (9.0.21022.8)
        fixme:heap:HeapSetInformation 0x541000 0 0x32fd48 4
        fixme:heap:HeapSetInformation (nil) 1 (nil) 0
        EXCEPTION: Failed to open data/scripts/starLimits.lua
        wine: Unhandled exception 0x40000015 at address 0x7b880023:0x78b271d0 (thread 0009), starting debugger...
        fixme:msvcr90:__clean_type_info_names_internal (0x10267694) stub
        fixme:msvcr90:__clean_type_info_names_internal (0x78506644) stub
        ashfame@ashfame-desktop:~$ Process of pid=0008 has terminated
        No process loaded, cannot execute 'echo Modules:'
        Cannot get info on module while no process is loaded
        No process loaded, cannot execute 'echo Threads:'
        process  tid      prio (all id:s are in hex)
        0000000e services.exe
                 00000014    0
                 00000010    0
                 0000000f    0
        00000011 winedevice.exe
                 00000018    0
                 00000016    0
                 00000013    0
                 00000012    0
        00000019 explorer.exe
                 0000001a    0
        You must be attached to a process to run this command.
        No process loaded, cannot execute 'detach'

    And there the terminal hangs (I would have to Ctrl+C to get out). It shows the famous message that the program needs to close. Edit: just to let you know that I'm still stuck on this. I don't use Wine for anything else, so I'm ready to do a clean install of Wine and everything if anyone is willing to provide instructions.

    Read the article

  • What is the polarity for an unmarked USB hub?

    - by Feral Oink
    On my USB hub, the polarity is unmarked where the power supply plugs in (I got the hub used, from my brother's friend). I need to know what the polarity should be; I don't think the brand makes a difference. Could someone who has a USB hub tell me what the polarity markings are? This is why I need to know: usually the center pin is positive and the outer contact is negative, but I'm not sure with this one, and I want to be certain before I plug anything into it.

    Read the article

  • Please explain my fio results - is O_SYNC|O_DIRECT misbehaving on Linux?

    - by Zoltan
    I'm going mad trying to figure out what the problem could be with one of our storage boxes. With a simple fio script I'm testing random writes using bs=1M and direct=1. The SSD is a Samsung 840 Pro attached to an LSI HBA (3 Gbit/s ports). This is the result I'm getting under FreeBSD 9.1, regardless of sync being set to 0 or 1:

        WRITE: io=13169MB, aggrb=224743KB/s, minb=224743KB/s, maxb=224743KB/s, mint=60002msec, maxt=60002msec

    On Linux, this is the result with sync=0:

        WRITE: io=14828MB, aggrb=253060KB/s, minb=253060KB/s, maxb=253060KB/s, mint=60001msec, maxt=60001msec

    and with sync=1:

        WRITE: io=6360.0MB, aggrb=108542KB/s, minb=108542KB/s, maxb=108542KB/s, mint=60001msec, maxt=60001msec

    My understanding is that since I'm operating on the raw block device, O_SYNC should not make any difference: there's no filesystem, no barriers, nothing between the writes and the drive itself, especially with O_DIRECT|O_SYNC set. Any ideas? For reference, here's the fio script I'm testing with:

        [global]
        bs=1M
        ioengine=sync
        iodepth=4
        size=16g
        direct=1
        runtime=60
        filename=/dev/sdh
        sync=1

        [rand-write]
        rw=randwrite
        stonewall

    Read the article

  • Tips on combining the right Art Assets with a 2D Skeleton and making it flexible

    - by DevilWithin
    This is my first attempt at building a skeletal animation system for 2D side-scrollers, so I don't really know what may come up in later stages, and I'm asking for advice from anyone who has been there and done that! My approach: I built a tree structure; the root node is like the center of mass of the skeleton, allowing global transformations to be applied to the whole skeleton. Then I build the hierarchy of bones, so that when a leg moves, the foot moves with it. (I also have a Joint, which connects two bones, for utility.) I load animations into it from simple keyframe lerps, so movement is smooth. I map the animation hierarchy onto the skeleton, or a part of it, to check that the structures match; otherwise the animation doesn't start. I think this is a pretty standard implementation for such a thing, even if I later want to convert it to a ragdoll on the fly. Now to my question: imagine a game like Prototype, where there is one skeleton animation for the main character that can animate every mesh in the game rigged the same way. That way the character can transform into anything without any extra code. That is pretty much what I want to do for a side-scroller. In theory it sounds easy, but I want to know if it will work well in practice: will different characters be decently animated when using the same skeleton-animation pair? I can see this working well with a stickman, but what about actual humans? Will the perspective look right, or will I need to dynamically change the sprites attached to the bones? What do you recommend for such a system?
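
    A minimal C# sketch of the bone tree described above (type and field names are hypothetical; Matrix4x4 is just the System.Numerics type standing in for whatever your math library provides). Each bone stores a transform relative to its parent, and world transforms are computed by walking the tree from the root, which is exactly why a foot follows its leg:

        using System.Collections.Generic;
        using System.Numerics;

        class Bone
        {
            public string Name;
            public Matrix4x4 Local = Matrix4x4.Identity;  // relative to the parent bone
            public Matrix4x4 World = Matrix4x4.Identity;  // recomputed every frame
            public List<Bone> Children = new List<Bone>();

            // Propagate transforms down the tree: moving a leg moves its foot.
            public void UpdateWorld(Matrix4x4 parentWorld)
            {
                World = Local * parentWorld;  // multiplication order depends on your convention
                foreach (var child in Children)
                    child.UpdateWorld(World);
            }
        }

    Called once per frame as root.UpdateWorld(Matrix4x4.Identity). For the "animate any rigged character" idea, each bone would additionally hold a sprite slot that the character's skin fills in, which is where swapping sprites per character, as asked above, would plug in.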

    Read the article

  • VirtualBox - split partitioned VDI into separate VDIs

    - by mathematical.coffee
    I'm very new to VirtualBox. I set up an Arch Linux VM and an Ubuntu VM (Ubuntu host), both sharing the same .vdi like so (I had a dual-boot situation in mind):

        VDI file (25GB)
        |- /dev/sda1: 5GB (Arch Linux)
        |- /dev/sda2: [Ubuntu]
           |- /dev/sda5 (swap, 1GB)
           |- /dev/sda6 Ubuntu /, 9GB
           |- /dev/sda7 Ubuntu /home, 10GB

    I've now realised that I don't want a dual-boot-type setup; I'd rather boot each machine independently (my initial thought was to share /home between Ubuntu and Arch). So, my question: can I split /dev/sda1 and /dev/sda2 into their own .vdi files so I can use them as completely separate machines? I'd rather not have to re-install either Arch (because it took me ages to work it out!) or Ubuntu (because I've already done a few GB of updates and don't want to redo them). I haven't been able to find anything about this; most questions I see are about converting a .vdi to a partition on the host, splitting a .vdi into multiple smaller files (that are not independent), or converting a partition on the host to a .vdi file. Cheers.

    Read the article

  • Migrating Roaming Profiles from one drive to another

    - by Jared
    As the title suggests, how can I migrate roaming profiles located on one drive (which is starting to fill up) to another? The current share looks like this: \\SVR1\Shares\UserProfiles\%username%\, which is of course physically located at C:\Shares\UserProfiles\%username%\. What do I need to do? Do I simply copy everything to the bigger (RAID1) drive and then repoint all the profile paths (using the profile properties in AD Users & Computers)? What if I want to point this at a different file server altogether? Best practices? Tips? Anything you guys can suggest. Thanks!
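
    One common sketch of the copy step (the drive letters and log path below are assumptions based on the question; run it while users are logged off so no profile is in use): use robocopy rather than a plain copy/paste so NTFS permissions, ownership and timestamps survive the move, then repoint the profile paths afterwards.

        robocopy C:\Shares\UserProfiles D:\Shares\UserProfiles /MIR /COPYALL /R:1 /W:1 /LOG:C:\profile-migration.log

    /MIR mirrors the tree, /COPYALL carries over security and ownership information, and the log file gives you something to audit before flipping the profile paths in AD.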

    Read the article

  • Cron jobs stopped working, or only partially working

    - by Robi
    Our cron scripts stopped working on various dates in August. What could the possible reasons be? We did not change anything. Our hosting company showed us a log where we can see that cron is executing our scripts, but nothing happens in the scripts themselves. If we execute the scripts manually, we get correct results, like before. I showed the commands to the host and they demonstrated that the commands work. What should I tell my host? What should I do? They are PHP scripts executed by cron, and they just post to Facebook and Twitter; they don't do anything heavy or huge. I even asked my host if we had broken any rules.

    Read the article

  • Find which php scripts cause high CPU with php-cgi

    - by Oli
    Background: I maintain a server for a client who has half a dozen Wordpress sites on it. They all have the W3 Total Cache plugin installed, and eAccelerator is installed (might be APC). All the PHP sites run through a single pool of FastCGI php-cgi processes (it's actually php-fpm, but I'm not sure if that makes a difference). Problem: php-cgi's CPU usage is quite high. Not terminally high, but high enough to raise an eyebrow. The client wants to add more sites in the future, and I want to avoid becoming CPU-limited if I can help it. Question: is there any way I can find the scripts, or even just the requests, that are causing the high CPU? I realise I might not be able to do anything with the results, but it would give me a chance.
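
    Since the processes are actually php-fpm, one avenue (a sketch; the pool file path and log path below are assumptions for a typical setup) is the pool's slow-request log. It records a PHP stack trace for any request that runs longer than the threshold, which usually points straight at the scripts burning the CPU:

        ; in the pool configuration, e.g. /etc/php-fpm.d/www.conf
        slowlog = /var/log/php-fpm/www-slow.log
        request_slowlog_timeout = 5s

    Strictly speaking this flags slow wall-clock requests rather than CPU time, but in practice the two lists overlap heavily, and it narrows things down to specific scripts and functions.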

    Read the article

  • Ubuntu 13.10 Installing MariaDB

    - by Ecaz
    I have tried everything to install MariaDB on this clean Ubuntu installation, but I keep getting this error:

        Some packages could not be installed. This may mean that you have requested an impossible situation or, if you are using the unstable distribution, that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         mariadb-server : Depends: mariadb-server-5.5 (= 5.5.33a+maria-1~saucy) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    I have followed this guide to try to install it: http://www.unixmen.com/install-lemp-server-nginx-mysql-mariadb-php-ubuntu-13-10-server/ and I have also followed the "official" guide on the MariaDB downloads page for 13.10: https://downloads.mariadb.org/mariadb/repositories/ But nothing seems to be working. Edit 1: I have tried both "How do I resolve unmet dependencies?" and "How to install MariaDB?", but I still get the error posted above. It's a fresh Ubuntu install with hardly anything installed. Edit 2: All the check boxes are ticked in Updates. I ran

        sudo apt-get update && sudo apt-get -f install mariadb-server-5.5"=5.5.33a+maria-1~saucy"

    and it gave me this error:

        The following packages have unmet dependencies:
         mariadb-server-5.5 : Depends: mariadb-client-5.5 (>= 5.5.33a+maria-1~saucy) but it is not going to be installed
                              Depends: mariadb-server-core-5.5 (>= 5.5.33a+maria-1~saucy) but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.

    Read the article

  • ls returns nothing only in certain directories

    - by Jakobud
    I have a RAID drive mounted at /data/, and in certain directories, like /data/somedir/somesubdir/, when I run ls with or without any flags, the terminal doesn't return anything. It does not print an empty directory listing; it simply goes to the next line and sits there, blank, with no prompt coming up. I cannot Ctrl-C out of it; I have to close that terminal instance and start over. At first I thought it was something to do with the ls command, but it points to /bin/ls and I can ls other directories just fine. Also, running find /data/somedir/somesubdir immediately finds all the files, just as expected.

    Read the article

  • IBM 8305-29u: sound comes from internal speaker instead of green port - Ubuntu 10.04 LTS

    - by TecBrat
    The title pretty much says it all. My experience with Ubuntu is VERY limited. I did find the system test feature and ran an audio test; it identified my device as an Intel 82801DB-ICH4. At first I wasn't getting anything. I tried a couple of things, and don't really remember what, but I started getting audio (not just beeps) from the system's internal speaker instead of the green port. Any ideas? (I can't rule out a hardware problem. This box sat under my bed for a couple of years, and its first life was in an environment FULL of cement dust.)

    Read the article

  • Need help setting up OpenLDAP on OSX Mountain Lion

    - by rjcarr
    I'm trying to get OpenLDAP manually configured on OSX Mountain Lion. I'd prefer to do it manually rather than installing OSX Server, but if that's the only option (i.e., OpenLDAP on OSX isn't meant to be used without Server), then I'll just install it. I've seen guides that mostly just say to change the password in slapd.conf, start the server, and it should work. However, whenever I try to do anything with the client, it tells me: ldap_bind: Invalid credentials (49). I've tried encrypting the password as well as leaving it plain; it doesn't seem to matter. The version is 2.4.28, and I've read that as of 2.4 OpenLDAP uses slapd.d directories, but that doesn't seem to be the case on OSX. There was mention of an 'olcRootPW' I should use (instead of 'rootpw' in slapd.conf), but I only found that in a file named slapd.ldif. So ... I'm really confused. Has anyone got OpenLDAP working on OSX Mountain Lion without the server tools?
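
    For reference, a minimal sketch of the slapd.conf directives involved (the suffix and DN below are placeholders, not values from the question). The rootpw value can be generated with slappasswd so it is stored hashed rather than in plain text, and the client must then bind with the exact rootdn string:

        suffix  "dc=example,dc=com"
        rootdn  "cn=admin,dc=example,dc=com"
        rootpw  {SSHA}paste-the-output-of-slappasswd-here

    A quick bind test with the matching DN would look like: ldapsearch -D "cn=admin,dc=example,dc=com" -W -b "dc=example,dc=com". Error 49 is what slapd returns whenever the bind DN and password don't line up, so a mismatch between the DN the client uses and the configured rootdn produces exactly the symptom described above.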

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up

    You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing, as your web browser is forced to redownload the files all over again and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs

    Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless: ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM (think 512 MB), ReadyBoost may help a bit. Otherwise, don’t bother.

    Open the Disk Defragmenter and Manually Defragment

    On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to: Windows is doing it for you, unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all.

    Disable Your Pagefile to Increase Performance

    When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slow, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig

    Some websites claim that Windows may not be using all of your CPU cores, or that you can speed up your boot time by increasing the number of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the number of cores used. In reality, Windows always uses the maximum number of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to use only a single core on a multi-core system, but all it can do is restrict the number of cores used.

    Clean Your Prefetch To Increase Startup Speed

    Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache: when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files are left over. In reality, Windows only loads the data in these .pf files when you launch the associated application, and it only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, Windows would also have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS To Increase Network Bandwidth

    Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program, so your voice conversation would work smoothly even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and that this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster

    The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code are forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the pagefile rather than push unused system files there, which would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance. (A note after this article shows where this value lives in the registry.)

    Process Idle Tasks to Free Memory

    Windows does things, such as creating scheduled System Restore points, when you step away from your computer. It waits until your computer is “idle” so it won’t slow your computer down and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that: all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark.

    Delay or Disable Windows Services

    There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory: think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista, and computers have more than enough memory, so you won’t see any improvements from disabling the system services included with Windows. Some people argue not for disabling services, however, but for setting them from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services still need to start; in fact, it may lengthen the time it takes to get a usable desktop, as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible results in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another one.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look: the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run at boot, increasing your boot time and consuming memory in the background. That is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.
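
    As referenced above, the DisablePagingExecutive value lives under the Memory Management key (its standard documented location); a read-only way to inspect it, as a sketch:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v DisablePagingExecutive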

    Read the article

  • Blogging After The Blog Boom

    - by Tim Murphy
    I have been blogging on Geeks With Blogs since 2005, and on other blogging sites before that. In this age of Twitter, Facebook and G+ it feels like we are in the post-blog age, and yet here I continue. There are several reasons for this. The first is that I still find a blog to be the best place for self-publishing long-form thought that won’t fit well on Twitter or Facebook. Google+ allows this type of content, but it suffers from the same scroll factor as the other social media platforms: if you aren’t looking at the right moment, you miss it. On a blog I can put complete thoughts with examples, and people can find what they want via keywords or a search engine. The second reason I blog is to have a place to put information I want to be able to refer back to later. Although I use OneNote, which is now accessible everywhere, the blog gives me somewhere to refer co-workers and clients when I have solutions for problems I have previously solved. I know that other people use their blog as a resume builder, but that hasn’t been one of my primary concerns. Don’t get me wrong: opportunities do come up because you put out well-thought-out, topical material. That just isn’t one of my top motivators. I don’t always find the time to blog, or even have anything to say lately, but I will continue to produce content for myself and others to learn from and hopefully enjoy.

    Read the article

  • Cookies blocked by router?

    - by Martin wiboe
    Hello, my friend has a D-Link DI-524 router that she uses for her home broadband. It's a pretty vanilla setup with the standard firewall settings, DHCP enabled, etc. Recently, however, she has experienced something strange: cookies are not working on any computer on her LAN, whether using Firefox 3.5 or IE8. I tried viewing the HTTP traffic using Fiddler2, and the requests come through fine (mind you, Internet browsing still works flawlessly), but whenever a website tries to set a cookie using the "Set-Cookie:" header, my computer sees that line as "Set-*ookie:", with the cookie contents removed. I have never seen anything like this; do you have any idea? Regards, Martin

    Read the article

  • Ping isn't behaving accurately?

    - by Earlz
    I've been trying to diagnose some latency issues with my Internet connection. I've been lagging out of online video games and such, which of course could be the servers' fault. So I've been running ping a bit. It doesn't indicate anything unusual, but it does act a bit strange. I can start it with something like ping internethost -i 0.1 so that it sends a ton of packets, and every 10-20 seconds it appears to just freeze for 2 or 3 seconds. The packets are still being received in the right order, though, and there is no packet loss. The weirdest thing is that after the little freeze-up, it usually reports a ping time about 10-30 ms higher than the average. How does this happen? Is ping still being accurate? I'm using Arch Linux. The host I'm pinging is my website, which shouldn't be doing any kind of ping throttling or filtering.

    Read the article
