Search Results

Search found 27244 results on 1090 pages for 'old computer'.


  • linux display drivers

    - by salman
    I've run into a major display problem on a newly installed Fedora 11, on my 6-year-old PC, which runs a Pentium 4 2.4 GHz processor, 1 GB DDR RAM, and an Intel 845 motherboard with an integrated graphics card. When I open an image or play a video, my complete screen turns garbled. I simply cannot make out what's on my screen. With difficulty I have to close the image/video window and move the folder window around to clean up the screen image. Is it because of my display drivers? How can I fix it? I also ran into mp3 plugin and Flash issues, which I was able to resolve. I'm new to Linux; the sole purpose of installing it on my old PC was to learn Linux, but this display problem is frustrating me. Thanks, Salman
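    A first diagnostic step (a sketch; these are the usual Fedora paths and tools, verify on your install) is to confirm which driver Xorg actually loaded for the 845 chipset and whether it is logging errors:

        # Identify the graphics hardware
        lspci | grep -i vga
        # Look for errors (EE) and warnings (WW) from the loaded Xorg driver
        grep -E '\((EE|WW)\)' /var/log/Xorg.0.log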

    Read the article

  • Why are my hard drives failing?

    - by WishCow
    I have a small Ubuntu server running at home, with 2 HDDs. There are two software RAIDs (RAID1) on the disks, managed by mdadm, which I believe is irrelevant, but I'm mentioning it anyway.

    Both of the HDDs are Western Digital, and had been in use for around 2 years when one of them started making clicking noises and died. I figured that maybe it's natural after 2 years, so I bought a new one and resynced the RAID arrays. After about a month, the other drive also died. I didn't get suspicious, since both drives had been bought at the same time; it's not that surprising to see them fail near each other. So I bought another one. So far: 2 old drives failed, 2 brand new ones in the system.

    After one month, one of the new drives died. This is when it started getting suspicious. Since the PC was put together from some really old parts (think AthlonXP), I figured that maybe the motherboard's SATA controller was the culprit. Of course you can't switch parts easily in an old PC like this, so I bought a whole new system: new MB, new CPU, new RAM. Took the just-failed drive back, since it was under warranty, and got it replaced. So that's 2 failed drives from the old ones, and 1 failed drive from the new ones.

    No problems, for 1 month. After that, errors were creeping up again in /var/log/messages, and mdadm was reporting RAID array failures. I started tearing my hair out. Everything is new in the system; it's up to the third brand-new HDD; it's simply not possible that all of the new drives I bought were faulty. Let's see what is still common... the cables. Okay, long shot, let's replace the SATA cables. Take the HDD back, smile at the guy at the counter and say that I'm really unlucky. He replaces the HDD. I come home, one month passes, and one of the HDDs fails again. I'm not joking. Two of the brand-new HDDs have failed.

    Maybe it's a bug in the OS. Let's see what the manufacturer's testing tool says. Download the testing tool, burn it to a CD, reboot, leave the HDD testing overnight. The test says that the drive is faulty, and that I should back up everything, if I still can. I don't know what's happening, but it does not look like a software problem; something is definitely trashing the HDDs.

    I should mention now that the whole system is in a shoebox. Since there is a load of "build your own IKEA case" stuff around, I thought there shouldn't be any problem throwing the thing in a box and stuffing it away somewhere. The box is well ventilated, but I thought that just maybe the drives were overheating. There was no other possible answer to this. So I took the HDD back, got it replaced (for the 3rd time), and bought HDD coolers.

    And just now, I have heard the sound of doom. Click click whizzzzzzzzz. SSH into the box:

        You have new mail!
        mail
        r 1
        DegradedArrayEvent on /dev/md0 ...

    dmesg output:

        [47128.000051] ata3: lost interrupt (Status 0x50)
        [47128.000097] end_request: I/O error, dev sda, sector 58588863
        [47128.000134] md: super_written gets error=-5, uptodate=0
        [48043.976054] ata3: lost interrupt (Status 0x50)
        [48043.976086] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [48043.976132] ata3.00: cmd c8/00:18:bf:40:52/00:00:00:00:00/e1 tag 0 dma 12288 in
        [48043.976135]          res 40/00:00:00:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
        [48043.976208] ata3.00: status: { DRDY }
        [48043.976241] ata3: soft resetting link
        [48044.148446] ata3.00: configured for UDMA/133
        [48044.148457] ata3.00: device reported invalid CHS sector 0
        [48044.148477] ata3: EH complete

    Recap:

    - No possibility of overheating.
    - 6 drives have failed, 4 of those brand new. I'm not sure now whether the original two were faulty, or suffered the same thing the new ones did.
    - There is nothing common in the system, apart from the OS, which is Ubuntu Karmic now (started with Jaunty). New MB, new CPU, new RAM, new SATA cables.
    - No, the little holes on the HDD are not covered.

    I'm crying. Really. I don't have the face to return to the store now; it's not possible for 4 drives to fail within 4 months. A few ideas that I have been thinking about:

    - Is it possible that I fuck something up when I partition and resync the drives? Can it be so bad that it physically wrecks the drive (since the vendor-supplied tool says the drive is damaged)? I do the partitioning with fdisk, and use the same block size for the RAID1 partitions (I check the exact block sizes with fdisk -lu).
    - Is it possible that the Linux kernel, or mdadm, or something else, is not compatible with this exact brand of HDD and trashes them?
    - Is it possible that it's the shoebox? Should I try placing it somewhere else? It's under a shelf now, so humidity is not a problem either.
    - Is it possible that a normal PC case will solve my problem? (I'm going to shoot myself then.) I will get a picture tomorrow.
    - Am I just simply cursed?

    Any help or speculation is greatly appreciated.

    Edit: The power strip is guarded against overvoltage.
    Edit2: I have moved in between these 4 months, so the possibility of the cause being "dirty" electricity in both places is very low.
    Edit3: I have checked the voltages in the BIOS (couldn't borrow a multimeter), and they all seem correct; the biggest discrepancy is on the 12V rail, which is supplying 11.3. Should I be worried about that?
    Edit4: I put my desktop PC's PSU into the server. The BIOS reported much more accurate voltage readings, and it also successfully rebuilt the RAID1 array, which took some 3-4 hours, so I feel a little positive now. Will get a new PSU tomorrow to test with. Also, attaching the picture of the box: (disregard the 3rd drive)
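    One more data point worth collecting before blaming the PSU (a sketch; requires the smartmontools package, and assumes the drives are sda/sdb): SMART attributes can distinguish platter damage from cabling/controller trouble.

        # Raw SMART attributes: watch Reallocated_Sector_Ct, Current_Pending_Sector,
        # and especially UDMA_CRC_Error_Count, which rises with bad cables or a bad
        # controller rather than bad platters
        sudo smartctl -a /dev/sda
        sudo smartctl -a /dev/sdb
        # Kick off a self-test; read the result later with another smartctl -a
        sudo smartctl -t short /dev/sda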

    Read the article

  • Accessing Virtual Host from outside LAN

    - by Ray
    I'm setting up a web development platform that makes it as easy as possible to write and test all code on my local machine, and sync this with my web server. I set up several virtual hosts so that I can access my projects by typing in "project" instead of "localhost/project" as the URL. I also want to set this up so that I can access my projects from any network. I signed up for a DynDNS URL that points to my computer's IP address. This worked great from anywhere before I set up the virtual hosts. Now when I try to access my projects by typing in my DynDNS URL, I get the 403 Forbidden error message, "You don't have permission to access / on this server."

    To set up my virtual hosts, I edited two files: hosts in the system32/drivers/etc folder, and httpd-vhosts.conf in the Apache folder of my WAMP installation. In the hosts file, I simply added the server name to associate with 127.0.0.1. I added the following to the httpd-vhosts.conf file:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www/ladybug"
            ServerName ladybug
            ErrorLog "logs/your_own-error.log"
            CustomLog "logs/your_own-access.log" common
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ErrorLog "logs/localhost-error.log"
            CustomLog "logs/localhost-access.log" common
        </VirtualHost>

    Any idea why I can't access my projects by typing in my DynDNS URL? Also, is it possible to set up virtual hosts so that when I type in http://projects from a random computer outside of my network, I access url.dyndns.info/projects (a.k.a. my WAMP projects on my home computer)? Help is much appreciated, thanks!
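    Two things worth trying, as a sketch rather than a verified fix (url.dyndns.info stands in for the real hostname): give a vhost a ServerAlias for the DynDNS name so external requests match it, and widen WAMP's default localhost-only access rule, which produces exactly this 403 for outside requests.

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot "c:/wamp/www"
            ServerName localhost
            ServerAlias url.dyndns.info
            <Directory "c:/wamp/www">
                # WAMP's stock config often allows 127.0.0.1 only
                Order Allow,Deny
                Allow from all
            </Directory>
        </VirtualHost>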

    Read the article

  • How can I update fontconfig to a newer version in Red Hat 5.3?

    - by yan bellavance
    I want to update fontconfig to a newer version, but it seems that the OS is still finding the old fontconfig, and I need the newer version to build Qt. How do I make Red Hat 5.3 see the newer version? I don't know if this helps, but when I did a search for fontconfig I found some files in a folder called cache. When I do yum update it tells me everything is up to date, but that version is too old and is missing FcFreeTypeQueryFace. Just send me a comment if this is the wrong site and I'll change it.
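    A sketch of the usual approach, assuming the newer fontconfig is built from source into /usr/local (adjust the prefix to wherever the new build actually lives): the trick is pointing the Qt build at the new copy instead of the system one.

        ./configure --prefix=/usr/local && make && make install   # as root for the install
        # Make the Qt build find the new headers/libs instead of the RHEL ones:
        export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
        export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
        pkg-config --modversion fontconfig   # should now report the new version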

    Read the article

  • Setting up Mercurial/TortoiseHg to work with UltraCompare

    - by Tim Pietzcker
    Hi, I'm trying to get my favorite Windows diff/merge tool, UltraCompare (V7.00), to work with Mercurial/TortoiseHg. I have set up UltraCompare in my Mercurial.ini like this (only relevant bits shown):

        [merge-tools]
        UltraCompare.executable = C:\Programme\IDM Computer Solutions\UltraCompare\uc.com
        UltraCompare.args = $base $local $other
        UltraCompare.priority = 1
        UltraCompare.gui = True
        UltraCompare.binary = True
        UltraCompare.checkconflicts = True
        UltraCompare.checkchanged = True

    However, the three-way merge fails. The path names get messed up if the path to the repository that is being merged to contains a space. I have done some more testing, and I've found out (using Process Explorer) that uc.com is called with a broken command line if there is a space in the repository's path. Compare

        "C:\Programme\IDM Computer Solutions\UltraCompare\uc.exe" " "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.akr6au" "E:\Eigene Dateien\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.b92442"

    and

        "C:\Programme\IDM Computer Solutions\UltraCompare\uc.com" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.e7vryp" "E:\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.u_qxme"

    There is an extraneous " after the path of the executable in the first example, but not in the second (which works fine). To me, it seems as if UltraCompare is doing everything right, and Mercurial/TortoiseHg is passing a defective command line to it. Would you say so, too? Is there a workaround? I've just updated to Mercurial 1.5/TortoiseHg 1.0, and the problem persists. Support for other merge tools (Beyond Compare and others) has been added, sadly not UltraCompare...
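    One workaround worth an experiment (an assumption on my part, not a confirmed fix): quote the placeholders yourself in Mercurial.ini, so that embedded spaces survive whatever quoting Mercurial applies when it builds the command line.

        [merge-tools]
        UltraCompare.executable = C:\Programme\IDM Computer Solutions\UltraCompare\uc.com
        UltraCompare.args = "$base" "$local" "$other"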

    Read the article

  • IE 8 Caching Problem

    - by Jeff Catania
    One of my JavaScript sources had an extra comma that was throwing an error in IE8. So I opened up my editor, deleted the comma, and saved. I reloaded IE8, but it was still pulling the old JS file. I deleted everything in "Delete Browsing History...", and restarted the browser. It is still pulling the old file. I even set up a log on my server to show whenever the JS file was requested. When reloading with IE, the JS file is never requested. I tried the same process in Chrome and FF, and they pulled the new file and logged properly on the server. Is there some other cache that I am failing to clear in IE that would cause this problem?
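    A common workaround while debugging (a sketch; the file name and version parameter are illustrative): version the script URL so the browser is forced to treat it as a new resource. Any change to the query string defeats the cached copy.

        <!-- bump v= whenever the file changes -->
        <script type="text/javascript" src="app.js?v=2"></script>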

    Read the article

  • How useful is Turing completeness? Are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: "Turing computability with neural nets", Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activity must be of infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical.

    But I started to wonder now whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete, because none of them is able to simulate the infinite tape. Interestingly, programming language specifications most often leave it open whether the language is Turing complete or not. It all boils down to the questions of whether a program will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course all available implementations are limited here, so all practical implementations of programming languages are not Turing complete.

    So, what you can say is that all computer systems are just as powerful as finite state machines and no more. And that brings me to the question: how useful is the term "Turing complete" at all?

    And back to neural nets: for any practical implementation of a neural net (including our own brain), it will not be able to represent an infinite number of states; i.e., by the strict definition of Turing completeness, it is not Turing complete. So does the question of whether neural nets are Turing complete make sense at all? The question of whether they are as powerful as finite state machines was answered already much earlier (1954 by Minsky; the answer, of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.

    Read the article

  • IIS7 Modules - managed or native?

    - by Simon Linder
    Hi all, as the old ISAPI filters are going to die sooner or later, I want to rewrite an old ISAPI filter that was used in IIS 6 into a module for use in IIS 7. The module will be used globally, meaning it will run within each site, on a Windows Server 2008 R2 with IIS 7.5 installed that will host several thousand web sites and manage about 50 application pools. My question now is whether I should write that module in managed or unmanaged code. One of my concerns regarding managed code is the massive memory consumption due to the .NET framework overhead; I don't know how this would affect the server's performance. I have already written modules in managed as well as unmanaged code, so that is not what's driving my decision. But I would prefer to write the module in C# if there are no huge drawbacks. Any suggestions about that issue?
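    For reference, a minimal managed-module skeleton (a sketch; the class name and event choice are illustrative, not tied to the original filter's logic) - this is the shape the C# port would take:

        using System;
        using System.Web;

        public class LegacyFilterModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // Runs for every request of every site the module is registered in
                app.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                HttpContext ctx = ((HttpApplication)sender).Context;
                // Port of the old ISAPI filter logic would go here
            }

            public void Dispose() { }
        }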

    Read the article

  • Converting from SQL Server 2000 to 2005 for ASP.NET web App

    - by Bazza Formez
    Hi there, I'm moving my ASP.NET website to a new provider. The only problem is, the old host supports my SQL Server 2000 db, while the new host only supports SQL Server 2005. How should I go about the conversion? Can I simply produce a backup (.bak file) of the 2000 database at the old host, and restore that file into SQL Server 2005 at the new host? Or is there more to it? Note that I don't own a copy of SQL Server 2005 at home... and I'm trying to avoid having to buy one. Thanks, Bazza
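    That backup/restore path does generally work in this direction (2000 to 2005), since RESTORE upgrades the database on the fly. A sketch, with MyDb standing in for the real database name:

        -- On the old host (SQL Server 2000):
        BACKUP DATABASE MyDb TO DISK = 'C:\backups\MyDb.bak'
        -- On the new host (SQL Server 2005); add WITH MOVE clauses
        -- if the data/log file paths differ on the new server:
        RESTORE DATABASE MyDb FROM DISK = 'C:\backups\MyDb.bak'
        -- Optionally raise the compatibility level to 90 (2005 behaviour):
        EXEC sp_dbcmptlevel N'MyDb', 90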

    Read the article

  • Hibernate sequence should only generate when ID is <=0

    - by Tim Leys
    Hi all, I'm using the following sequence mapping in my code:

        @Id
        @SequenceGenerator(name = "S912_PRO_SEQ", sequenceName = "S912_PRO_SEQ", allocationSize = 1)
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "S912_PRO_SEQ")
        @Column(name = "PRO_ID", unique = true, nullable = false, precision = 9, scale = 0)
        public int getId() {
            return this.id;
        }

    And I'm using the following sequence / trigger in my DB:

        CREATE SEQUENCE S912_PRO_SEQ nomaxvalue minvalue 20;

        CREATE OR REPLACE TRIGGER S912_PRO_B_I_TRG
        BEFORE INSERT ON S912_project
        REFERENCING OLD AS OLD NEW AS NEW
        FOR EACH ROW ENABLE
        begin
          IF :NEW.pro_ID IS NULL THEN
            select S912_PRO_SEQ.nextval into :new.pro_ID from dual;
          END IF;
        end;

    I was wondering if there is a way to let Hibernate generate a sequence value ONLY if the ID is <= 0 (not set). I know for most cases my trigger would fix the situation, but I do not want to rely completely on it. I hope someone can help me out :p
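    One possible direction (a sketch only, against the Hibernate 3.x generator SPI; Project is a hypothetical name for the mapped entity shown above): subclass the sequence generator and short-circuit when an id is already set. It would then be wired in with @GenericGenerator instead of the plain strategy above.

        import java.io.Serializable;
        import org.hibernate.HibernateException;
        import org.hibernate.engine.SessionImplementor;
        import org.hibernate.id.SequenceGenerator;

        public class UseExistingOrSequenceGenerator extends SequenceGenerator {
            @Override
            public Serializable generate(SessionImplementor session, Object obj)
                    throws HibernateException {
                Project entity = (Project) obj;
                // Keep an id the caller assigned; only hit the sequence
                // when the id is unset (<= 0, per the question)
                if (entity.getId() > 0) {
                    return entity.getId();
                }
                return super.generate(session, obj);
            }
        }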

    Read the article

  • Merging two folders using git

    - by vrish88
    I'm working on a project with some people who have never used git before. Not knowing the capabilities of git, they created two versions of the project: development and production. These two versions are both present in the current environment. To complicate things further, this other user created these folders in addition to the old development folder. So the project directory looks like this:

        /root
            /proj      (old dev folder with my own code in it)
            /dev_proj  (new folder which I would like to merge /prod with)
            /prod_proj (production code)

    So what I'd like to do is merge the work that I've done in /proj with the work in /dev_proj. Is there a way to do this with git? I've thought about creating a branch, copying all the files from /proj to /dev_proj, and merging that branch with master. Would this work? Thanks, and if I should clarify something, let me know.
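    One git-native way to do it (a sketch, assuming both folders are git repositories with a master branch; if /proj is not one yet, git init and commit it first):

        cd /root/dev_proj
        # Bring the old folder's history in as a remote and merge it
        git remote add old-proj /root/proj
        git fetch old-proj
        git checkout -b merge-proj
        git merge old-proj/master
        # resolve any conflicts and commit, then fold into master:
        git checkout master
        git merge merge-proj

    This preserves the history of /proj, which the copy-files-onto-a-branch approach would lose.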

    Read the article

  • Routing classic asp and through an MVC application

    - by Matthias
    We are starting to convert a large classic ASP application into MVC (using C#). An additional requirement is that all classic routes get "translated" to MVC ones ('mydomain.com/productdetail.asp?id=13' should become 'mydomain.com/products/13') even before we start writing the first controller or view. So basically, we want to use the routing from MVC but have the classic ASP handle the response. And these are my questions:

    1. How do we use the new nice URLs but have classic ASP handle the construction of the HTML result? Within the classic ASP pages, the new MVC URL pattern should be used for links.
    2. What is the best way of translating the old URLs to the new ones and making them accessible within the classic ASP site (using COM, I guess)?
    3. When an old/classic URL is requested, how would I correctly handle that request so that browsers/search engines understand that the page has moved to the new URL?

    Thanks in advance!
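    On point 3, a permanent (301) redirect is the usual signal to browsers and search engines. A sketch of what the MVC side could look like, if the legacy pattern is routed into MVC (route and names are illustrative, not the poster's actual code):

        // In Global.asax: map the new URL shape
        routes.MapRoute(
            "ProductDetail",
            "products/{id}",
            new { controller = "Products", action = "Detail" });

        // Answer the legacy URL with a 301 pointing at the new one:
        public ActionResult LegacyProductDetail(int id)
        {
            Response.StatusCode = 301;   // moved permanently
            Response.RedirectLocation = Url.Action("Detail", "Products", new { id });
            return new EmptyResult();
        }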

    Read the article

  • Boost 1.4.0, "assert" identifier not found

    - by Adam Haile
    I'm trying to compile an old project, originally written for Linux, on Windows. It uses Boost 1.4.0, and whenever I compile, it throws error C3961: "assert": identifier not found. I'm using Visual Studio 2008 SP1. When I drill down into assert.hpp, it includes this:

        # include <assert.h> // .h to support old libraries w/o <cassert> - effect is the same
        # define BOOST_ASSERT(expr) assert(expr)

    BOOST_ASSERT is actually what's failing, and VS doesn't seem to recognize assert() even though assert.h is obviously included. As far as I can tell, all the failures are in files that are part of Boost, not my own code, but it throws about 1200 of them. Any ideas how to fix this?
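    A minimal isolation test (a sketch) can tell you whether the macro itself is broken in your environment, or whether something in the project is clobbering assert before Boost sees it:

        // Build this alone, with the same compiler flags as the failing project
        #include <cassert>
        #include <boost/assert.hpp>

        int main() {
            BOOST_ASSERT(1 + 1 == 2);  // if this compiles cleanly, a project
                                       // header is likely redefining assert
            return 0;
        }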

    Read the article

  • Django admin interface upload failing on request data read error

    - by Jake
    Hi All, this is an updated version of an old question I asked. I've now done a lot more testing, plus the old question got hijacked. I'm getting a request data read error when trying to upload files to the Django admin interface. Files under about 150k work, but bigger files always fail, almost always at around 192k (that's 3 chunks) completed, sometimes at around 160k. The exception I get is below:

        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
          return self._file.read(num_bytes)
        IOError: request data read error

    I've tried Chrome and Firefox on Windows and Firefox on Mac - same results. I can upload to other sites, so I don't think it's my connection. I'm running Python 2.4, Django 1.1, mod_wsgi, on CentOS (a Media Temple DV server). Locally (with the Django development server) it's fine. Everything I've found on this issue says it's a mod_python issue and that changing to mod_wsgi will fix it, but I am already running mod_wsgi. Can anyone help?
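    One configuration knob worth ruling out (a guess prompted by the consistent ~160-192k cutoff, not a confirmed diagnosis): a request-body limit somewhere in the Apache stack, which aborts the body read at a fixed size.

        # In the vhost or an .htaccess file - 0 means unlimited:
        LimitRequestBody 0
        # If mod_security is active on the server (common on managed hosts),
        # its body buffering limit can produce the same symptom:
        SecRequestBodyLimit 13107200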

    Read the article

  • What are the things I use every day programmed with?

    - by sub
    It isn't so interesting to find out what this text editor here or that IRC client there was programmed with; it isn't really hard to find out either, and there are no really surprising answers. "Wow, so it was programmed in Python, I didn't expect that." What I'm asking is: what are the things that we daily see, use or generally need programmed with? To name a few (really only a few of those out there):

    - My alarm clock. It has many features, so it would probably be hard to program with assembler or whatever; did they use a (higher-level) programming language? If yes, which?
    - My electrical toothbrush.
    - The (stupid) board computer of my car (6 years old, has few features, but a red LED display showing me how cold/warm it is outside and how much gas I'm using per hour at the moment).
    - Those (old) plastic mini-mini computers with the LCD(?) displays that only had one game available on them: Pac-Man, Tetris or so.
    - I'm not directly thinking of this, but it may be similar: other, probably more interesting, things I didn't mention.

    Read the article

  • pasteHTML removes markup

    - by ullmark
    I am writing a plugin for an old IE-only WYSIWYG editor which resides in an old CMS. I've created a plugin that opens a popup where the user can enter the URL of a YouTube clip. The popup then creates the correct <object..><param..> markup for the embed and uses Internet Explorer's pasteHTML function:

        var range = plugin.editorDocument.selection.createRange();
        var embedHtml = OpenDialog(dialogUrl, null, 400, 200);
        if (!embedHtml) {
            return;
        }
        range.pasteHTML(embedHtml);

    I know it's missing a bit of information about some of the variables, but you get the picture. The problem is that the <param> tags get removed when I run pasteHTML. I wonder if anyone has an idea for fixing this and letting me keep my param tags.
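    A pattern many WYSIWYG editors use for embeds, offered here as a hedged sketch rather than a known fix for this particular editor: paste an inert placeholder element that IE won't strip, and swap in the real <object>/<param> markup at save/render time (clipUrl and the placeholder image are illustrative):

        // Paste a placeholder carrying the clip URL in an attribute...
        range.pasteHTML('<img class="youtube-embed" src="placeholder.gif" alt="' + clipUrl + '">');
        // ...then, in the CMS's save/render step, replace each such <img>
        // with the full <object><param>...</param></object> markup.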

    Read the article

  • WCF does not generate the properties

    - by BDotA
    I have a .NET 1.1 ASMX web service and want to use it in a client WinForms app. If I go with the old way and add it as a "Web Reference", then I have access to two of its properties, "Url" and "UseDefaultCredentials", and it works fine. But if I go with the new WCF way and add it as a Service Reference, I still have access to the methods of that ASMX service, but those two properties are missing. What is the reason for that? So, for example, in the old way (adding a Web Reference) this code is valid:

        TransferService transferService = new TransferService();
        transferService.Url = "http://something.asmx";
        transferService.Credentials = System.Net.CredentialCache.DefaultCredentials;
        string[] machines = transferService.GetMachines();

    But in the new way (adding a Service Reference):

        using (TransferServiceSoapClient transferServiceSoapClient = new TransferServiceSoapClient("TransferServiceSoap"))
        {
            transferServiceSoapClient.Url = "someUrl.asmx"; // Cannot resolve Url
            transferServiceSoapClient.GetMachines(new GetMachinesRequest());
            transferServiceSoapClient.Credentials = ....; // Cannot resolve Credentials
        }
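    The WCF-generated client exposes the same things through different members; a sketch of the usual equivalents (the client type name comes from the post, the address is illustrative):

        using (var client = new TransferServiceSoapClient("TransferServiceSoap"))
        {
            // Equivalent of the old .Url property:
            client.Endpoint.Address =
                new System.ServiceModel.EndpointAddress("http://something.asmx");
            // Equivalent of the old .Credentials property:
            client.ClientCredentials.Windows.ClientCredential =
                System.Net.CredentialCache.DefaultNetworkCredentials;
            client.GetMachines(new GetMachinesRequest());
        }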

    Read the article

  • change email address format with minimal disruption

    - by femi
    Hello, all the email addresses in my organization are in the format [email protected]. This was started when we were a small organization. Now we have grown and need to use something a bit more professional, like [email protected]. How can this change be implemented with minimal disruption? We currently only use SmarterMail. Could receiving ONLY with the old address and replying with the new be a solution... till we wean our recipients off the old email address? Any suggestions are welcome. How would moving to Exchange help in this instance? Can it be configured to automatically send out using a different address? Thanks

    Read the article

  • Visual Studio Unit Tests : dll is not trusted

    - by Ian
    I'm struggling to get some unit tests running and wondering if anyone has anything insightful. The setup is that we've got a bunch of referenced DLLs on a server, and when I try to execute I get the old test run deployment issue:

        The location of the file or directory 'c:\source\ProjectName\bin\debug\3rdPartyLibrary.dll' is not trusted.

    I've tried the old caspol command:

        caspol -m -ag 1.2 -url file:\\server\binaries\* FullTrust

    which seems to work for everything bar one DLL. I'm currently having to manually change the permissions every time I do a build of the test project, which is a pain. Anyone have any suggestions? Running a Win7 64-bit OS, btw.

    Read the article

  • Why is Read-Modify-Write necessary for registers on embedded systems?

    - by Adam Shiemke
    I was reading http://embeddedgurus.com/embedded-bridge/2010/03/different-bit-types-in-different-registers/, which said:

        With read/write bits, firmware sets and clears bits when needed. It typically
        first reads the register, modifies the desired bit, then writes the modified
        value back out

    and I have run into that construct while maintaining some production code coded by the old-salt embedded guys here. I don't understand why this is necessary. When I want to set/clear a bit, I always just OR/NAND with a bitmask. To my mind, this solves any thread-safety problems, since I assume setting a register (either by assignment or ORing with a mask) only takes one cycle. On the other hand, if you first read the register, then modify, then write, an interrupt happening between the read and the write may result in writing an old value to the register. So why read-modify-write? Is it still necessary?
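    A small C sketch of what the question is probing at (the register address and bit position are made up): the compound-assignment form is usually just a compact spelling of the same read-modify-write sequence, not an atomic single-cycle store.

        #include <stdint.h>

        #define PORT_REG (*(volatile uint32_t *)0x40020000u) /* hypothetical MMIO address */

        void set_bit_explicit(void)
        {
            uint32_t v = PORT_REG;  /* read   */
            v |= (1u << 3);         /* modify */
            PORT_REG = v;           /* write: an ISR firing between the read and
                                       the write can have its own update lost */
        }

        void set_bit_compact(void)
        {
            PORT_REG |= (1u << 3);  /* on most cores this still compiles to a
                                       load/or/store triple - same race as above */
        }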

    Read the article

  • Unregistering a COM wrapped .NET assembly

    - by flopdix
    I created a COM-exposed .NET 2.0 DLL and registered it with my Windows operating system using 'regasm' and 'gacutil' (in the GAC). Then I tried to call this component from my classic ASP page. It worked fine. But then I needed to change the functionality of my assembly. I unregistered it again using the 'regasm' and 'gacutil' (from the GAC) utilities, copied my new .NET DLL over, and registered again (this time using a new version of the DLL). For some reason, I still have a pointer to my old assembly, and new calls to the DLL from the ASP page are not working. Any ideas on what process I need to follow to ensure that all the references to the old version are completely removed? I appreciate any help.
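    One step that is easy to miss (a hedged guess: IIS keeps the old COM server loaded in its worker process, so the classic ASP page keeps binding to it) is recycling IIS between unregister and re-register. A sketch of the full sequence, with MyAssembly standing in for the real name:

        rem Unregister the old version and flush it out of IIS
        regasm /unregister MyAssembly.dll
        gacutil /u MyAssembly
        iisreset
        rem Register the new build
        gacutil /i MyAssembly.dll
        regasm MyAssembly.dll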

    Read the article

  • Deploy maven generated site on Google Code svn?

    - by xamde
    Using a Google Code svn as a basic Maven repository is easy. However, using mvn site:deploy efficiently on Google Code seems hard. So far, I have found only these solutions:

    - Deploy to a local file:/// and use a Perl script to delete the old files and copy the new ones.
    - Use wagon-svn to deploy. This is very slow (hours!), does not delete old files, and all mime-types end up wrong.

    I am looking for a solution that allows new developers on my projects to check out the current source and just use it, without requiring them to install Perl or learn weird steps to perform or wait hours.
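    For reference, the usual wagon-svn wiring looks roughly like this (a sketch; the project URL is illustrative and the extension version should be checked against the java.net repository):

        <distributionManagement>
          <site>
            <id>googlecode-site</id>
            <url>svn:https://myproject.googlecode.com/svn/site</url>
          </site>
        </distributionManagement>
        <build>
          <extensions>
            <extension>
              <groupId>org.jvnet.wagon-svn</groupId>
              <artifactId>wagon-svn</artifactId>
              <version>1.9</version>
            </extension>
          </extensions>
        </build>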

    Read the article

  • How do I fix this NameError?

    - by Kyle Kaitan
    I want to use the value v inside of an instance method on the metaclass of a particular object:

        v = ParserMap[kind][:validation] # We want to use this value later.
        s = ParserMap[kind][:specs]
        const_set(name, lambda {
          p = Parser.new(&s) # This line starts a new scope...
          class << p
            define_method :validate do |opts|
              v.call(self, opts) # => NameError! The `class` keyword above
                                 #    has started a new scope and we lost
                                 #    old `v`.
            end
          end
          p
        })

    Unfortunately, the class keyword starts a new scope, so I lose the old scope and I get a NameError. How do I fix this?
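    One way out, as a sketch of the standard idiom: grab the singleton class as an object instead of opening it with the class keyword, so the define_method block remains a closure over v.

        singleton = class << p; self; end     # returns the metaclass object
        singleton.send(:define_method, :validate) do |opts|
          v.call(self, opts)                  # the block captures `v` just fine
        end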

    Read the article

  • Should I trust Redis for data integrity?

    - by Jiaji
    In my current project, I have PostgreSQL as my master DB and Redis as a kind of slave; e.g., when some user adds another as a friend, first the relationship is stored in PostgreSQL, and then a friend list in Redis is updated. When some user's friend list is requested, it is pulled out of Redis instead of PostgreSQL.

    The question is: when I update the friend list in Redis, should I get a fresh copy out of PostgreSQL and replace the old list in Redis with the new one, or should I keep the old list and simply SADD the user id into it? The latter is of course best for performance, but intuitively the former does a better job of keeping the data integrity. And if something like Celery is used, is the second method worth the risk?
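    The two options side by side, as a rough redis-py-style sketch (key names and variables are illustrative):

        import redis
        r = redis.Redis()

        key = "friends:%d" % user_id

        # (a) Incremental update: one cheap operation, but any missed task
        #     (e.g. a dropped Celery job) leaves the set permanently stale
        r.sadd(key, friend_id)

        # (b) Full rebuild from PostgreSQL: heavier, but self-healing;
        #     pipeline it so readers never observe a half-built set
        pipe = r.pipeline(transaction=True)
        pipe.delete(key)
        for fid in friend_ids_from_postgres:
            pipe.sadd(key, fid)
        pipe.execute()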

    Read the article

  • In Delphi 7, why can I assign a value to a const?

    - by Blorgbeard
    I copied some Delphi code from one project to another, and found that it doesn't compile in the new project, though it did in the old one. The code looks something like this:

        procedure TForm1.CalculateGP(..)
        const
          Price : money = 0;
        begin
          ...
          Price := 1.0;
          ...
        end;

    So in the new project, Delphi complains that "left side cannot be assigned to" - understandable! But this code compiles in the old project. So my question is: why? Is there a compiler switch to allow consts to be reassigned? How does that even work? I thought consts were replaced by their values at compile time?
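    There is such a switch: writeable typed constants. A typed constant (one declared with a type, like Price above) is stored like an initialized static variable rather than substituted at compile time, and the old project most likely has {$J+} enabled, either in its project options or in the source. A sketch:

        {$J+}  // same as {$WRITEABLECONST ON}: typed constants become assignable
        procedure TForm1.CalculateGP;
        const
          Price: money = 0;  // stored statically; keeps its value between calls
        begin
          Price := 1.0;      // legal under $J+, a compile error under $J-
        end;
        {$J-}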

    Read the article
