Search Results

Search found 612 results on 25 pages for 'corruption'.

Page 5/25 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • segmentation fault on Unix - possible stack corruption

    - by bob
    Hello, I'm looking at a core from a process running on Unix. Usually I can work my way around and root through the backtrace to try to identify a memory issue. In this case, I'm not sure how to proceed. Firstly, the backtrace only gives 3 frames where I would expect a lot more. For those frames, all the function parameters presented appear to be completely invalid; they are not what I would expect. Some pointer parameters have the following associated with them: "Cannot access memory at address". Would this suggest some kind of complete stack corruption? I ran the process with libumem and all the buffers were reported as being clean; umem_status reported nothing either. So basically I'm stumped. What are the likely causes? What should I look for in the code, since libumem appears to have reported no errors? Any suggestions on how I can debug further? Any extra features in mdb I should consider? Thank you.
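
    For reference, a backtrace that bottoms out after a few frames with unreadable parameters is the classic signature of a clobbered stack, and libumem instruments the heap allocator, so clean umem reports are still consistent with a stack overrun. A minimal sketch (hypothetical code, not from the process in question) of the kind of overflow that produces this picture:

        #include <cstring>

        // Writing past a local array tramples the saved frame pointer and
        // return address that the debugger needs to unwind the stack. The
        // core then shows only a few frames, with garbage parameters.
        static void victim(const char* input) {
            char buf[16];
            std::strcpy(buf, input); // no bounds check: overflows on long input
        }

        int main() {
            const char attack[] =
                "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"
                "AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA"; // 64 bytes into a 16-byte buffer
            victim(attack);
            return 0; // likely never reached; crashes on return from victim()
        }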

    Read the article

  • Problem using delete[] (Heap corruption) when implementing operator+= (C++)

    - by Darel
    I've been trying to figure this out for hours now, and I'm at my wit's end. I would surely appreciate it if someone could tell me what I'm doing wrong. I have written a simple class to emulate basic functionality of strings. The class's members include a character pointer data (which points to a dynamically created char array) and an integer strSize (which holds the length of the string, sans terminator). Since I'm using new and delete, I've implemented the copy constructor and destructor. My problem occurs when I try to implement operator+=. The LHS object builds the new string correctly (I can even print it using cout), but the problem comes when I try to deallocate the data pointer in the destructor: I get a "Heap Corruption Detected after normal block" error at the memory address pointed to by the data array the destructor is trying to deallocate. Here's my complete class and test program:

        #include <iostream>
        #include <cstring> // for strlen
        using namespace std;

        // Class to emulate string
        class Str {
        public:
            // Default constructor
            Str(): data(0), strSize(0) { }

            // Constructor from string literal
            Str(const char* cp) {
                data = new char[strlen(cp) + 1];
                char *p = data;
                const char* q = cp;
                while (*q)
                    *p++ = *q++;
                *p = '\0';
                strSize = strlen(cp);
            }

            Str& operator+=(const Str& rhs) {
                // create new dynamic memory to hold concatenated string
                char* str = new char[strSize + rhs.strSize + 1];
                char* p = str;            // new data
                char* i = data;           // old data
                const char* q = rhs.data; // data to append

                // append old string to new string in new dynamic memory
                while (*p++ = *i++)
                    ;
                p--;
                while (*p++ = *q++)
                    ;
                *p = '\0';

                // assign new values to data and strSize
                delete[] data;
                data = str;
                strSize += rhs.strSize;
                return *this;
            }

            // Copy constructor
            Str(const Str& s) {
                data = new char[s.strSize + 1];
                char *p = data;
                char *q = s.data;
                while (*q)
                    *p++ = *q++;
                *p = '\0';
                strSize = s.strSize;
            }

            // destructor
            ~Str() { delete[] data; }

            const char& operator[](int i) const { return data[i]; }
            int size() const { return strSize; }

        private:
            char *data;
            int strSize;
        };

        ostream& operator<<(ostream& os, const Str& s) {
            for (int i = 0; i != s.size(); ++i)
                os << s[i];
            return os;
        }

        // Test constructor, copy constructor, and += operator
        int main() {
            Str s = "hello";  // destructor for s works ok
            Str x = s;        // destructor for x works ok
            s += "world!";    // destructor for s gives error
            cout << s << endl;
            cout << x << endl;
            return 0;
        }
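
    For what it's worth, counting the writes in operator+= suggests where the corruption could come from: the first loop copies strSize characters plus a terminator, p-- steps back onto that terminator, the second loop copies rhs.strSize characters plus another terminator, and the final *p = '\0' then lands one byte past the strSize + rhs.strSize + 1 bytes that were allocated. A hedged sketch of an append that writes exactly the allocated number of bytes (indices made explicit; not the only way to arrange it):

        // Sketch only: copy both halves with explicit indices so exactly
        // strSize + rhs.strSize + 1 bytes are written to the new buffer.
        char* str = new char[strSize + rhs.strSize + 1];
        for (int i = 0; i != strSize; ++i)
            str[i] = data[i];                  // old contents
        for (int j = 0; j != rhs.strSize; ++j)
            str[strSize + j] = rhs.data[j];    // appended contents
        str[strSize + rhs.strSize] = '\0';     // single terminator, in bounds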

    Read the article

  • Flash player displays incorrect colours

    - by neurolysis
    This is not a webapps issue, so please don't move it there. This is an issue with Flash, not with one specific website, as shown in the screenshot I attached. The issue appears to come and go with time; it often also manifests as stuttering playback with audio corruption and strange artifacts on the frames. I tried uninstalling and reinstalling Flash a while back when it first started doing this, and that worked briefly, but then the problem came back. Now reinstalling doesn't fix it, even temporarily. Any ideas on the cause and/or a solution?

    Read the article

  • Does a bad Internet connection increase bandwidth usage?

    - by Synetech
    My (Rogers) cable connection has been pretty bad recently (channels 3 and 10 are particularly fuzzy; it's analog, not digital cable). Not surprisingly, this has caused my cable modem to drop out and have to reestablish a connection a couple of times since it started. The poor connection of course means higher corruption (not necessarily dropped packets per se), which causes the TCP/IP stack to retransmit packets more often. Reduced throughput aside, I got to wondering whether it increases the actual bandwidth usage. That is, if there is a high error rate on the line causing packets to be retransmitted: Does this increase a bandwidth monitoring program's numbers? Does the ISP count the retransmitted packets toward the monthly cap? Based on what I remember from my university networking courses and common sense, I have a feeling that the answer to both questions is yes, but I cannot reliably measure the first, and have no authoritative answer for the second. I'm wondering if maybe the retransmitted packets are acknowledged as duplicates and thus not counted somewhere along the line.
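
    A back-of-the-envelope sketch of why the answer to the first question is plausibly yes (illustrative numbers only, not a measurement): if a fraction p of segments has to be resent, the expected number of transmissions per segment is roughly 1/(1-p), so the bytes on the wire, and anything a wire-level monitor counts, scale up by that factor for the same useful payload.

        #include <initializer_list>
        #include <iostream>

        int main() {
            // Illustrative arithmetic only: real TCP (timeouts, congestion
            // control, selective acknowledgement) is more complicated.
            const double payloadMB = 100.0; // data the application actually wants
            for (double p : {0.0, 0.01, 0.05, 0.10}) {
                double onWire = payloadMB / (1.0 - p); // geometric-series mean
                std::cout << "loss " << p * 100 << "%: ~" << onWire
                          << " MB on the wire\n";
            }
            return 0;
        }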

    Read the article

  • Backup / Disaster Recovery, should I store RAR-compressed files?

    - by moraleida
    I'm in the process of recovering files from an accidentally formatted ext4 partition using PhotoRec. It had about 300GB of data, of which I've already recovered about 30GB. So far, it seems to me that the recovery of RAR-compressed files has been much more successful than the recovery of individual uncompressed files and ZIP-compressed files, in the sense that a lot of the recovered files/ZIPs were unreadable, while pretty much all of the RAR files were intact. Is there really such a relationship? Are RAR-compressed files less prone to corruption and thus easier to recover?

    Read the article

  • Why would an ext3 filesystem be rolled back on a Debian VM running in VirtualBox after loss of power to the host?

    - by Sevas
    A Debian virtual machine runs as a VirtualBox guest. Its filesystem is ext3. The host system loses power, and after booting up the host and the guest VM, I find that the VM's filesystem has been rolled back to a previous state, losing changes made to the filesystem some time before the power loss. The operations that were rolled back had fully completed before the loss of power (files fully copied, file handles closed, etc.), but it's possible and even likely that other write operations were occurring on the VM at the point of the crash. So I am trying to figure out whether it's the filesystem recovery process that rolls back filesystem operations after encountering corruption post power loss, or whether it's possibly related to VirtualBox and the way it ignores flush requests for performance gains by default (discussed here). Are there any other factors that would result in the filesystem being rolled back after losing power?
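
    As background for the flush angle (an assumption about the setup, not a diagnosis): if the hypervisor discards flush requests, even a write the guest has synced can be lost, because durability on ext3 hinges on the flush actually reaching the platter. A minimal sketch, with a hypothetical path, of what a durable write looks like from inside the guest:

        #include <fcntl.h>
        #include <unistd.h>
        #include <cstdio>

        // Sketch: data is only durable once fsync() on the file (and, for a
        // newly created file, on its directory) has returned successfully.
        // If flushes are ignored below the guest, even this can roll back.
        int main() {
            int fd = open("/data/journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
            if (fd < 0) { perror("open"); return 1; }

            const char record[] = "committed\n";
            if (write(fd, record, sizeof record - 1) < 0) { perror("write"); return 1; }
            if (fsync(fd) < 0) { perror("fsync"); return 1; } // flush data and metadata
            close(fd);

            int dirfd = open("/data", O_RDONLY); // flush the directory entry too
            if (dirfd >= 0) { fsync(dirfd); close(dirfd); }
            return 0;
        }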

    Read the article

  • JNI String Corruption

    - by Chris Dennett
    Hi everyone, I'm getting weird string corruption across JNI calls which is causing problems on the Java side. Every so often, I'll get a corrupted string in the passed array, which sometimes contains parts of the original non-corrupted string. The C++ code is supposed to set the first index of the array to the address; it's a nasty hack to get around method call limitations. Additionally, the application is multi-threaded.

        remoteaddress[0]: 10.1.1.2:49153
        remoteaddress[0]: 10.1.4.2:49153
        remoteaddress[0]: 10.1.6.2:49153
        remoteaddress[0]: 10.1.2.2:49153
        remoteaddress[0]: 10.1.9.2:49153
        remoteaddress[0]: {garbage here}
        java.lang.NullPointerException
                at kokuks.KKSAddress.<init>(KKSAddress.java:139)
                at kokuks.KKSAddress.createAddress(KKSAddress.java:48)
                at kokuks.KKSSocket._recvFrom(KKSSocket.java:963)
                at kokuks.scheduler.RecvOperation$1.execute(RecvOperation.java:144)
                at kokuks.scheduler.RecvOperation$1.execute(RecvOperation.java:1)
                at kokuks.KKSEvent.run(KKSEvent.java:58)
                at kokuks.KokuKS.handleJNIEventExpiry(KokuKS.java:872)
                at kokuks.KokuKS.handleJNIEventExpiry_fjni(KokuKS.java:880)
                at kokuks.KokuKS.runSimulator_jni(Native Method)
                at kokuks.KokuKS$1.run(KokuKS.java:773)
                at java.lang.Thread.run(Thread.java:717)
        remoteaddress[0]: 10.1.7.2:49153

    The null pointer exception comes from trying to use the corrupt string. In C++, the address prints to standard out normally, but doing this reduces the rate of errors, from what I can see. The C++ code (if it helps):

        /*
         * Class:     kokuks_KKSSocket
         * Method:    recvFrom_jni
         * Signature: (Ljava/lang/String;[Ljava/lang/String;Ljava/nio/ByteBuffer;IIJ)I
         */
        JNIEXPORT jint JNICALL Java_kokuks_KKSSocket_recvFrom_1jni
          (JNIEnv *env, jobject obj, jstring sockpath, jobjectArray addrarr,
           jobject buf, jint position, jint limit, jlong flags)
        {
            if (addrarr && env->GetArrayLength(addrarr) > 0) {
                env->SetObjectArrayElement(addrarr, 0, NULL);
            }

            jboolean iscopy;
            const char* cstr = env->GetStringUTFChars(sockpath, &iscopy);
            std::string spath = std::string(cstr);
            env->ReleaseStringUTFChars(sockpath, cstr); // release me!

            if (KKS_DEBUG) {
                std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__
                          << std::endl;
            }

            ns3::Ptr<ns3::Socket> socket = ns3::Names::Find<ns3::Socket>(spath);

            if (!socket) {
                std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__
                          << " socket not found for path!!" << std::endl;
                return -1; // not found
            }

            if (!addrarr) {
                std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__
                          << " array to set sender is null" << std::endl;
                return -1;
            }

            jsize arrsize = env->GetArrayLength(addrarr);

            if (arrsize < 1) {
                std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__
                          << " array too small to set sender!" << std::endl;
                return -1;
            }

            uint8_t* bufaddr = (uint8_t*)env->GetDirectBufferAddress(buf);
            long bufcap = env->GetDirectBufferCapacity(buf);
            uint8_t* realbufaddr = bufaddr + position;
            uint32_t remaining = limit - position;

            if (KKS_DEBUG) {
                std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__
                          << " bufaddr: " << bufaddr << ", cap: " << bufcap
                          << std::endl;
            }

            ns3::Address aaddr;
            uint32_t mflags = flags;
            int ret = socket->RecvFrom(realbufaddr, remaining, mflags, aaddr);

            if (ret > 0) {
                if (KKS_DEBUG)
                    std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__
                              << " addr: " << aaddr << std::endl;

                ns3::InetSocketAddress insa = ns3::InetSocketAddress::ConvertFrom(aaddr);
                std::stringstream ss;
                insa.GetIpv4().Print(ss);
                ss << ":" << insa.GetPort() << std::ends;

                if (KKS_DEBUG)
                    std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__
                              << " addr: " << ss.str() << std::endl;

                jsize index = 0;
                const char *cstr = ss.str().c_str();
                jstring jaddr = env->NewStringUTF(cstr);

                if (jaddr == NULL)
                    std::cout << "[kks-c~" << spath << "] " << __PRETTY_FUNCTION__
                              << " jaddr is null!!" << std::endl;

                //jaddr = (jstring)env->NewGlobalRef(jaddr);
                env->SetObjectArrayElement(addrarr, index, jaddr);

                //if (env->ExceptionOccurred()) {
                //    env->ExceptionDescribe();
                //}
            }

            jint jret = ret;
            return jret;
        }

    The Java code (if it helps):

        /**
         * Pass an array of size 1 into remote address, and this will be set with
         * the sender of the packet (hax). This emulates C++ references.
         *
         * @param remoteaddress
         * @param buf
         * @param flags
         * @return
         */
        public int _recvFrom(final KKSAddress remoteaddress[], ByteBuffer buf, long flags) {
            if (!kks.isCurrentlyThreadSafe())
                throw new RuntimeException("Not currently thread safe for ns-3 functions!");
            //lock.lock();
            try {
                if (!buf.isDirect())
                    return -6; // not direct!!
                final String[] remoteAddrStr = new String[1];
                int ret = 0;
                ret = recvFrom_jni(path.toPortableString(), remoteAddrStr, buf,
                                   buf.position(), buf.limit(), flags);
                if (ret > 0) {
                    System.out.println("remoteaddress[0]: " + remoteAddrStr[0]);
                    remoteaddress[0] = KKSAddress.createAddress(remoteAddrStr[0]);
                    buf.position(buf.position() + ret);
                }
                return ret;
            } finally {
                errNo = _getErrNo();
                //lock.unlock();
            }
        }

        public int recvFrom(KKSAddress[] fromaddress, final ByteBuffer bytes, long flags, long timeoutMS) {
            if (KokuKS.DEBUG_MODE)
                printMessage("public synchronized int recvFrom(KKSAddress[] fromaddress, final ByteBuffer bytes, long flags, long timeoutMS)");
            if (kks.isCurrentlyThreadSafe()) {
                return _recvFrom(fromaddress, bytes, flags); // avoid event
            }
            fromaddress[0] = null;
            RecvOperation ro = new RecvOperation(kks, this, flags, true, bytes, timeoutMS);
            ro.start();
            fromaddress[0] = ro.getFrom();
            return ro.getRetCode();
        }
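
    One detail that stands out to an outside reader (an observation, not a confirmed diagnosis): ss.str() returns a temporary std::string, so the pointer obtained from ss.str().c_str() dangles as soon as that statement ends, and the NewStringUTF call on the following line then reads freed memory. Under a multi-threaded allocator that would produce exactly this kind of intermittent garbage, and it would also explain why unrelated allocations shift the behaviour. A sketch of the safer pattern:

        // Bind the temporary to a named std::string so the buffer backing
        // c_str() is still alive when NewStringUTF() consumes it.
        const std::string addrstr = ss.str();
        jstring jaddr = env->NewStringUTF(addrstr.c_str());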

    Read the article

  • Hard drives (SATA/ATA) corrupting

    - by JC Denton
    Hello all, two years ago I bought relatives a new computer for Christmas and installed Ubuntu on it. Ever since then it has been experiencing problems with the hard drives. The hard drive supplied with the machine was a SATA drive. When it appeared to be having problems (files and folders with invalid encoding started appearing), I replaced the SATA drive with the drive from their previous computer. I later replaced that one as well, the drive being rather old and thus more prone to failure. The replacement drive is an IDE drive, but the same problems have started to appear (files and folders showing up in Nautilus with invalid encoding). I fear the files and folders that are showing up are existing FS entries starting to corrupt. As it's happening with both the IDE and SATA drives, it's unlikely to be the drives themselves or the IDE/SATA controller, I believe. Any ideas as to what could be causing the (assumed) corruption? EDIT: You're right about the paragraphs. They were there in edit mode, but I'm still getting to grips with the whitespace format codes. The system is a "Primo Pro" AMD Phenom II X4 Quad Core 920 2.80GHz SILENT DDR2, ordered from overclockers.co.uk, and nothing has been added to it except for the replacement of the SATA drive with an AMD drive. It would seem unlikely for a barebones system to be underpowered.

    Read the article

  • What are "Excess Fragments" in defragmenting a hard drive?

    - by Andrew Swift
    I'm defragmenting my hard drive (XP SP3) with PerfectDisk 7.0, and it finds 816,659 excess fragments when I ask for an analysis. [Update] Specifically, it shows that the 1TB disk is 14% fragmented, with 19,693 fragments and 816,659 excess fragments. About 20% of the disk is still free space. What does "excess fragments" refer to? What is the difference between fragments and excess fragments? I have had problems in the past where I defragmented a fragmented disk and many files were corrupted. It seemed as though "excess fragments" referred to orphan pieces, where the program couldn't work out where to put them. If that were true, then defragmenting a disk would result in many incomplete files, and in fact I defragmented a disk full of MP3s and got a lot of corrupted files as a result. Instead, I started to simply format a separate disk and copy everything from one to the other. That way there were no orphan bits, and no file corruption. Does anybody know what "excess fragments" really are?

    Read the article

  • Files on ext4 on Drobo with corrupt, zeroed-out blocks

    - by Patrick
    I have a 2TB ext4 file system (Ubuntu running Linux kernel 2.6.31-22-server x86_64). This file system is the second drive on a Drobo box plugged in via USB. We've not had problems on the first drive. (Drobo limits drive size to 2TB due to some OS limitations, so if you have more space than that it appears as two separate drives.) I am sharing these files with Samba (smbd 3.4.0) with a mix of Windows and Linux workstations. Recently we've been experiencing data corruption in multiple files. In many cases I have an uncorrupted original file stored on one of the workstations. These are binary files of various formats (e.g. SQLite, but others as well). I used "split" to split a corrupt and an uncorrupted file into 4096-byte chunks (this is the block size of the ext4 file system). I then ran md5sum on pairs of chunks and discovered that the chunks matched in many cases, and in every case where they did not match, the corrupt chunk was a solid chunk of zeroes (620f0b67a91f7f74151bc5be745b7110, for what it's worth). I'm trying to track down a culprit but am a bit at a loss. I don't believe Samba is at fault, since I'm using it without issue on the first drive exported by the Drobo. What can I do to narrow this down and find out what's going on?
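
    A sketch of the block-level comparison described above, for anyone who wants to reproduce it without split and md5sum (filenames are placeholders): read both copies in 4096-byte chunks, flag the chunks that differ, and report whether the corrupt side is all zeroes.

        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <vector>

        int main() {
            const std::size_t kBlock = 4096; // ext4 block size on this filesystem
            std::ifstream good("original.db", std::ios::binary);
            std::ifstream bad("corrupt.db", std::ios::binary);
            std::vector<char> a(kBlock), b(kBlock);

            // Walk both files block by block; stops at the first length divergence.
            for (std::size_t blk = 0; good && bad; ++blk) {
                good.read(a.data(), kBlock);
                bad.read(b.data(), kBlock);
                std::streamsize na = good.gcount(), nb = bad.gcount();
                if (na == 0 && nb == 0) break;
                if (na != nb || !std::equal(a.begin(), a.begin() + na, b.begin())) {
                    bool zeroed = std::all_of(b.begin(), b.begin() + nb,
                                              [](char c) { return c == 0; });
                    std::cout << "block " << blk << " differs"
                              << (zeroed ? " (all zeroes)" : "") << '\n';
                }
            }
            return 0;
        }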

    Read the article

  • Automated Scanning for Corrupted Archive Files

    - by Synetech inc.
    Hi, I have all kinds of files that I have downloaded from everywhere over the years scattered around my hard drives. I'm in the process of trying to organize them all and have run into a problem. Sometimes a file was not downloaded correctly (this was a big issue when Chromium first came out) and is thus corrupt. For media files, this is relatively simple to determine (it requires actually opening and examining the file). For executables or other binaries, it is relatively difficult (executables may or may not crash; other binaries could be in completely unknown formats). Archive files (e.g. zip, rar, 7z, exe, ace), however, should be pretty simple: they have a built-in corruption-detection facility. My problem now is that I really don't want to open and test each and every single archived file throughout my drives; that would be a nightmare. I'm looking for a utility that can automate the process. Is there a program that can scan archive files on a drive and list the ones that are corrupt? Thanks a lot.
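
    In the absence of a dedicated tool, a small program can drive an archiver's built-in test mode. A sketch assuming the 7-Zip command-line tool is on the PATH and the scan root is a placeholder; 7z t returns a nonzero exit code when an archive fails its integrity check:

        #include <cstdlib>
        #include <filesystem>
        #include <iostream>
        #include <string>

        namespace fs = std::filesystem;

        int main() {
            // Extensions to hand to the archiver's test mode; adjust to taste.
            const char* exts[] = {".zip", ".rar", ".7z", ".ace"};
            for (const auto& entry : fs::recursive_directory_iterator("D:\\downloads")) {
                if (!entry.is_regular_file()) continue;
                std::string ext = entry.path().extension().string();
                for (const char* e : exts) {
                    if (ext == e) {
                        std::string cmd = "7z t \"" + entry.path().string() + "\" > nul";
                        if (std::system(cmd.c_str()) != 0) // nonzero = test failed
                            std::cout << "CORRUPT: " << entry.path() << '\n';
                    }
                }
            }
            return 0;
        }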

    Read the article

  • Sharepoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    Sharepoint 2007 MOSS with AD imported users. All servers are 2008. I have around 50 users, and over the past 2 months a handful of them have suddenly become unable to log in to Sharepoint. When they log in, they either get a blank screen or they are re-prompted. These users are using accounts that have been used for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, Exchange, LDAP). In the most recent case, last night I was forced to delete a user ("John Smith") because of corruption. The original account name was jsmith. I deleted him from Active Directory, then deleted him from the profile list in Sharepoint Shared Services. I could not find a way to delete him from the Sharepoint user list, but I reran the import after recreating his account (renaming it too, just to be sure, to "smithj"). At first this did not work; the user could still access all other resources but Sharepoint. Then, some 30 minutes later, it inexplicably started working. This morning, the user changed passwords, which immediately broke the login on Sharepoint again. I am at a loss on how to troubleshoot this.

    Read the article

  • Logfile deleted on Oracle database: how to re-create it?

    - by Daniel
    For my database assignment we were looking into "database corruption", and I was asked to delete the second redo log file, which I did with the command rm log02a.rdo in the $HOME/ORADATA/u03 directory. Now I start up my database using startup pfile=$PFILE nomount, then mount it using alter database mount; but when I try to open it with alter database open; it gives me the error:

        ORA-03113: end-of-file on communication channel
        Process ID: 22125
        Session ID: 25 Serial number: 1

    I am assuming this is because the second redo log file is missing. There is still log01a.rdo, but not the one I deleted. How can I go about recovering this now so that I can open my database again? I have looked into the database creation scripts, and they specify the log02a.rdo file to be size 10M and part of group 2. If I do select group#, member from v$logfile; I get:

        1 /oradata/student_db/user06/ORADATA/u03/log01a.rdo
        2 /oradata/student_db/user06/ORADATA/u03/log02a.rdo
        3 /oradata/student_db/user06/ORADATA/u03/log03a.rdo
        4 /oradata/student_db/user06/ORADATA/u03/log04a.rdo

    So it is part of group 2. If I try to add the log02a.rdo file again, I'm told it is "already part of the database". If I drop group 2 and then add it again with this command:

        ALTER DATABASE ADD LOGFILE GROUP 2 ('$HOME/ORADATA/u03/log02a.rdo') SIZE 10M;

    Nothing. Supposedly it alters the database, but it still won't start up. Any ideas what I can do to re-create this and be able to open my database again?

    Read the article

  • disk write cache buffer and separate power supply

    - by HugoRune
    Windows has a setting to turn off write-cache buffer flushing (quoted from the dialog in the attached image):

        Turn off Windows write-cache buffer flushing on the device
        To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure.

    Is it feasible and economical to get such a "separate power supply" for the internal SATA drives of a non-server PC? Under what name is such a power supply sold? I know that there are UPS devices that can be connected to external drives, but what is required to be able to switch this setting on safely for an internal disk?

    The setting has different descriptions in different versions of Windows:

        Windows XP: Enable write caching on the disk. This setting enables write caching in Windows to improve disk performance, but a power outage or equipment failure might result in data loss or corruption.

        Windows Server 2003: Enable write caching on the disk. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power.

        Windows Vista: Enable advanced performance. Recommended only for disks with a backup power supply. This setting further improves disk performance, but it also increases the risk of data loss if the disk loses power.

        Windows 7 and 8: Turn off Windows write-cache buffer flushing on the device. To prevent data loss, do not select this check box unless the device has a separate power supply that allows the device to flush its buffer in case of power failure.

    This article by Raymond Chen has some more detailed information about what the setting does.
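
    Independent of that checkbox, an application that cannot tolerate loss can request durability explicitly through the Win32 API; a sketch (the file name is hypothetical) of writing through the cache and forcing a flush:

        #include <windows.h>
        #include <iostream>

        int main() {
            // FILE_FLAG_WRITE_THROUGH asks the I/O stack not to lazily cache
            // this handle's writes; FlushFileBuffers additionally asks the
            // drive to empty its own write cache.
            HANDLE h = CreateFileA("C:\\data\\ledger.dat", GENERIC_WRITE, 0, NULL,
                                   OPEN_ALWAYS, FILE_FLAG_WRITE_THROUGH, NULL);
            if (h == INVALID_HANDLE_VALUE) return 1;

            const char rec[] = "balance=42\r\n";
            DWORD written = 0;
            WriteFile(h, rec, sizeof rec - 1, &written, NULL);

            if (!FlushFileBuffers(h)) // fails if the device cannot honor the flush
                std::cerr << "flush failed: " << GetLastError() << '\n';

            CloseHandle(h);
            return 0;
        }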

    Read the article

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it, imported the SQL file, and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I've done this before without issue; however, this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them, and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as the default for everything.) For example, with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and use urls;. I can even show tables; to see the expected table, but then when I try describe urls; or select * from urls;, it complains that the table does not exist (even though it just listed it). (MySQL Administrator lists the databases, but does not even list the tables; it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it even after scouring my hard drive). So I am trying to figure out a way to repair these databases/tables. I can't use the table repair function, since it complains that the table does not exist, and I can't dump them because, again, it complains that the tables don't exist. Like I've said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex editor). Any idea how I can repair this type of corruption? I don't mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.

    Read the article

  • Computer is dying--what should I be looking for?

    - by Will
    Okay, I'm a bit knowledgeable with pooters and such, but I'm confused. My computer is dying slowly, and I'm not sure which part is causing it.

    Computer details: Vista, Dell machine, Intel Q6600 at 2.4 GHz (quad core), standard memory and drive (unknown manufacturer).

    Symptoms: I would best describe the symptoms as memory corruption. After a couple of days on, I start getting applications crashing or failing to open for a lack of "resources". Sounds are corrupted. Onscreen text gets corrupted; the characters of text are garbled, not the pixels on the screen. Video memory seems untouched, as I haven't seen any misplaced pixels. Recently I've lost files on disk. I've also experienced errors reporting a supposed lack of disk space, even though I have fifty gigs free. There was one point where I couldn't get to the POST when booting up. After I cleaned everything (see next), this hasn't happened again.

    Diagnostic steps: First thing I did was clean the case. There was a lot of dust buildup on the heatsinks, so I cleaned all that up. No help. Next, I disconnected and reconnected everything, from power cables to memory (did not reseat the CPU). No change. Last, I ran the standard Vista memory diagnostics and ran chkdsk. Both reported no errors found. I have not run any POST tests, now that I think about it.

    I'm at a loss at this point. Disk appears fine, memory too. I'd expect motherboard issues to result in the thing not booting up, yet it does every time. What should I be looking at? What more can I do?

    Read the article

  • Tracking EXC_BAD_ACCESS on iPad

    - by Aleks
    I've been using this code to create my UIWindow:

        UIMyWindow* win = [[UIMyWindow alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame]];

    UIMyWindow isn't anything special; it just has a pointer to a C++ class that does some wrapping of Objective-C. Recently my application started crashing after I added a line of code that doesn't have anything to do with the error. The added line just allocates a C++ object, but program execution never even reaches it. Interestingly enough, my code works in Release. My only guess is that I introduced some memory corruption in a completely different place. My questions are: What type of memory corruption could that be? And are there good practices to track such problems down?

    Read the article

  • Symfony/Propel NestedSet left/right ID corruption/adjustment

    - by Mike Crowe
    Hi folks, I have a nested set application that seems to be getting corrupted. Here's what I'm seeing: we're using nested sets for a binary tree (any node can have 2 children). It appears to be working fine, but some event causes a discrepancy. For instance, the getNumberOfDescendants() count for the root node slowly increases as this event happens. Displaying the tree works fine, however, as does inserting (apparently). Has anybody seen anything like this before? For instance, my repair program shows these as the repairs that it makes:

        User pxxxxx left 0=>0, right 145=>129
        User axxxxx left 1=>1, right 124=>106
        User mxxxxx left 119=>117, right 120=>118
        User fxxxxx left 125=>107, right 144=>128
        User fxxxxx left 126=>108, right 131=>113
        User rxxxxx left 127=>109, right 128=>110
        User mxxxxx left 129=>111, right 130=>112
        User mxxxxx left 132=>114, right 143=>127
        User cxxxxx left 133=>115, right 142=>126
        User gxxxxx left 134=>116, right 137=>121
        User mxxxxx left 135=>119, right 136=>120
        User jxxxxx left 138=>122, right 141=>125
        User axxxxx left 139=>123, right 140=>124

    I thought at first it happened when I deleted a user, but it has since occurred without that event. Does anybody know of a cause that might generate this? I've tested ad nauseam on my local machine, but I can't duplicate it. I do have an issue where my production box is PHP 5.2.0, whereas my test device is 5.2.10. Could that be an issue? TIA, Mike
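
    For anyone chasing similar drift, the nested-set invariants are mechanical enough to check on every write. A sketch (the node layout is hypothetical, and it assumes left/right values are allocated consecutively with no reserved gaps): every node needs lft < rgt, and pooling all boundary values and sorting them should yield consecutive integers, so any gap or duplicate, like the ones the repair program is closing above, indicates corruption.

        #include <algorithm>
        #include <iostream>
        #include <vector>

        struct Node { int id, lft, rgt; };

        // Verify the two core nested-set invariants:
        // 1. every node has lft < rgt;
        // 2. all lft/rgt boundaries together form one consecutive run of
        //    integers (each value used exactly once, no gaps).
        bool validate(const std::vector<Node>& nodes) {
            std::vector<int> bounds;
            for (const Node& n : nodes) {
                if (n.lft >= n.rgt) {
                    std::cerr << "node " << n.id << ": lft >= rgt\n";
                    return false;
                }
                bounds.push_back(n.lft);
                bounds.push_back(n.rgt);
            }
            std::sort(bounds.begin(), bounds.end());
            for (std::size_t i = 0; i < bounds.size(); ++i) {
                if (bounds[i] != bounds[0] + static_cast<int>(i)) { // gap or duplicate
                    std::cerr << "boundary sequence broken at " << bounds[i] << '\n';
                    return false;
                }
            }
            return true;
        }

        int main() {
            // Example: root (1,6) with children (2,3) and (4,5) validates;
            // trees with gaps like the repair log above would fail.
            std::vector<Node> tree = {{1, 1, 6}, {2, 2, 3}, {3, 4, 5}};
            std::cout << (validate(tree) ? "ok" : "corrupt") << '\n';
            return 0;
        }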

    Read the article

  • Strange problem with NSMutableArray - Possibly some memory corruption

    - by user210504
    Hi! I am trying to update data in a table view using an NSMutableArray. Quite simple :( What happens is that I get my data from an NSURLConnection callback, parse it, store it in the array, and call reloadData on the table view. The problem is that when cellForRowAtIndexPath is called back by the framework, the array still shows the correct count of elements, but all the string objects I had stored earlier show up as invalid. Any pointers?

    Read the article

  • Download and write .tar.gz files without corruption.

    - by arbales
    I've tried numerous ways of downloading files, specifically .zip and .tar.gz, with Ruby and writing them to disk. I've found that the downloaded file appears to be the same as the reference (in size), but the archives refuse to extract. What I'm attempting now is below. Thanks!

        def download_request(url, filePath:path, progressIndicator:progressBar)
          file = File.open(path, "w+")
          begin
            Net::HTTP.get_response URI.parse(url) do |response|
              if response['Location'] != nil
                puts 'Direct to: ' + response['Location']
                return download_request(response['Location'], filePath:path, progressIndicator:progressBar)
              end
              # some stuff
              response.read_body do |segment|
                file.write(segment)
                # some progress stuff.
              end
            end
          ensure
            file.close
          end
        end

        download_request("http://github.com/jashkenas/coffee-script/tarball/master", filePath:"tarball.tar.gz", progressIndicator:nil)

    Read the article

  • SQL Compact Edition database corruption

    - by jdv
    Hi, our product is using MS SQL Compact Edition on a Windows machine (a laptop). It's basically a metadata index for files we have on the filesystem. Recently we have seen databases getting corrupted. This happens when the machine is very busy moving files around and has to do a tiny bit of database changes at the same time. I was somewhat shocked that this was possible at all; my expectation was that the database would stay coherent whatever the circumstances. Of course we are doing something wrong. Things we have checked so far:

        - Use of only one DB connection per thread
        - Specifying the maximum size when opening the database
        - The database is accessed by only one application, a .NET-based Windows service

    Are there other gotchas?

    Read the article

  • Copying an IO stream results in corruption

    - by StackedCrooked
    I have a small Mongrel webserver that sends the stdout of a process to an HTTP response:

        response.start(200) do |head, out|
          head["Content-Type"] = "text/html"
          status = POpen4::popen4(command) do |stdout, stderr, stdin, pid|
            stdin.close()
            FileUtils.copy_stream(stdout, out)
            FileUtils.copy_stream(stderr, out)
            puts "Sent response."
          end
        end

    This works well most of the time, but sometimes characters get duplicated. For example, this is what I get from the "man ls" command:

        LS(1)                        User Commands                        LS(1)

        NNAAMMEE
               ls - list directory contents

        SSYYNNOOPPSSIISS
               llss [_O_P_T_I_O_N]... [_F_I_L_E]...

        DDEESSCCRRIIPPTTIIOONN
               List information about the FILEs (the current directory by default).
               Sort entries alphabetically if none of --ccffttuuvvSSUUXX nor ----ssoorrtt.

               Mandatory arguments to long options are mandatory for short options

    For some mysterious reason, capital letters get duplicated. Can anyone explain what's going on?
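
    For what it's worth, the doubled capitals look exactly like man's overstrike formatting, where each bold character is emitted as the character, a backspace, then the character again (and underlines as underscore, backspace, character), rendered by something that drops the backspaces. A sketch of a filter that collapses those triples (C++ purely for illustration):

        #include <iostream>
        #include <string>

        // Collapse man-page overstrike sequences: "X\bX" (bold) and "_\bX"
        // (underline) both reduce to the plain character X.
        std::string stripOverstrike(const std::string& in) {
            std::string out;
            for (std::size_t i = 0; i < in.size(); ++i) {
                if (i + 2 < in.size() && in[i + 1] == '\b') {
                    out += in[i + 2]; // keep the final character of the triple
                    i += 2;
                } else {
                    out += in[i];
                }
            }
            return out;
        }

        int main() {
            std::string line;
            while (std::getline(std::cin, line))
                std::cout << stripOverstrike(line) << '\n';
            return 0;
        }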

    Read the article

  • Sharepoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    Sharepoint 2007 MOSS with AD imported users. All servers are 2008.

    ***UPDATE: More details from testing. This Sharepoint is in an AD child domain (clients.mycompany.local), which is subordinate to the root of the AD tree (mycompany.local). The user is in the parent tree, as are half of the other functional users. I have elevated the user's rights to Domain. Looking at the logs, it seems that the Sharepoint server is trying to authenticate the user by querying the DC for the clients domain (which is the way it normally works, and still works for all existing, identically configured users). I think if I could force it to authenticate against the top domain's DC then it would be OK?

    I have around 50 users, and over the past 2 months a handful of them have suddenly become unable to log in to Sharepoint. When they log in, they either get a blank screen or they are re-prompted. These users are using accounts that have been used for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, Exchange, LDAP). In the most recent case, last night I was forced to delete a user ("John Smith") because of corruption. The original account name was jsmith. I deleted him from Active Directory, then deleted him from the profile list in Sharepoint Shared Services. I could not find a way to delete him from the Sharepoint user list, but I reran the import after recreating his account (renaming it too, just to be sure, to "smithj"). At first this did not work; the user could still access all other resources but Sharepoint. Then, some 30 minutes later, it inexplicably started working. This morning, the user changed passwords, which immediately broke the login on Sharepoint again. I am at a loss on how to troubleshoot this.

    Logs, by request from matt b:

        Source: Office SharePoint Server
        Date: 4/13/2010 2:00:00 PM
        Event ID: 7888
        Task Category: Office Server General
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: nb-portal-01.clients.netboundary.local
        Description: A runtime exception was detected. Details follow.
        Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
        Technical Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
            at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex)
            at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML)
            at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed)
            at Microsoft.SharePoint.SPField.Update()
            at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn)
            at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull()
            at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch()
            at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock)

        Log Name: Application
        Source: Office SharePoint Server
        Date: 4/13/2010 2:00:00 PM
        Event ID: 5553
        Task Category: User Profiles
        Level: Error
        Keywords: Classic
        User: N/A
        Computer: nb-portal-01.clients.netboundary.local
        Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)).

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD, LUKS encryption of the whole partition, one LVM physical volume on top of that, then home and root in ext4 logical volumes on top of that.

    When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both the home and root filesystems show similar problems. Checking a backup superblock doesn't help.

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory.  Clear? no
        Root inode has dtime set (probably due to old mke2fs).  Fix? no
        Inode 2 is in use, but has dtime set.  Fix? no
        Inode 2 has a extra size (4730) which is invalid  Fix? no
        Inode 2 has compression flag set on filesystem without compression support.  Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        HTREE directory inode 2 has an invalid root node.  Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
        Inode 3 has compression flag set on filesystem without compression support.  Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        ....

    Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, and neither does the kernel log. Running a write-mode badblocks across the swap LV doesn't find problems either. So the disk may be failing, but not in an obvious way.

    At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed the last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode of the SSD. What do you think would have caused this? Is there anything else you'd try?

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions to not use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read only to the clients. Because clients need to change that one member, they can't map the shared memory region as read only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed. I can try to stop clients from overwriting by declaring the members in the header they use as const, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate mapped write only page, while keeping the structs in read only mapped pages. This will work, the OS will force my application to crash if I try to write to the pointed to data, but indirect storage can be undesirable when trying to write lock free algorithms, because needing to follow another level of indirection can change whether something can be done atomically. Is there any way to mark smaller areas of memory such that writing them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86 and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)
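
    As a baseline for the page-granularity option discussed above, a sketch (Linux/POSIX assumed) of revoking write access to a mapped region so that any stray write faults immediately:

        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstdio>
        #include <cstring>

        int main() {
            const long page = sysconf(_SC_PAGESIZE);

            // Stand-in for the shared-memory region of fixed-size structs.
            void* mem = mmap(nullptr, page, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (mem == MAP_FAILED) return 1;
            char* region = static_cast<char*>(mem);
            std::strcpy(region, "struct data");

            // Revoke write access. Granularity is the whole page, which is
            // exactly the limitation in question: there is no portable way
            // to protect a span smaller than a page this way.
            mprotect(region, page, PROT_READ);

            std::printf("read ok: %s\n", region);
            region[0] = 'X'; // faults with SIGSEGV: the OS catches the stray write
            return 0;
        }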

    Read the article
