Search Results

Search found 612 results on 25 pages for 'corruption'.

Page 12/25

  • Journaled filesystems and power failure

    - by Yoga
    I heard that even journaled filesystems such as ext3/ext4 might become corrupted during a power failure. E.g., from Wikipedia [1]: "In the event of a system crash or power failure, such file systems are quicker to bring back online and less likely to become corrupted." Can anyone provide more detail, with examples of when corruption can occur and how it is avoided by journaled filesystems? [1] http://en.wikipedia.org/wiki/Journaling_file_system
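
    To make the journaling idea concrete, below is a minimal sketch in C of the write-ahead ordering a journaled filesystem enforces. The file names and on-disk layout are illustrative only, not ext3/ext4's actual journal format; the point is that the journal record is flushed before the data is touched, so a crash at any moment leaves either the old state or enough information for mount-time recovery to finish or discard the update.

        /* Sketch of write-ahead journaling; layout and names are illustrative,
           not any real filesystem's format. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void) {
            int journal = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
            int data    = open("data.db",     O_WRONLY | O_CREAT, 0644);
            if (journal < 0 || data < 0) { perror("open"); return 1; }

            /* 1. Record the intended change and flush it to disk. A crash after
               this point lets recovery replay (or discard) the pending update. */
            const char *intent = "update block 42 -> 'new contents'\n";
            write(journal, intent, strlen(intent));
            fsync(journal);

            /* 2. Only now modify the real data. A crash here leaves data.db
               inconsistent, but the journal record says how to repair it. */
            write(data, "new contents", 12);
            fsync(data);

            /* 3. Mark the transaction committed so recovery can skip it. */
            write(journal, "COMMIT\n", 7);
            fsync(journal);
            return 0;
        }

    Without the journal, a crash between two related data writes (say, a directory entry and the inode it points to) leaves no record of what was in flight, which is exactly where non-journaled filesystems become corrupted.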

  • Windows 2003 Server Caching

    - by pablomedok
    We're experiencing almost daily table index corruption on Windows Server 2003. We are running an old application which uses DBF/CDX tables. Everything was fine for ages, but six months after we installed Advantage Database Server (which allows access to some tables from our website) we started to get index corruption problems, and we don't know whom to blame. We've tried to exclude all possible causes of this corruption. All users now work in terminal mode, so no network problems can cause it, and OpLocks also can't be a reason. We changed hardware, network cards and switches, reinstalled the server, and even moved to a new dedicated server. The only thing we can't exclude is ADS, because it should be working.

    Is it possible that local read/write caching causes the problem? E.g. one user or process uses cached data, later another user/process changes it, and later the first user changes it again without knowing about the other change. Is that possible theoretically? Is it possible that this problem is caused by improper file server or caching settings? Is it possible that normal users use non-cached data while ADS uses cached data, or vice versa? Is it possible that each terminal user has its own cache? Or maybe the problem is RAID caching somehow interfering with Windows Server caching? Or maybe there are some special settings in Windows Server for working with DBF tables that are written simultaneously by several terminal users? Maybe there is a way to turn off caching for certain files to check it?

    Sometimes we get an index crash twice a day; sometimes everything is fine for five days in a row. Today only one user was working with the database in the evening (usually 30-50 users work simultaneously during working hours), so the load on the server was almost zero. Synchronization with the website is performed every 5 minutes during work hours and every 15 minutes in the evening and on weekends. We've done file access auditing, and it shows that during website synchronizations the ADS server opens the table and index files for ReadEA and WriteEA even though it performs only SELECT queries. ADS does run UPDATE/INSERT queries, but less frequently: not during regular synchronizations, only when an order is placed by a website visitor.

    Please help me. We have been struggling with this problem for almost a year and still can't find any pattern or clue. Here is my previous question about this issue on DBA: http://dba.stackexchange.com/questions/8646/foxpro-dbf-index-corruption

  • What mail storage should I choose for our web application; IMAP, key-value store, RDBMS, ...

    - by tvrtko
    I have to store e-mail messages for use with our application. I have "metadata" for all messages inside a relational database, but I don't feel comfortable keeping the message content (gigabytes and terabytes of email data) inside a database. I'm currently using IMAP as storage, but I have my doubts about whether I chose correctly. First of all there is the problem of UIDVALIDITY and how to keep a permanent reference to a message inside IMAP. Second, I'm not sure this is the most robust solution in terms of backup/restore strategies, store corruption, replication, and so on. The positive side is that I can query IMAP using the headers, because the data is mostly indexed. I don't know whether key-value stores (Cassandra, Tokyo Cabinet, Redis) are a better approach. How do they handle storing both 1 KB and 50 MB of data? How do they prevent corruption, and when corruption or device failure happens, how can I repair the store?

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database

    The last place I like to visit is always a hospital. With the monsoon season starting and intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting the way he quizzes me. The routine questions: "How many days have you had this?", "Is there any pattern?", "Did you get drenched in rain?", "Do you have any other symptom?" and so on. The idea here is that the doctor wants to find an anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution. SQL Server has its own way to find out whether the server data / files are in a consistent state: the DBCC commands.

    Back to SQL Server

    In real life, a database consistency check is one of the critical operations a DBA generally doesn't give much priority to. Many readers of my blogs have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out whether everything is right or not? My common answer to all of them is: look at the bottom of the CHECKDB (or CHECKTABLE) output and look for the line below.

    CHECKDB found 0 allocation errors and 0 consistency errors in database 'DatabaseName'.

    The above is a "good sign" because we are seeing zero allocation errors and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below:

    CHECKDB found 0 allocation errors and 2 consistency errors in database 'DatabaseName'.
    repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

    If we see a non-zero error then most of the time (not always) we get repair options, depending on the level of corruption. There is risk involved with the above option (repair_allow_data_loss): we could lose data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports there is one which can show the history of CHECKDB executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose the "Database Consistency History" report. The information in this report is picked from the default trace. If the default trace is disabled, or there has been no CHECKDB run, or the information is no longer in the default trace (because it has rolled over), the report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled.

    To demonstrate, I caused corruption in one of my databases and performed the steps below:

    - Run CHECKDB so that errors are reported.
    - Fix the corruption by losing the data, using the repair option.
    - Run CHECKDB again to check whether the corruption is cleared.

    After that I launched the report, and below is what we would see. If you are lazy like me and don't want to run the report manually for each database, the query below is handy for producing the same report for all databases. This query is run behind the scenes by the report; all I have done is remove the filter for the database name (commented out at the end of the query).
        DECLARE @curr_tracefilename VARCHAR(500);
        DECLARE @base_tracefilename VARCHAR(500);
        DECLARE @indx INT;

        SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

        SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36,
                   PATINDEX('%executed%', TEXTData) - 36) AS command,
               LoginName,
               StartTime,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%found%', TEXTData) + 6,
                   PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%repaired%', TEXTData) + 9,
                   PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%time:%', TEXTData) + 6,
                   PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%hours%', TEXTData) + 6,
                   PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':' +
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%minutes%', TEXTData) + 8,
                   PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
        FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
        WHERE EventClass = 22
          AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
          -- AND DatabaseName = @DatabaseName;

    Don't get worried about the logic above. All it is doing is reading the trace files and parsing the entry below, pulling out the underlined pieces of information:

    DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

    Hopefully from now on you will run CHECKDB and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it in your production environment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
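
    For reference, the command whose history this report tracks can also be run directly; a minimal example (the database name is a placeholder):

        -- 'DatabaseName' is a placeholder for your own database
        DBCC CHECKDB (N'DatabaseName') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    NO_INFOMSGS suppresses the per-table informational output, leaving only the allocation and consistency error lines discussed above.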

  • Reliable Storage Systems for SQL Server

    By validating the IO path before commissioning the production database system, and performing ongoing validation through page checksums and DBCC checks, you can hopefully avoid data corruption altogether, or at least nip it in the bud. If corruption occurs, then you have to take the right decisions fast to deal with it. Rod Colledge explains how a pessimistic mindset can be an advantage...

  • Heap Consistency Checking on Embedded System

    - by l.thee.a
    I get a crash like this:

        #0 0x2c58def0 in raise () from /lib/libpthread.so.0
        #1 0x2d9b8958 in abort () from /lib/libc.so.0
        #2 0x2d9b7e34 in __malloc_consolidate () from /lib/libc.so.0
        #3 0x2d9b6dc8 in malloc () from /lib/libc.so.0

    I guess it is a heap corruption issue. uclibc does not have mcheck/mprobe. Valgrind does not seem to have MIPS support, and my app (which is multi-threaded) depends on hardware-specific drivers. Any suggestions for checking the consistency of the heap and detecting corruption?
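
    Lacking mcheck and Valgrind, one common fallback is to wrap allocations with guard words (canaries) and verify them periodically, so corruption is caught close to where it happens rather than deep inside malloc's internals. A minimal sketch in C, assuming nothing beyond libc; xmalloc/xcheck are hypothetical names, not uclibc APIs:

        #include <assert.h>
        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        #define CANARY 0xDEADBEEFu
        /* layout: [size_t size][uint32_t canary][user data][uint32_t canary] */

        void *xmalloc(size_t n) {
            unsigned char *p = malloc(sizeof(size_t) + 2 * sizeof(uint32_t) + n);
            uint32_t c = CANARY;
            if (!p) return NULL;
            memcpy(p, &n, sizeof n);
            memcpy(p + sizeof(size_t), &c, sizeof c);
            memcpy(p + sizeof(size_t) + sizeof(uint32_t) + n, &c, sizeof c);
            return p + sizeof(size_t) + sizeof(uint32_t);
        }

        void xcheck(const void *user) {
            /* call from a free wrapper or a watchdog thread to catch
               buffer over- and underruns early */
            const unsigned char *p =
                (const unsigned char *)user - sizeof(uint32_t) - sizeof(size_t);
            size_t n;
            uint32_t head, tail;
            memcpy(&n, p, sizeof n);
            memcpy(&head, p + sizeof(size_t), sizeof head);
            memcpy(&tail, p + sizeof(size_t) + sizeof(uint32_t) + n, sizeof tail);
            assert(head == CANARY && tail == CANARY);
        }

    This adds a small per-allocation overhead but needs no kernel or toolchain support, which makes it workable on an embedded MIPS target.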

  • .NET and C# Exceptions. What is it reasonable to catch?

    - by djna
    Disclaimer: I'm from a Java background. I don't do much C#. There's a great deal of transfer between the two worlds, but of course there are differences, and one is in the way exceptions tend to be thought about. I recently answered a C# question suggesting that under some circumstances it's reasonable to do this (the reasons why are immaterial):

        try { /* some work */ } catch (Exception e) { commonExceptionHandler(); }

    I got a response that I don't quite understand:

    "until .NET 4.0, it's very bad to catch Exception. It means you catch various low-level fatal errors and so disguise bugs. It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed, so even if the callExceptionReporter function tries to log and quit, it may not even get to that point (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database)."

    Maybe I'm more confused than I realise, but I don't agree with some of that. Please would other folks comment. I understand that there are many low-level exceptions we don't want to swallow; my commonExceptionHandler() function could reasonably rethrow those. This seems consistent with this answer to a related question, which does say "Depending on your context it can be acceptable to use catch(...), providing the exception is re-thrown." So I conclude that using catch (Exception) is not always evil; silently swallowing certain exceptions is.

    The phrase "Until .NET 4 it is very bad to catch Exception": what changes in .NET 4? Is this a reference to AggregateException, which may give us some new things to do with exceptions we catch, but which I don't think changes the fundamental "don't swallow" rule?

    The next phrase really bothers me. Can this be right? "It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database)."

    My understanding is that if some low-level code had

        lowLevelMethod() {
            try {
                lowestLevelMethod();
            } finally {
                // some really important stuff
            }
        }

    and in my code I call lowLevel():

        try {
            lowLevel();
        } catch (Exception e) {
            // exception handling and maybe rethrowing
        }

    then whether or not I catch Exception has no effect whatever on the execution of the finally block. By the time we leave lowLevelMethod(), the finally has already run. If the finally is going to do any of the bad things, such as corrupt my disk, then it will do so; my catching the Exception makes no difference. If it reaches my Exception block I need to do the right thing, but I can't be the cause of mis-executing finally blocks.

  • When using SQL Compact on Windows Mobile, do you store the sdf file on a storage card?

    - by Michal Drozdowicz
    Having had some SQL Compact database corruption issues in the past and gone through the article on these, I got the idea that storing the database .sdf file on a storage card significantly increases the risk of data loss due to database corruption. Do you store the .sdf file on a storage card? Have you had any issues caused by it? What should I pay attention to when recommending a particular brand or model of SD card, with respect to stability and security, for use with SQL Compact?

  • Starting MySQL database server: mysqld . . . . . . . . . . . . . . failed!

    - by meder
    I restarted my VPS box (a manual hard restart) and ever since, MySQL fails to start for whatever reason. I did a tail of /var/log/syslog and I get this:

        Feb 20 11:49:33 kyrgyznews mysqld[11461]: ) ;InnoDB: End of page dump
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: 110220 11:49:33 InnoDB: Page checksum 1045788239, prior-to-4.0.14-form checksum 236985105
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: stored checksum 1178062585, prior-to-4.0.14-form stored checksum 236985105
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: Page lsn 0 10651, low 4 bytes of lsn at page end 10651
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: Page number (if stored to page already) 3,
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: Database page corruption on disk or a failed
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: file read of page 3.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: You may have to recover from a backup.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: It is also possible that your operating
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: system has corrupted its own file cache
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: and rebooting your computer removes the
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: error.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: If the corrupt page is an index page
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: you can also try to fix the corruption
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: by dumping, dropping, and reimporting
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: the corrupt table. You can use CHECK
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: TABLE to scan your table for corruption.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: See also InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: about forcing recovery.
        Feb 20 11:49:33 kyrgyznews mysqld[11461]: InnoDB: Ending processing because of a corrupt database page.
        Feb 20 11:49:33 kyrgyznews mysqld_safe[11469]: ended
        Feb 20 11:49:47 kyrgyznews /etc/init.d/mysql[12228]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        Feb 20 11:49:47 kyrgyznews /etc/init.d/mysql[12228]: ^G/usr/bin/mysqladmin: connect to server at 'localhost' failed
        Feb 20 11:49:47 kyrgyznews /etc/init.d/mysql[12228]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Feb 20 11:49:47 kyrgyznews /etc/init.d/mysql[12228]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
        Feb 20 11:49:56 kyrgyznews mysqld_safe[13437]: started
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: The log sequence number in ibdata files does not match
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: the log sequence number in the ib_logfiles!
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: 110220 11:49:56 InnoDB: Database was not shut down normally!
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: Starting crash recovery.
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: Reading tablespace information from the .ibd files...
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: Restoring possible half-written data pages from the doublewrite
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: buffer...
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: Database page corruption on disk or a failed
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: file read of page 3.
        Feb 20 11:49:56 kyrgyznews mysqld[13440]: InnoDB: You may have to recover from a backup.

    I have looked at the page it references, http://dev.mysql.com/doc/refman/5.0/en/forcing-innodb-recovery.html, but before messing with any settings I was wondering what experienced DBAs would suggest doing. Is there any harm in forcing the recovery? PS - I did not make any updates to MySQL. The version is mysql Ver 14.12 Distrib 5.0.51a, for debian-linux-gnu (i486) using readline 5.2.
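
    For what it's worth, the forced-recovery mode that manual page describes is enabled from my.cnf; this is only a sketch of the documented innodb_force_recovery setting, and the raw data directory should be backed up first, since levels above 4 can lose data permanently:

        # /etc/mysql/my.cnf -- temporary, for salvage only
        [mysqld]
        # start at 1 and raise one level at a time until mysqld starts;
        # then dump the databases, rebuild them, and remove this line
        innodb_force_recovery = 1

    Once the server starts, mysqldump the affected databases, drop and re-import them, and remove the option again before returning the server to normal use.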

  • What Counts For a DBA – Decisions

    - by Louis Davidson
    It's Friday afternoon, and the lead DBA, a very talented guy, is getting ready to head out for two well-earned weeks of vacation with his family, when this error message pops up in his inbox: Msg 211, Level 23, State 51, Line 1. Possible schema corruption. Run DBCC CHECKCATALOG. His heart sinks. It's ten... no, eight... minutes till it's time to walk out the door. He glances around at his coworkers, competent to handle many problems, but probably not up to the challenge of fixing possible database corruption. What does he do? After a few agonizing moments of indecision, he clicks shut his laptop. He'll just wait and see. It was unlikely to come to anything; after all, it did say "possible" schema corruption, not definite.

    In that moment, his fate was sealed. The start of the solution to the problem (run DBCC CHECKCATALOG) had been right there in the error message. Had he done this, or at least taken two of those eight minutes to delegate the task to a coworker, then he wouldn't have ended up spending two-thirds of an idyllic vacation (for the rest of the family, at least) dealing with a problem that got consistently worse as the weekend progressed, until the entire system was down.

    When I told this story to a friend of mine, an opera fan, he smiled and said it described the basic plotline of almost every opera or 'Greek Tragedy' ever written. The particular joy in opera, he told me, isn't the warbly voiced leading ladies, or the plump middle-aged romantic leads, or even the music. No, what packs the opera houses in Italy is the drama of characters who, by the very nature of their life-experiences and emotional baggage, make all sorts of bad choices when faced with ordinary decisions, and so move inexorably to their fate. The audience is gripped by the spectacle of exotic characters doomed by their inability to see the obvious. I confess, my personal experience with opera is limited to Bugs Bunny in "What's Opera, Doc?" (Elmer Fudd is a great example of a bad decision maker, if ever one existed), but I was struck by my friend's analogy. If all the DBA cubicles were a stage, I think we would hear many similarly tragic tales, played out to music: "Error handling? We write our code to never experience errors, so nah..." "Backups failed today, but it's okay, we'll back up tomorrow (we'll back up tomorrow)". And similarly, they would leave their audience gasping, not necessarily at the beauty of the music, or the poetry of the lyrics, but at the inevitable, grisly fate of the protagonists.

    If you choose not to use proper error handling, or if you choose to skip a backup because, hey, you haven't had a server crash in 10 years, then inevitably, in that moment you expected to be enjoying a vacation, or a football game, with your family and friends, you will instead be sitting in front of a computer screen, paying for your poor choices. Tragedies are very much part of IT. Most of a DBA's day-to-day work has limited potential to wreak havoc: paperwork, timesheets, random anonymous threats to developers, routine maintenance and whatnot. However, just occasionally, you, as a DBA, will face one of those decisions that really matter, and which have the possibility to greatly affect your future and the future of your users' data. Make those decisions count, and you'll avoid the tragic fate of many an operatic hero or villain.

  • Reading excel files with xlrd

    - by snurre
    I'm having problems reading .xls files written by a Perl script which I have no control over. The files contain some formatting and line breaks within cells.

        import xlrd

        filename = '/home/shared/testfile.xls'
        book = xlrd.open_workbook(filename)
        sheet = book.sheet_by_index(0)
        for rowIndex in xrange(1, sheet.nrows):
            row = sheet.row(rowIndex)

    This is throwing the following error:

        _locate_stream(Workbook): seen
        0 5 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 20 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
        172480= 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
        172500 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 3 2
        172520 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
        173840= 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
        173860 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1
        173880 1 1 1 1 1 1 1 1
        Traceback (most recent call last):
          File "/home/shared/xlrdtest.py", line 5, in <module>
            book = xlrd.open_workbook(filename)
          File "/usr/local/lib/python2.7/site-packages/xlrd/__init__.py", line 443, in open_workbook
            ragged_rows=ragged_rows,
          File "/usr/local/lib/python2.7/site-packages/xlrd/book.py", line 84, in open_workbook_xls
            ragged_rows=ragged_rows,
          File "/usr/local/lib/python2.7/site-packages/xlrd/book.py", line 616, in biff2_8_load
            self.mem, self.base, self.stream_len = cd.locate_named_stream(qname)
          File "/usr/local/lib/python2.7/site-packages/xlrd/compdoc.py", line 393, in locate_named_stream
            d.tot_size, qname, d.DID+6)
          File "/usr/local/lib/python2.7/site-packages/xlrd/compdoc.py", line 421, in _locate_stream
            raise CompDocError("%s corruption: seen[%d] == %d" % (qname, s, self.seen[s]))
        xlrd.compdoc.CompDocError: Workbook corruption: seen[2] == 4

    I'm not able to find any info about CompDocError or Workbook corruption, let alone the seen[2] == 4 part.

  • Lost in Translation

    - by antony.reynolds
    Using the Correct Character Set for the SOA Suite Database

    A couple of years ago I spent a wonderful week in Tel Aviv helping with the first Oracle BAM implementation in Israel. Although everyone I interacted with spoke better English than I did, the screens and data for the implementation were all in Hebrew, meaning the Hebrew alphabet. Over the week I learnt to recognize a few Hebrew words, enough to enable me to test what we were doing. So I knew SOA Suite worked fine with non-English and non-Latin character sets, and I was therefore suspicious recently when a customer was seeing data corruption of non-Latin characters. On investigation it turned out that the data was received correctly by the SOA Suite, but was then corrupted after being stored in the database.

    A little investigation revealed that the customer was using the default database character set, "WE8ISO8859P1", which, as the name suggests, only supports Western European 8-bit characters. What had happened was that when the customer installed his SOA repository, he ignored the message that his database was not using AL32UTF8 as its character set. After changing the character set on his database, he no longer saw the corruption of non-English character data.

    So the moral of this story is: always install the SOA Repository into an AL32UTF8 database. This is true for both SOA Suite 10g and 11g. Ignore it at your peril, because you never know when you will need to support Hebrew, or Japanese, or another multi-byte character set.
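
    As a quick sanity check before (or after) installing the repository, the database character set can be read from the data dictionary; a minimal query against the standard Oracle NLS_DATABASE_PARAMETERS view:

        -- returns e.g. AL32UTF8 or WE8ISO8859P1
        SELECT value
          FROM nls_database_parameters
         WHERE parameter = 'NLS_CHARACTERSET';

    If this returns anything other than AL32UTF8, multi-byte data stored in character columns is at risk of exactly the kind of silent mangling described above.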

  • Upcoming: Oracle Advanced Benefits Advisor Webcasts Announced

    - by user793553
    Oracle Support is pleased to announce a new webcast covering the Open Enrollment functionality in Oracle Advanced Benefits. The webcast is repeated on three different dates, in order to make attendance easier whatever timezone you operate in. These one-hour sessions are recommended for technical and functional users who will be having an Open Enrollment cycle in the next 12 months. The session will review the best proactive practices recommended by Oracle Support, regardless of when your Open Enrollment takes place. It will review planning, patching, data corruption and critical checklists.

    Topics will include:

    - Planning ahead for Open Enrollment testing
    - Required patches
    - Test performance
    - Avoiding major patching/updates
    - Data corruption issues

    A short live demonstration (only if applicable) and a question and answer period will be included. Below is the schedule for the webcasts; the same can be found in the My Oracle Support document "Advisor Webcast Current Schedule" (Doc ID 740966.1). Please follow the links to register for your chosen session.

    - Best Benefits Practices for Open Enrollment Session 3 | Doc ID 1489318.1 | October 17, 2012 at 16:00 US EST
    - Best Benefits Practices for Open Enrollment Session 4 | Doc ID 1489319.1 | October 31, 2012 at 16:00 US EST
    - Product Enhancements in R12.1.3 RUP 5 Session 2 | Doc ID 1489320.1 | November 07, 2012 at 16:00 US EST

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work? [closed]

    - by themoondothshine
    I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context: I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. An application is linked against libsome1.so. This application uses libdl.so to dynamically load another module, say libmagic.so. Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION.

    So next I try to compile and link libmagic.so with a linker script which hides all symbols except three which are defined in libmagic.so and are exported by it. This works... or at least the libVersion() and LIB_VERSION values match (and it reports version 2, not 1). However, when some data structures are serialized to disk, I noticed some corruption. In the application's directory, if I delete libsome1.so and create a soft link in its place pointing to libsome2.so, everything works as expected and the same corruption does not happen.

    I can't help but think that this may be caused by some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are aliased to symbol@@VER_2 (which I am still confused about, because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2), but nothing seems to work. What am I doing wrong?
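
    For reference, a version script of the kind described might look like the sketch below; the exported names are invented for illustration, and VER_2 is just a label:

        /* libmagic.map -- GNU ld version script; names are hypothetical */
        VER_2 {
            global:
                magic_init;      /* the three symbols libmagic.so really exports */
                magic_process;
                libVersion;
            local:
                *;               /* hide everything else, including anything
                                    pulled in from libsome2.so */
        };

    It would typically be applied at link time with something like gcc -shared -Wl,--version-script=libmagic.map -o libmagic.so ..., after which nm -D should show those three names as the only defined globals.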

  • Git Clone from SSH Repository

    - by Mike Silvis
    I used to be able to clone from my personal git repository, but now I seem to be running into an error:

        user:dev.site.com mikesilvis$ git clone { my ssh directory }
        server@ipaddress's password:
        remote: Counting objects: 3622, done.
        remote: Compressing objects: 100% (2718/2718), done.
        error: git upload-pack: git-pack-objects died with error.
        fatal: git upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    Pushing files to the repository still works, however.

  • systemctl (Fedora 17) and interacting with spawned processes' consoles

    - by Sean
    Introduction

    I've recently upgraded to Fedora 17 and I'm getting used to the newer systemctl daemon manager versus shell init scripts. A feature I need for some of my daemons is the ability to interact with their consoles, because unclean shutdowns not initiated by the process itself can cause database corruption. So, performing a systemctl stop service-name.service, for example, might cause irreversible data loss. These consoles read user input through stdin or similar methods, so what I've been doing on my old OS is to run those daemons in the foreground in a screen session, which I suspend with ^A ^z. I've now made systemctl do this automatically when the computer reboots, but that still doesn't solve the potential data corruption problem I'm trying to avoid.

    My Question

    Is there a way to use systemctl to directly interact with the console of the processes it spawns? Can I hook a process through systemctl to get access to its console?

    Thanks

    You guys always give great answers, so I'm turning to you!
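
    systemd does not itself expose a unit's console, but one workaround in the spirit of the screen setup described above is to let the unit own the screen session and have ExecStop type the daemon's own shutdown command into it. A sketch only; the unit, session and daemon names, and the "shutdown" console command, are all hypothetical:

        # /etc/systemd/system/mydaemon.service -- illustrative sketch
        [Unit]
        Description=Daemon with an interactive console, wrapped in screen

        [Service]
        Type=forking
        ExecStart=/usr/bin/screen -dmS mydaemon /usr/local/bin/mydaemon
        # "stuff" types the string into the session; \015 is the Enter key
        ExecStop=/usr/bin/screen -S mydaemon -X stuff "shutdown\015"

        [Install]
        WantedBy=multi-user.target

    The console stays reachable with screen -r mydaemon, and systemctl stop then goes through the daemon's own clean shutdown path instead of a signal; depending on how fast the daemon exits, TimeoutStopSec and KillMode may also need tuning.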

  • Maximum executable size on the iPhone?

    - by Sirp
    Is there a maximum size of executables, or executables plus shared objects, on the iPhone? I've been developing an app that has been crashing on startup or early in execution with SIGSYS. Removing code from the program has helped, though structuring data so the code is simply not executed does not. This could be memory corruption of some kind; however, when I compile with -Os rather than -O2 or -O3, the size of my executable goes down from 5.15 MB to 3.60 MB and the application runs perfectly. I also use a bunch of libraries, of course. I'm wondering: is there a limit on the size of executable code on the iPhone? Or am I just 'getting lucky' and masking memory corruption when I use -Os?

  • Attention users running SQL Server 2008 & 2008 R2!

    - by AaronBertrand
    In April and May, Microsoft released cumulative updates for SQL Server 2008 and 2008 R2 (I blogged about them here and here). They are:

    - CU #11 for 2008 SP3 (10.00.5840) (KB #2834048)
    - CU #12 for 2008 R2 SP1 (10.50.2874) (KB #2828727)
    - CU #6 for 2008 R2 SP2 (10.50.4279) (KB #2830140)

    Sometime after that, looks like the next day, both downloads were pulled, allegedly due to an index corruption issue (if you believe the commentary on the Release Services blog post for CU #6) or due to an issue...

  • SQL SERVER – SQL Server High Availability Options – Notes from the Field #032

    - by Pinal Dave
    [Notes from Pinal]: When it comes to High Availability or Disaster Recovery, I often see people getting confused. There are so many options available that when users have to select the most optimal solution for their organization, they are often confused. Most people even know the salient features of the various options, but when they have to settle on one single option to use, they are often not sure which one. I like to ask my dear friend Tim all these kinds of complicated questions. He has a skill for making a complex subject very simple and easy to understand. Linchpin People are database coaches and wellness experts for a data-driven world. In this 26th episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains in very simple words the best High Availability option for your SQL Server.

    Working with SQL Server, a common challenge we are faced with is providing the maximum uptime possible. To meet these demands we have to design a solution to provide High Availability (HA). Depending on your edition, Microsoft SQL Server provides you with several options. These could be database mirroring, log shipping, failover clusters, availability groups or replication. Each possible solution comes with pros and cons. No single solution fits all scenarios, so understanding which solution meets which need is important. As with anything IT-related, you need to fully understand your requirements before trying to solve the problem.

    When it comes to building an HA solution, you need to understand the risk your organization needs to mitigate the most. I have found that most are concerned about hardware failure and OS failures. Other common concerns are data corruption or storage issues. For data corruption or storage issues, you can mitigate those concerns by having a second copy of the databases. That can be accomplished with database mirroring, log shipping, replication, or availability groups with a secondary replica. Failover clustering and virtualization with shared storage do not provide redundancy of the data.

    I recently created a chart outlining some pros and cons of each of the technologies, which I posted on my blog. I like to use this chart to help illustrate how each technology provides a certain set of benefits. Each of these solutions carries with it some level of cost and complexity. As database professionals we should all be familiar with these technologies so we can make the best possible choice for our organization.

    If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL Backup and Recovery, a must-have for everyone.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

  • June 2013 Cumulative Updates for SQL Server 2008 R2

    - by AaronBertrand
    Well, surely at least partly in response to the CU6 mess I reported earlier today, and partly because they were due, Microsoft has released new cumulative updates that contain - among other things - updated code that avoids the symptom introduced with earlier updates (though this regression fix doesn't seem to appear in the KB articles - unless by "corruption" they meant the ridiculous size increase).

    - SQL Server 2008 R2 Service Pack 1 Cumulative Update #13, KB Article: KB #2855792, 5 fixes listed at...

  • Personal Technology – Laptop Screen Blank – No POST – No BIOS – No Boot

    - by Pinal Dave
    If your laptop screen is blank and there is no POST, BIOS or boot, you can follow the steps below, and there is a good chance they will work if there is no hardware failure inside.

    Step 1: Remove the power cord from the laptop
    Step 2: Remove the battery from the laptop
    Step 3: Hold the power button down for almost 60 seconds
    Step 4: Plug the power back into the laptop
    Step 5: Start the computer and it should start normally
    Step 6: Now shut down
    Step 7: Insert the battery back into the laptop
    Step 8: Start the laptop again and it should work

    Note 1: If your laptop does not work after inserting the battery back, remove the battery and repeat the above process, but do not insert the battery again, as it is malfunctioning.
    Note 2: If your screen is faulty or you have issues with your hardware (motherboard, screen or anything else), this method will not fix your computer.

    For those who care about how I came up with this not-SQL-related blog post, here is the very funny true story. If you are a married man, you will know what I am going to describe next. Maybe you have faced the same situation, or at least you feel and understand my situation. My wife's computer suddenly stopped working while she was searching for my daughter's mathematics worksheets online. While the fatal accident happened with my wife's computer (which was my loyal computer for over 4 years before she got it), I was working in my home office, fixing a high-priority issue (a live orders database was corrupted) for one of the largest eCommerce websites. While I was working on the production server, fixing the database corruption, my wife ran to my home office. Here is how the conversation went:

    Wife: This computer does not work.
    I: Restart it.
    Wife: It does not start.
    I: What did you do with it?
    Wife: Nothing, it just stopped working.
    I: Okay, I will look into it later; I am working on a very urgent issue.
    Wife: I was printing my daughter's worksheet.
    I: Hmm... okay.
    Wife: It was the mathematics worksheet which you promised you would teach but never got around to, so I am doing it myself.
    I: Thanks. I appreciate it. I am very busy with this issue, as million-dollar transactions are not happening because the database got corrupted and...
    Wife: So what... umm... you mean to say that you care about this customer more than your daughter. You know she got an A+ in every other class, but in mathematics she got only an A. She missed that extra-credit question.
    I: She is only 4, it is okay.
    Wife: She is 4.5 years old, not 4. So you are not going to fix this computer which does not start at all? I think our daughter will get even lower grades next time, as her dad is busy fixing something.
    I: Alright, I give up, bring me that computer.

    Our daughter, who had been listening to everything so far, finally decided to speak up.

    Daughter: Dad, it is a laptop, not a computer.
    I: Yes, sweetie, get that laptop here, and your dad is going to fix this small issue, and the million-dollar issue later on.

    I decided to pay attention to my wife's computer. She was right. No matter what I did, it would not boot up, it would not start: no BIOS, no POST screen. The computer started for a second, but nothing came up on the screen. The light indicating the hard drive came on for a second and went off. Nothing happened. I removed every single USB drive from the laptop, but it still would not start. It was indeed no fun for me. Finally I remembered my days when I was not married and studied at the University of Southern California, Los Angeles. I used to have a very old second-hand (or maybe third- or fourth-hand) computer with me. In polite words, I had a pre-owned computer, and it used to face very similar issues again and again. I had a small routine I used to follow to fix my old computer, and I decided to follow the same steps again with this laptop: the eight steps listed at the top of this post.

    Once I followed the above process, her computer worked. I was delighted that I could now go back to solving the problem where millions of transactions were waiting while I fixed the corrupted database, which was in emergency mode. Once I fixed the computer, I looked at my wife and asked:

    I: Well, now that this laptop is back online, can I get a guarantee that she will get an A+ in mathematics in this week's quiz?
    Wife: Sure, I promise.
    I: Fantastic.

    After saying that, I started to look at my database corruption, and my wife interrupted me again.

    Wife: By the way, I forgot to tell you. Our daughter got an A in mathematics last week, but she had another quiz today and she has already received an A+ there. I kept my promise.

    I looked at her as she started to walk out of the room, and before I could say anything, my phone rang. The DBA from the eCommerce company had called me, wondering why there had been no activity from my side in the last 10 minutes.

    DBA: Hey bud, are you still connected? I see, um... no activity in the last 10 minutes.
    I: Oh, well, I was just saving the world. I am back now.

    After two hours I had fixed the database corruption and everything was normal. I was outsmarted by my wife, but honestly I still respect and love her the same, as she is the one who spends countless hours with our daughter so that she does not miss me, and I can continue writing blogs and doing technology evangelism.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

  • Using ext4 in a VMware machine

    First of all, using a journaling filesystem like NTFS, ext4, XFS, or JFS (not to name all of them) is a very good idea, and nowadays it is unthinkable not to. Linux offers a good variety of different options as journaling filesystems for your system. For years I have been using SGI's XFS, and I am pretty confident in the stability, performance and reliability of the system. In earlier years I had to struggle with incompatibilities between XFS and the boot loader; using an ext2-formatted /boot solved this issue. But, wow, that is ages ago!

    Lately, I had to set up a fresh Lucid Lynx (Ubuntu 10.04 LTS) system for a change of our internal groupware / messaging system. So I fired up a new virtual machine with an almost standard configuration in VMware Server and ran through our network-based PXE boot and installation procedure. At a certain step in this process, Ubuntu asks you about the partitioning of your hard drive(s). Honestly, I have to say that only out of curiosity did I stick to the "default" suggestion and put my faith and trust in the Ubuntu installation routine... resulting in an ext4-based root mount point (/). The rest of the installation went on without further concerns or worries. Note: I really can't remember why I chose to go away from my favourite... Well, it would turn out to be the wrong decision after all.

    OK, let's continue the story about ext4 in a VMware-based virtual machine. After some hours installing additional packages and configuring the new system using LDAP for general authentication and login, I had an "out-of-the-box" usable enterprise messaging system based on Zarafa 6.40 Community Edition, including a proper SSL-based Webaccess interface and the Z-Push extension for ActiveSync with my Nokia mobile. Straightforward, and pretty nice for the time spent on the setup. Having priority on other tasks, I just let the system run and didn't pay any further attention to it. Until I ran into an upgrade of "Mail for Exchange" on Symbian OS. My mobile did not bother me at all with the upgrade and everything went smoothly, but trying to re-establish the ActiveSync connection to the Zarafa messaging system resulted in a frustrating situation. So, I shifted my focus back to the Linux system and was amazed to find that the root filesystem had been remounted read-only due to hard drive failures, or at least errors reported by ext4. Firing up Google only confirmed my concerns: using ext4 for VMware-based virtual machines does not look like a stable and reliable choice to me. You might consider reading these external resources:

    - ext4 fs corruption under VMWare Server 2.01
    - Bug #389555 - ext4 filesystem corruption

    Well, I learned my lesson, and ext{2|3|4}-based filesystems are not going to be used on any of my Linux systems or customer installations in the future.

    Addendum: I did not try this setup in other virtualization environments like VirtualBox, qemu, kvm, Xen, etc.

  • nvidia 331.89: Super + W causes black window

    - by Heihachi
    When I hit Super + W and choose one of the active windows, it opens. But when I then maximize another minimized window, I get a blank black window; if I minimize and then maximize it again, it works. I tried the 340.24 drivers, but they are totally unusable because of interface corruption in Firefox, Thunderbird and LibreOffice. 331.89 seems stable except for this annoying black window. I am using a GTX 750 Ti.
