Search Results

Search found 4432 results on 178 pages for 'fail'.

Page 81/178 | < Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >

  • What partition to use to keep data files in Ubuntu?

    - by Martin Lee
    I have been using Ubuntu for a few years and usually my partition setup was the following: an Ext3 or Ext4 partition for the system itself (20 GB); a 10 GB swap partition; a big FAT32 partition to store movies, photos, work stuff, etc. (it depends on the capacity of the disk, but usually it is whatever is left after Ext3+swap; currently it is more than 200 GB). Does this setup sound right? I am considering switching to one big Ext3 partition now, because the problems with FAT32 in Ubuntu have not gone anywhere. For example, right now I can access my 'big' partition with a 'Data' label only through /media/_themes?END. Pretty strange name for a partition, isn't it? Some Linux software fails to read/write on this partition: for example, if I want to play around with rebar and build/make/compile things on this FAT32 partition, it will always complain about permissions and won't work (the same goes for many other kinds of software). It is also not stable: I cannot refer to some files on this FAT32 partition, because after the next reboot it will be called not '_themes?END' but something else. On the other side, I usually begin to run out of space on the Ext3 partition after a few months of usage. So, the question is: what is the best partition setup for an Ubuntu system? Should a FAT32 partition be used at all?
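    A hedged aside on the permissions complaint: FAT32 has no Unix permissions at all, so ownership and mode come entirely from the mount options, and a fixed /etc/fstab entry also gives the partition a stable mount point across reboots. A minimal sketch (the UUID placeholder and /media/data mount point are illustrative, not from the question):

        # /etc/fstab: mount the FAT32 data partition at a fixed path,
        # owned by the first user (uid/gid 1000), world-readable
        UUID=XXXX-XXXX  /media/data  vfat  uid=1000,gid=1000,umask=022,utf8  0  0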

    Read the article

  • Is it common to lie in job ads regarding the technologies in use?

    - by Desolate Planet
    Wanted: Experienced Delphi programmer to maintain ginormous legacy application and assist in migration to C# Later on, as the new hire settles into his role... "Oh, that C# migration? Yeah, we'd love to do that. But management is dead-set against it. Good thing you love Pascal, eh?" I've noticed quite a lot of this where I live (Scotland) and I'm not sure how common this is across IT: a company is using a legacy technology and they know that most developers will avoid them to keep mainstream technology on their resumes. So, they will put out an advertisement saying they are looking to move their product to some hip new tech (C#, Ruby, FORTRAN 99) and require someone who has exposure to both - but the migration is just a carrot on a stick, perpetually hung in front of the hungry developer as he spends each day maintaining the legacy app. I've experienced this myself, and heard far too many similar stories, to the point where it seems like common practice. I've learned over time that every company has legacy problems of some sort, but I fail to see why they can't be honest about it. It should be common sense to any developer that the technology in place is there to support the business and not the other way round. Unless the technology is hurting the business in some way, I hardly see any just cause for reworking the software stack to be made up of whatever is currently in vogue in the industry. Would you say that this is commonplace? If so, how can I detect these kinds of misleading advertisements beforehand?

    Read the article

  • OpenJDK 6 B24 Available

    - by user9158633
    On November 16, 2011 the source bundle for OpenJDK 6 b24 was published at http://download.java.net/openjdk/jdk6/. The main changes in b24 are the latest round of security updates (e.g. the security changes in the jdk repo) and a few other fixes. For more information see the detailed list of all the changes in OpenJDK 6 B24. Test results: all the jdk regression tests run with "make test" passed:

        cd jdk6
        make
        make test

    Per Kelly's B23 release blog: the new process is that all the jdk regression tests run with "make test" should just pass. Over time we will fix the tests that have been excluded, possibly add more tests, and exclude tests that fail to demonstrate stability (with a bug filed against the test). For the current list of excluded tests see the jdk6/jdk/test/ProblemList.txt file: ProblemList.html in B24 | Latest ProblemList.txt (in the tip revision). Special thanks to Kelly O'Hair for his direction and Dave Katleman for his Release Engineering work.

    Read the article

  • dpkg error when using apt-get install

    - by V-T
    I upgraded to Ubuntu 14.04 from 12.04, and every time I use apt-get install for any package, it ends with a bunch of errors about processing some of my LaTeX packages. A snippet:

        Sometimes, not accepting conffile updates in /etc/texmf/updmap.d
        causes updmap-sys to fail. Please check for files with extension
        .dpkg-dist or .ucf-dist in this directory
        dpkg: error processing package tex-common (--configure):
         subprocess installed post-installation script returned error exit status 1
        dpkg: dependency problems prevent configuration of lmodern:
         lmodern depends on tex-common (>= 3); however:
          Package tex-common is not configured yet.

    This is reproduced by running sudo dpkg --configure -a, and the full list of packages with this error is: tex-common texlive-publishers tex-gyre texlive-latex-extra-doc texlive-fonts-extra-doc texlive-lang-english texlive-luatex texlive-generic-recommended texlive-pstricks-doc texlive-fonts-recommended latex2html latex-xcolor texlive-pictures texlive-fonts-extra texlive-pictures-doc asymptote texlive-bibtex-extra texlive-latex-recommended-doc texlive-latex-recommended doxygen-latex texlive-pstricks tipa texlive-latex-base texlive-fonts-recommended-doc latex-beamer texlive-font-utils texlive-latex-base-doc texlive-latex-extra texlive-extra-utils texlive texlive-publishers-doc lmodern. Any ideas on how to fix this?
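    A hedged way to act on the hint in that error message (the paths come from the message itself; nothing below is from an accepted answer):

        # list the unmerged conffile leftovers the error message mentions
        ls /etc/texmf/updmap.d/*.dpkg-dist /etc/texmf/updmap.d/*.ucf-dist 2>/dev/null
        # after reviewing and merging or deleting them, retry the configuration
        sudo dpkg --configure -a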

    Read the article

  • How much effort should you put into a junior developer?

    - by Crazy Eddie
    At what point should one give up? I've tried helping them out by having them shadow me. We agree to take a minute's break, and then they go missing in action for a while... then just go back to their desk. Even when I know they've done this, part of me feels like I shouldn't have to go get them, but that they should be showing interest in learning. Frankly, it's a bunch of time I don't have, explaining things as I go when I could just do it. Am I expecting too much to expect that if they want to learn they'll make sure I know they're ready and willing? They go to meetings that they were not told they had to attend (good), but then sit in the corner and sleep (bad). I don't even know what to do with that. Sometimes I give them something small to do and they do it great, so I give them something just a touch harder and they totally fail, hard. They check in things without testing them. Part of me thinks that maybe I should be spending more time with them, but at the same time I don't see a lot of interest, and I really, honestly don't have time to teach the same things over and over. Sometimes I get asked questions that are really, really easy to answer if you just do a little bit of your own work trying to find out. Other times I'm not asked anything. I'm sure I could be doing better, but honestly... I don't really want to anymore.

    Read the article

  • Cleaning your BizTalk Build Server

    - by Michael Stephenson
    Just a little note for myself, this one. At one of my customers, where it is still BizTalk 2006, one of the build servers is intermittently getting issues, so I wanted to run a script periodically to clean things up a little. The script below is an example of how you can stop Cruise Control and all of the BizTalk services, then clean the BizTalk databases, reset the backup process, and then kick everything off again. This should keep the server a little cleaner and reduce the number of builds that occasionally fail for ad hoc environmental issues.

        REM Server Clean Script
        REM ===================
        REM This script is run to move the build server back to a clean state

        echo Stop Cruise Control
        net stop CCService

        echo Stop IIS
        iisreset /stop

        echo Stop BizTalk Services
        net stop BTSSvc$<Name of BizTalk Host>
        <Repeat for other BizTalk services>

        echo Stop SSO
        net stop ENTSSO

        echo Stop SQL Job Agent
        net stop SQLSERVERAGENT

        echo Clean Message Box
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_CleanupMsgbox"
        sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_PurgeSubscriptions"

        echo Clean Tracking Database
        sqlcmd -E -d BizTalkDTADb -Q "Exec dtasp_CleanHMData"

        echo Reset TDDS Stream Status
        sqlcmd -E -d BizTalkDTADb -Q "Update TDDS_StreamStatus Set lastSeqNum = 0"

        echo Force Full Backup
        sqlcmd -E -d BizTalkMgmtDB -Q "Exec sp_ForceFullBackup"

        echo Clean Backup Directory
        del E:\BtsBackups\*.* /q

        echo Start SSO
        net start ENTSSO

        echo Start SQL Job Agent
        net start SQLSERVERAGENT

        echo Start BizTalk Services
        net start BTSSvc$<Name of BizTalk Host>
        <Repeat for other BizTalk services>

        echo Start IIS
        iisreset /start

        echo Start Cruise Control
        net start CCService

    Read the article

  • Should a stack trace be in the error message presented to the user?

    - by Vilx-
    I've got a bit of an argument at my workplace and I'm trying to figure out who is right, and what is the right thing to do. Context: an intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice "An error has occurred, please submit the information below to the developers" in large, friendly letters. My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get hold of the client's systems administrator(s), attempt to explain where your log files are, etc, but that will probably be slow and painful (talking to the client representatives mostly is). Also, having immediate and full information is extremely useful in development, where you don't have to go hunting through the log files to find what you need on every exception. (But that could be solved with a configuration switch.) Unfortunately there has been some kind of "security audit" (no idea how they did that without the sources... but whatever), and they complained about the full exception messages, citing them as a security threat. Naturally, the clients (at least one that I know of) have taken this at face value and now demand that the messages be cleaned. I fail to see how a potential attacker could use a stack trace to figure out anything he couldn't have figured out before. Are there any examples, any documented proof of anyone ever doing that? I think that we should fight this foolish idea, but perhaps I'm the fool here, so... Who's right?
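    A common middle ground in this debate is to keep the full stack trace server-side and show the user only a correlation id they can quote to support. A minimal C# sketch (all names are illustrative, not from the poster's application):

        using System;

        static class ErrorReporting
        {
            // Log the full exception (including stack trace) server-side,
            // and hand the user only a short reference id.
            public static string Report(Exception ex, Action<string> logToServer)
            {
                var errorId = Guid.NewGuid().ToString("N").Substring(0, 8);
                logToServer($"[{errorId}] {ex}");  // ex.ToString() includes the trace
                return $"An error has occurred. Please quote reference {errorId} to support.";
            }
        }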

    Read the article

  • Banshee crashes consistently - is there a fix?

    - by user36334
    Since updating to Ubuntu 11.10 I've had trouble with Banshee. In particular, when I run it, it crashes within an hour without fail. I get the following:

        Unhandled Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.NullReferenceException: Object reference not set to an instance of an object
          at Mono.Zeroconf.Providers.AvahiDBus.BrowseService.DisposeResolver () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.BrowseService.Dispose () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.ServiceBrowser.OnItemRemove (Int32 interface, Protocol protocol, System.String name, System.String type, System.String domain, LookupResultFlags flags) [0x00000] in <filename unknown>:0
          at (wrapper managed-to-native) System.Reflection.MonoMethod:InternalInvoke (System.Reflection.MonoMethod,object,object[],System.Exception&)
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
          --- End of inner exception stack trace ---
          at System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) [0x00000] in <filename unknown>:0
          at System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) [0x00000] in <filename unknown>:0
          at System.Delegate.DynamicInvokeImpl (System.Object[] args) [0x00000] in <filename unknown>:0
          at System.MulticastDelegate.DynamicInvokeImpl (System.Object[] args) [0x00000] in <filename unknown>:0
          at System.Delegate.DynamicInvoke (System.Object[] args) [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.HandleSignal (NDesk.DBus.Message msg) [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.DispatchSignals () [0x00000] in <filename unknown>:0
          at NDesk.DBus.Connection.Iterate () [0x00000] in <filename unknown>:0
          at Mono.Zeroconf.Providers.AvahiDBus.DBusManager.IterateThread (System.Object o) [0x00000] in <filename unknown>:0

    Does anyone else have this problem?

    Read the article

  • OpenGL ES 2.0 texture distortion on large geometry

    - by Spruce
    OpenGL ES 2.0 has serious precision issues with texture sampling. I've seen topics with a similar problem, but I haven't seen a real solution to this "distorted OpenGL ES 2.0 texture" problem yet. This is not related to the texture's image format or OpenGL color buffers; it seems like a precision error. I don't know what specifically causes the precision to fail; it doesn't seem to be just the size of the geometry, because simply scaling the vertex positions passed to the vertex shader does not solve the issue. Here are some examples of the texture distortion:
    Distorted texture (on OpenGL ES 2.0): http://i47.tinypic.com/3322h6d.png
    What the texture normally looks like (also on OpenGL ES 2.0): http://i49.tinypic.com/b4jc6c.png
    What I've observed and tried:
    - The texture issue is limited to small-scale geometry on OpenGL ES 2.0; otherwise the texture sampling appears normal, but the grainy effect gradually worsens the further the vertex data is from the origin XYZ(0,0,0).
    - These texture issues do not occur on desktop OpenGL (it works fine under Windows XP, Windows 7, and Mac OS X). I've only seen the problem occur on Android, iPhone, or WebGL (which is similar to OpenGL ES 2.0).
    - All textures are powers of 2, but the problem still occurs.
    - Scaling the vertex data: the values of a vertex's X Y Z location are in the range -65536 to +65536 floating point. I realized this was large, so I tried dividing the vertex positions by 1024 to shrink the geometry and hopefully get more accurate floating-point precision, but this didn't fix or lessen the texture distortion.
    - Scaling the modelview or the projection matrix does not help.
    - Changing texture filtering options does not help: disabling mipmapping, or using GL_NEAREST/GL_LINEAR, does nothing, and enabling/disabling anisotropic filtering does nothing.
    - The banding effect still occurs even when using GL_CLAMP.
    - Dividing the texture coords passed to the vertex shader and then multiplying them back to the correct values in the fragment shader also does not work.
    - precision highp sampler2D, highp float, highp int in the fragment or the vertex shader didn't change anything (lowp/mediump did not work either).
    I'm thinking this problem has to have been solved at some point, seeing that OpenGL ES 2.0-based games have been able to render large-scale, highly detailed geometry.
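    One avenue the list above doesn't cover (a hedged suggestion, not a confirmed fix): uniform scaling cannot help here, because float error is relative to magnitude, so shrinking the world shrinks the detail with it. "Relative to eye" rendering changes that: do the large subtraction per object in double precision on the CPU, so the GPU only ever sees coordinates near zero, where floats are densest. A sketch in C:

        typedef struct { double x, y, z; } Vec3d;

        /* Compute the object's translation relative to the eye in doubles,
         * then demote the now-small result to the floats ES 2.0 consumes.
         * Build the model-view matrix from this translation plus the view
         * rotation only (no eye translation inside the matrix), and keep
         * per-vertex positions as small offsets from the object's center. */
        static void eye_relative_translation(Vec3d obj, Vec3d eye, float out[3])
        {
            out[0] = (float)(obj.x - eye.x);
            out[1] = (float)(obj.y - eye.y);
            out[2] = (float)(obj.z - eye.z);
        }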

    Read the article

  • Consecutive versus Parallel NUnit Testing

    - by Jacobm001
    My office has roughly ~300 web pages that should be tested on a fairly regular basis. I'm working with NUnit, Selenium, and C# in Visual Studio 2010. I used this framework as a basis, and I do have a few working tests. The problem I'm running into is that when I run the entire suite, a random test (or tests) will fail in each run. Run individually, they all pass. My guess is that NUnit is trying to run all 7 tests at the same time and the browser can't support this, for obvious reasons. Watching the browser visually, this does seem to be the case. I need to figure out a way in which the tests under Index_Tests are run sequentially, not in parallel. Errors:

        Selenium2.OfficeClass.Tests.Index_Tests.index_4:
        OpenQA.Selenium.NoSuchElementException : Unable to locate element: {"method":"id","selector":"textSelectorName"}
        Selenium2.OfficeClass.Tests.Index_Tests.index_7:
        OpenQA.Selenium.NoSuchElementException : Unable to locate element: {"method":"id","selector":"textSelectorName"}

    Example with one test:

        using OpenQA.Selenium;
        using NUnit.Framework;

        namespace Selenium2.OfficeClass.Tests
        {
            [TestFixture]
            public class Index_Tests : TestBase
            {
                public IWebDriver driver;

                [TestFixtureSetUp]
                public void TestFixtureSetUp()
                {
                    driver = StartBrowser();
                }

                [TestFixtureTearDown]
                public void TestFixtureTearDown()
                {
                    driver.Quit();
                }

                [Test]
                public void index_1()
                {
                    OfficeClass index = new OfficeClass(driver);
                    index.Navigate("http://url_goeshere");
                    index.SendKeyID("txtFiscalYear", "input");
                    index.SendKeyID("txtIndex", "");
                    index.SendKeyID("txtActivity", "input");
                    index.ClickID("btnDisplay");
                }
            }
        }
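    A hedged sketch of one way to get strict isolation (not from an accepted answer): create and quit the driver per test with SetUp/TearDown instead of sharing one browser per fixture, so no test can see another's browser state. TestBase, StartBrowser and OfficeClass are names from the snippet above; the rest is illustrative:

        using OpenQA.Selenium;
        using NUnit.Framework;

        [TestFixture]
        public class Index_Tests_Isolated : TestBase
        {
            private IWebDriver driver;

            [SetUp]
            public void SetUp() { driver = StartBrowser(); }  // fresh browser per test

            [TearDown]
            public void TearDown() { driver.Quit(); }         // nothing leaks to the next test

            [Test]
            public void index_1()
            {
                var index = new OfficeClass(driver);
                index.Navigate("http://url_goeshere");
                index.ClickID("btnDisplay");
            }
        }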

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies.

    1. Cross-schema dependencies

    Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1:

        SOURCE:
        CREATE TABLE SchemaA.Table1 (
          Col1 NUMBER REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
          Col1 NUMBER PRIMARY KEY);

        TARGET:
        CREATE TABLE SchemaA.Table1 (
          Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1));
        CREATE TABLE SchemaB.Table1 (
          Col1 VARCHAR2(100) PRIMARY KEY);

    We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of:
    1. Creating a table with the new schema
    2. Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER)
    3. Dropping the old table
    4. Renaming the new table to the same name as the old table

    Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is:
    1. Drop the foreign key constraint on SchemaA.Table1
    2. Rebuild SchemaB.Table1
    3. Rebuild SchemaA.Table1, adding the foreign key constraint to the new table

    This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we're populating, or point to objects in the schemas we're populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies, eg a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn't enough.

    2. Dependency chains

    The solution above will only get the immediate dependencies of objects in populated schemas. What if there's a chain of dependencies?

        A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1

    If we're only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents.
    Fortunately, we don't have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL: CONNECT BY. So, we can put all the dependencies we want to include together in a big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we're populating. We should end up with all the objects that might be affected by modifications in the initial schema we're populating. Good solution? Well, no. For one thing, it's sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance.

    3. Comparison dependencies

    Consider the following schema:

        SOURCE:
        CREATE TABLE SchemaA.Table1 (
          Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (
          Col1 NUMBER PRIMARY KEY);

        TARGET:
        CREATE TABLE SchemaA.Table1 (
          Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
          Col1 VARCHAR2(100));

    What will happen if we used the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two object models, what you will end up with is:

        SOURCE                TARGET
        SchemaA.Table1   ->   SchemaA.Table1
        SchemaB.Table1   ->   (no object exists)

    When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions:
    1. Create SchemaB.Table1
    2. Rebuild SchemaA.Table1, with the foreign key to SchemaB.Table1

    Oops. Because the dependencies are only followed within a single database, we've tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won't try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1):

        SOURCE:
        CREATE TABLE SchemaA.Table1 (
          Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (
          Col1 NUMBER PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
          Col1 NUMBER);

        TARGET:
        CREATE TABLE SchemaA.Table1 (
          Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
          Col1 VARCHAR2(100) PRIMARY KEY);
        CREATE TABLE SchemaC.Table1 (
          Col1 VARCHAR2(100) REFERENCES SchemaB.Table1);

    Although we're now including SchemaB.Table1 on both sides of the comparison, there's a third table (SchemaC.Table1) that we don't know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That's because we're only running the dependency query on the schemas we're explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database.
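    For reference, the CONNECT BY traversal described in section 2 would look roughly like this. A hedged sketch, not the product's actual query: it seeds from one schema and walks only the all_dependencies edges, omitting the UNION ALL with the foreign-key edges from all_constraints:

        SELECT DISTINCT referenced_owner, referenced_name, referenced_type
        FROM   all_dependencies
        START WITH owner = 'SCHEMAA'
        CONNECT BY NOCYCLE PRIOR referenced_owner = owner
                       AND PRIOR referenced_name  = name
                       AND PRIOR referenced_type  = type;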
    Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:
    1. Find the initial dependencies of the schemas the user has selected to compare, on the source and target
    2. Include these objects in both the source and target object populations
    3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
    4. Repeat 2 & 3 until no more objects are found

    For the schema above, this will result in the following sequence of actions:
    1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
    2. Include objects in both source and target: SchemaB.Table1 included in source and target
    3. Run the dependency query, starting with the found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
    4. Include objects in both source and target: SchemaC.Table1 included in source and target
    5. Run the dependency query on the found objects: no objects found on source; no objects to start with on target
    6. Stop

    This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits: we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.

    4. Object blacklists and fast dependencies

    When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (eg the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY and ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them!
    To cut a long story short, running the following query:

        SELECT * FROM all_tab_cols WHERE data_type_owner = 'XDB';

    results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the 'Ignore slow dependencies' option was born: not querying this and a couple of similar clauses drastically speeds up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!

    Read the article

  • SMART: DISK FAILURE IS IMMINENT (under 24 hours?)

    - by flix
    I have 2 OSes on my hard drive: Ubuntu 12.04 and Windows Vista (I keep it just because of school). Everything was OK on both OSes, but one day on Ubuntu I was getting awkward noises from my notebook's hard drive, and then everything stopped and I couldn't do anything. On Windows everything was OK. Every time I boot Ubuntu I get 5 minutes of normal running, without problems. After that the hard drive sounds crazy and nothing works. I could run S.M.A.R.T tests from an older Ubuntu CD (10.04), from the GUI (Disk Utility, or something like that) and from the terminal. From the GUI I got that the DISK FAILURE IS IMMINENT and that I have ~700 bad blocks (or broken blocks; I ran that test a while ago) on my HDD. From the terminal (I don't remember if it was fsck or a SMART test command) I got that the HDD will fail in under 24 hours. Since then 2-3 weeks have passed. I've tried "badblocks", but after 10 hours it was still running and I had to stop it. Now I have to use cygwin and other alternatives for my Linux apps on Windows. PLEASE HELP!!! How can I separate the bad blocks from Ubuntu so it wouldn't use them?

    Read the article

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces for some rather archaic systems for handling online deposits to stored-value accounts (think campus card accounts for students). Here's my dilemma: step one of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Step two involves using a proprietary protocol for communicating with a legacy system for conducting the actual deposit. Step two requires that each transaction have a unique sequence number, and that the requests' seqnums are in order. Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency with the return from the off-site payment processor is beyond our control, there's always the chance for a race condition in the order of requests passed back to the proprietary system, and if the seqnums are out of order, the request fails silently (brilliant, right?). I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me. I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single-process, single-threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We are handling timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.
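    One hedged sketch of a fix (names are illustrative, not from the post): since the transactions are already logged in Postgres, a transaction-scoped advisory lock can serialize the "take a seqnum, call the legacy system" critical section across all Unicorn workers, so numbers are issued and dispatched in the same order:

        BEGIN;
        SELECT pg_advisory_xact_lock(42);         -- blocks until this worker owns the section
        SELECT nextval('deposit_seq') AS seqnum;  -- no other worker can interleave from here on
        -- ... send the proprietary request with seqnum, record the outcome ...
        COMMIT;                                   -- the lock is released with the transaction

    The request still happens inside the caller's web request, so the user gets immediate feedback; the cost is that deposits are globally serialized, which matches what the legacy system can handle anyway.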

    Read the article

  • ClearTrace Supports Statement Level Events

    - by Bill Graziano
    One of the requests I get on a regular basis is to capture the performance of statement level events.  The latest beta has this feature available.  If you’re interested in this I’d like to get some feedback. I handle the SP:StmtCompleted and the SQL:StmtCompleted events.  These report CPU, reads, writes and duration. I’m not in any way saying it’s a good idea to trace these events.  Use with caution as this can make your traces much larger. If there are statement level events in the trace file they will be processed.  However the query screen displays batch level *OR* statement level events.  If it did both we’d be double counting. I don’t have very many traces with statement completed events in them.  That means I only did limited testing of how it parses these events.  It seems to work well so far though.  Your feedback is appreciated. If you ever write loops or cursors in stored procedures you’re going to get huge trace files.  Be warned. I also fixed an annoying bug where ClearTrace would fail and tell you a value had already been added.  This is a result of the collection I use being case-sensitive and SQL Server not being case-sensitive.  I thought I had properly coded around that but finally realized I hadn’t.  It should be fixed now. If you have any questions or problems the ClearTrace support forum is the best place for those.

    Read the article

  • An Interview with JavaOne Rock Star Martijn Verburg

    - by Janice J. Heiss
    An interview with JavaOne Rock Star Martijn Verburg, by yours truly, titled "Challenging the Diabolical Developer: A Conversation with JavaOne Rock Star Martijn Verburg," is now up on otn/java. Verburg, one of the leading movers and shakers in the Java community, is well known for his "diabolical developer" talks at JavaOne, where he uncovers some of the worst practices that Java developers are prone to. He mentions a few in the interview:
    - "A lack of communication: Software development is far more a social activity than a technical one; most projects fail because of communication issues and social dynamics, not because of a bad technical decision. Sadly, many developers never learn this lesson.
    - No source control: Some developers simply store code in local file systems and e-mail the code in order to integrate their changes; yes, this still happens.
    - Design-driven design: Some developers are inclined to cram every design pattern from the Gang of Four (GoF) book into their projects. Of course, by that stage, they've actually forgotten why they're building the software in the first place."
    He points to a couple of core assumptions and confusions that lead to trouble: "One is that developers think that the JVM is a magic box that will clean up their memory and make their code run fast, as well as make them cups of coffee. The JVM does help in a lot of cases, but bad code can and will still lead to terrible results! The other trend is to try to force Java (the language) to do something it's not very good at, such as rapid Web development. So you get a proliferation of overly complex frameworks, libraries, and techniques trying to get around the fact that Java is a monolithic, statically typed, compiled, OO environment. It's not a Golden Hammer!" Verburg has many insightful things to say about how to keep a Java User Group (JUG) going, about the "Adopt a JSR" program, bugathons, and much more. Check out the article here.

    Read the article

  • Tips about how to spread Object Oriented practices

    - by Augusto
    I work for a medium-sized company that has around 250 developers. Unfortunately, lots of them are stuck in a procedural way of thinking, and some teams constantly deliver big Transactional Script applications when in fact the application contains rich logic. They also fail to manage the design dependencies, and end up with services which depend on a large number of other services (a clean example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know that the surface of the problem is that these applications have a poor architecture and design. Another issue is that there are some developers who are against writing any kind of test. A few things I'm doing to change this (but I'm either failing or the change is too small):
    - Running presentations about design principles (SOLID, clean code, etc).
    - Workshops about TDD and BDD.
    - Coaching teams (this includes using sonar, findbugs, jdepend and other tools).
    - IDE & refactoring talks.
    A few things I'm thinking of doing in the future (but I'm concerned that they might not be good):
    - Forming a team of OO evangelists, who disseminate an OO way of thinking in different teams (these people would need to change teams every few months).
    - Running design review sessions, to criticise the design and suggest improvements (even if the improvements are not done because of time constraints, I think this might be useful).
    Something I've found with the teams I coach is that as soon as I leave them, they revert back to the old practices. I know I don't spend a lot of time with them, usually just one month. So whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing this was to hit my head on the wall until I pass out.

    Read the article

  • Why do I always get this error when using 'apt-get' commands?

    - by Venki
    I am using Ubuntu 14.04 (with Unity). Just today (as of the date of this post) I ran sudo apt-get update && sudo apt-get upgrade, and at the end of the upgrade process I got the following error:

        Setting up crossplatformui (1.0.38) ...
         * Stopping ACPI services...                                 [ OK ]
         * Starting ACPI services...                                 [ OK ]
        package libqtgui4 exist
        QT_VERSION = 4
        make -C /lib/modules/3.13.0-27-generic/build M=/usr/local/bin/ztemtApp/zteusbserial/below2.6.27 modules
        make[1]: Entering directory `/usr/src/linux-headers-3.13.0-27-generic'
          CC [M]  /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o
        /usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.c:34:28: fatal error: linux/smp_lock.h: No such file or directory
         #include <linux/smp_lock.h>
                                    ^
        compilation terminated.
        make[2]: *** [/usr/local/bin/ztemtApp/zteusbserial/below2.6.27/usb-serial.o] Error 1
        make[1]: *** [_module_/usr/local/bin/ztemtApp/zteusbserial/below2.6.27] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.13.0-27-generic'
        make: *** [modules] Error 2
        dpkg: error processing package crossplatformui (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         crossplatformui
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    From then on, whatever apt-get command I use (as far as I know, except apt-get update), I keep getting the above error at the end of the process. Still, whichever apt-get command I use does what it has to without fail. (For example, I tried installing blender with sudo apt-get install blender and it installed fine, though it showed the above error.) After this I even got a kernel update (from 3.13.0-27 to 3.13.0-29 via the Software Updater), but the issue persists. How do I solve this issue?
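    A hedged note (not a confirmed fix): crossplatformui is ZTE's 3G-modem helper, and its kernel module includes <linux/smp_lock.h>, a header removed from modern kernels, so its post-installation script can never succeed on 14.04. If the modem software isn't needed, removing the package usually clears the dpkg error:

        sudo apt-get remove crossplatformui
        sudo apt-get install -f   # let apt finish any half-configured packages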

    Read the article

  • How to write your unit tests to switch between NUnit and MSTest

    - by Justin Jones
    On my current project I found it useful to use both NUnit and MSTest for unit testing. When using ReSharper for running unit tests, it just simply works better with NUnit, and on large scale projects NUnit tends to run faster. We would have just simply used NUnit for everything, but MSTest gave us a few bonuses out of the box that were hard to pass up. Namely code coverage (without having to shell out thousands of extra dollars for the privilege) and integrated tests into the build process. I'm one of those guys who wants the build to fail if the unit tests don't pass. If they don't pass, there's no point in sending that build on to QA. So making the build work with MSTest is easiest if you just create a unit test project in your solution. This adds the right references and project type Guids in the project file so that everything just automagically works. Then (using NuGet of course) you add in NUnit. At the top of your test file, remove the using statements that refer to MSTest and replace them with the following:

        #if NUNIT
        using NUnit.Framework;
        #else
        using TestFixture = Microsoft.VisualStudio.TestTools.UnitTesting.TestClassAttribute;
        using Test = Microsoft.VisualStudio.TestTools.UnitTesting.TestMethodAttribute;
        using TestFixtureSetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
        using SetUp = Microsoft.VisualStudio.TestTools.UnitTesting.TestInitializeAttribute;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        #endif

    Basically I'm taking the NUnit naming conventions and redirecting them to MSTest. You can go the other way, of course. I only chose this direction because I had already written the tests as NUnit tests. NUnit and MSTest provide largely the same functionality with slightly differing class names. There are few actual differences between them, and I have not run into them on this project so far. To run the tests as NUnit tests, simply open up the project properties tab and add the compiler directive NUNIT. Remove it, and you're back in MSTest land.
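    To illustrate (a hedged example, not from the post): with that alias block at the top of the file, the same test class compiles as NUnit when NUNIT is defined and as MSTest otherwise:

        [TestFixture]              // MSTest sees [TestClass] through the alias
        public class CalculatorTests
        {
            [SetUp]                // MSTest sees [TestInitialize]
            public void Init() { /* per-test setup shared by both frameworks */ }

            [Test]                 // MSTest sees [TestMethod]
            public void Adds_two_numbers()
            {
                Assert.AreEqual(4, 2 + 2);  // Assert exists in both frameworks
            }
        }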

    Read the article

  • Cinnamon is broken after upgrade to 13.10

    - by user2306488
    I see reports of people with Unity broken after upgrading to 13.10. In my case Unity works fine, but Cinnamon is broken. It opens the startup applications, but there is no window manager, no menus, and the keyboard shortcuts won't work. As a consequence I can't even log out or shut down cleanly. The logs say:

        Oct 19 10:32:42 Aveline colord: Profile added: icc-1727cc5030c477b20ad75593e757248d
        Oct 19 10:32:43 Aveline gnome-session[9157]: WARNING: App 'cinnamon.desktop' exited with code 1
        Oct 19 10:32:43 Aveline gnome-session[9157]: WARNING: App 'cinnamon.desktop' respawning too quickly
        Oct 19 10:32:43 Aveline gnome-session[9157]: CRITICAL: We failed, but the fail whale is dead. Sorry....
        Oct 19 10:32:43 Aveline gnome-session[9157]: WARNING: App 'cinnamon.desktop' exited with code 1
        Oct 19 10:32:46 Aveline whoopsie[1054]: online
        Oct 19 10:32:53 whoopsie[1054]: last message repeated 12 times
        Oct 19 10:32:53 Aveline kernel: [ 1982.637049] python[9626]: segfault at 1511 ip b6c9e850 sp bf8d0980 error 4 in libglib-2.0.so.0.3800.0[b6c5b000+102000]
        Oct 19 10:32:53 Aveline kernel: [ 1982.837527] python[9631]: segfault at 0 ip b6eb13fa sp b69ff848 error 6 in libdbus-1.so.3.7.4[b6e89000+49000]
        Oct 19 10:32:54 Aveline kernel: [ 1983.030271] python[9634]: segfault at a6f4098b ip b6e52389 sp bfcdad68 error 4 in libdbus-1.so.3.7.4[b6e34000+49000]
        Oct 19 10:32:54 Aveline kernel: [ 1983.253259] python[9639]: segfault at 4 ip b6e710f4 sp b69c1bfc error 6 in libdbus-1.so.3.7.4[b6e4b000+49000]
        Oct 19 10:32:54 Aveline kernel: [ 1983.501771] python[9642]: segfault at b4 ip b6e0f076 sp bf82524c error 4 in libdbus-1.so.3.7.4[b6dfd000+49000]
        Oct 19 10:32:54 Aveline kernel: [ 1983.721334] python[9647]: segfault at 4 ip b6eab0f4 sp b69fbbfc error 6 in libdbus-1.so.3.7.4[b6e85000+49000]

    Any ideas?

    Read the article

  • Do programmers need a union? [closed]

    - by James A. Rosen
    In light of the acrid responses to the intellectual property clause discussed in my previous question, I have to ask: why don't we have a programmers' union? There are many issues we face as employees, and we have very little ability to organize and negotiate. Could we band together with the writers', directors', or musicians' guilds, or are our needs unique? Has anyone ever tried to start one? If so, why did it fail? (Or, alternatively, why have I never heard of it, despite its success?)
    Later: Keith has my idea basically right. I would also imagine the union being involved in many other topics, including:
    - legal liability for others' use/misuse of our work, especially unintended uses
    - evaluating the quality of computer science and software engineering higher education programs; unlike many other engineering disciplines, we are not required to be certified on receiving our Bachelor's degrees
    - evangelism and outreach, especially to elementary school students
    - certification: not doing it ourselves, but working with companies like ISC(2) and others to make certifications meaningful and useful
    - continuing education, similar to the previous item
    - conferences: maintaining a go-to list of organizers and other resources our members can use
    I would see it less as a traditional trade union, with little emphasis on:
    - pay: we tend to command fairly good salaries
    - outsourcing and free trade: most of us tend to be pretty free-market oriented
    - working conditions: we're the only industry where Aeron chairs are considered anything like "standard"

    Read the article

  • Am I wrong to disagree with A Gentle Introduction to symfony's template best practices?

    - by AndrewKS
    I am currently learning symfony and going through the book A Gentle Introduction to symfony, and I came across this section in "Chapter 4: The Basics of Page Creation" on creating templates (or views): "If you need to execute some PHP code in the template, you should avoid using the usual PHP syntax, as shown in Listing 4-4. Instead, write your templates using the PHP alternative syntax, as shown in Listing 4-5, to keep the code understandable for non-PHP programmers."

    Listing 4-4 - The Usual PHP Syntax, Good for Actions, But Bad for Templates

        <p>Hello, world!</p>
        <?php
        if ($test) {
            echo "<p>".time()."</p>";
        }
        ?>

    (The ironic thing about this is that the echo statement would look even better if time were a variable declared in the controller, because then you could just embed the variable in the string instead of concatenating.)

    Listing 4-5 - The Alternative PHP Syntax, Good for Templates

        <p>Hello, world!</p>
        <?php if ($test): ?>
        <p><?php echo time(); ?></p>
        <?php endif; ?>

    I fail to see how Listing 4-5 makes the code "understandable for non-PHP programmers", and its readability is shaky at best. 4-4 looks much more readable to me. Are there any programmers using symfony who write their templates like those in 4-4 rather than 4-5? Are there reasons I should use one over the other? There is the very slim chance that somewhere down the road someone less technical could be editing the template, but how does 4-5 actually make it more understandable to them?

    Read the article

  • SMART says disk failure is imminent due to bad blocks, what do I need to do?

    - by flix
    I have 2 OSes on my hard drive: Ubuntu 12.04 and Windows Vista (I keep it just because of school). Everything was OK on both OSes, but one day on Ubuntu I was getting awkward noises from my notebook's hard drive, and then everything stopped and I couldn't do anything. On Windows everything was OK. Every time I boot Ubuntu I get 5 minutes of normal run time, without problems. After that the hard drive sounds crazy and nothing works. I could run S.M.A.R.T tests from an older Ubuntu CD (10.04), from the GUI (Disk Utility, or something like that) and from the terminal. From the GUI, I got that the DISK FAILURE IS IMMINENT and that I have ~700 bad blocks (or broken blocks; I ran that test a while ago) on my HDD. From the terminal (I don't remember if it was fsck or a SMART test command) I got that the HDD will fail in under 24 hours. Since then 2-3 weeks have passed. I've tried "badblocks", but after 10 hours it was still running and I had to stop it. Now I have to use cygwin and other alternatives for my Linux apps on Windows. How can I separate the bad blocks from Ubuntu so it wouldn't use them? Please help.
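    A hedged sketch of the usual approach (not an accepted answer): on an ext3/ext4 partition, e2fsck can run a badblocks scan and record the results in the filesystem's bad-block list, so those sectors are never allocated again. Run it from a live CD with the partition unmounted; /dev/sda2 is an illustrative device name. Given that SMART already reports imminent failure, this only buys time to take backups; the honest fix is replacing the drive.

        sudo e2fsck -c -k /dev/sda2   # read-only badblocks scan; -k keeps the old list
        # e2fsck -cc instead runs a slower, non-destructive read-write test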

    Read the article

  • How to move Ubuntu 12.04 to another drive

    - by Maksim
    How can I move my Ubuntu install to another drive? I know about Clonezilla, but the problem is that the destination drive is smaller than the source one, and GParted can't copy-paste a partition unless the destination is past the last partition. I tried dpkg --selected-packages and apt-clone. The first did not install all my packages and removed existing ones, so now I don't have full Unity or all my packages. The second just failed on package configuration. Before I went that way, I had copied my /etc to the new system. My partition tables:

        Destination (gpt):
        1  1049kB  106MB   105MB   fat32  EFI System
        2  106MB   12,1GB  12,0GB  ext4
        3  12,1GB  66,3GB  54,2GB  ext4

        Source (msdos):
        1  1049kB  12,0GB  12,0GB  primary  ext4
        2  12,0GB  492GB   480GB   primary  ext4
        3  492GB   500GB   8107MB  primary  linux-swap(v1)

    GPT isn't working with Ubuntu's GRUB 1.99. I don't know why, but my laptop can't boot any device with UEFI (just a black screen), yet Ubuntu detects UEFI on a fresh install.
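    A hedged sketch of a file-level move (device names and mount points are illustrative): because rsync copies files rather than a partition image, the smaller destination is not a problem as long as the data fits. Assuming the new root partition is mounted at /mnt/newroot and the new disk is /dev/sdb:

        sudo rsync -aAXH --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*"} / /mnt/newroot
        # update /mnt/newroot/etc/fstab with the new partition UUIDs, then
        # reinstall the bootloader from inside the copy:
        for d in /dev /proc /sys; do sudo mount --bind $d /mnt/newroot$d; done
        sudo chroot /mnt/newroot grub-install /dev/sdb
        sudo chroot /mnt/newroot update-grub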

    Read the article

  • pros-cons of separate hosting accounts versus using addon domain

    - by hen3ry
    Folks: For historical reasons, I have "Site A" on "Hosting Account A", and "Site B" on "Account B", totally independent accounts with the same vendor, Bluehost. Both are primary domains. Now that Hosting Account B is just about to expire, I'm considering letting it disappear and moving Site B to an Addon domain on "Account A". Both sites are non-commercial, narrow-interest, very-low-traffic, hundreds of page views per month. The file weights for the sites are non-trivial, especially as I like to install specialized CMSs in subdomains. Since Bluehost allows unlimited hosting space there should be no issue with the file load, except I've seen hints of an issue with total file count, maybe 50k files -- which I'm not currently close to hitting, but might eventually. My question: what are the pros and cons of using separate accounts versus hosting Site B as an addon domain? Obviously, using a single account is cheaper by half, and I know that my authoring environment (DreamWeaver CS5) complains when it detects nested source trees, telling me "Synchronization" might fail in such cases, but I don't depend on this feature. What other factors should I consider? TIA

    Read the article

  • Erlang node acts like it connects, but doesn't [migrated]

    - by Malfist
    I'm trying to set up a distributed network of nodes across a few firewalls, and it's not going so well. My application is structured like this: there is a central server always running a node ([email protected]), and my co-workers' laptops connect to it on startup. This works if we're all in the office, but if someone is at home, they can connect to the master node but fail to connect to the other nodes in the swarm; i.e., Erlang fails to gossip correctly. To correct this, I've changed epmd's port number and changed the inet_dist_listen ports to known open ports (1755 and 7070, respectively). However, something fishy is going on. I can run net_adm:world() and it reports that it connects to the master node, but when I run nodes() I get an empty list. Same with net_adm:ping('[email protected]'). See:

        Eshell V5.9  (abort with ^G)
        ([email protected])1> net_adm:world().
        ['[email protected]']
        ([email protected])2> nodes().
        []
        ([email protected])3> net_adm:ping('[email protected]').
        pong
        ([email protected])4> nodes().
        []
        ([email protected])5>

    What's going on, and how can I fix it?
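    One thing worth checking (a hedged note, not from the post): Erlang distribution is a full mesh, so every laptop must be able to accept inbound connections on both the epmd port and the distribution port; a node at home behind NAT can dial out to the master but cannot be dialed back, which matches the "connects to master, sees no peers" symptom. A minimal sketch of pinning the ports so a single firewall rule per machine suffices (port numbers follow the post):

        %% sys.config (hedged sketch): pin the distribution listener to 7070
        [
          {kernel, [
            {inet_dist_listen_min, 7070},
            {inet_dist_listen_max, 7070}
          ]}
        ].
        %% Start each node with the same epmd port, e.g.:
        %%   ERL_EPMD_PORT=1755 erl -name [email protected] -config sys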

    Read the article

< Previous Page | 77 78 79 80 81 82 83 84 85 86 87 88  | Next Page >