Search Results

Search found 15994 results on 640 pages for 'accuracy problems'.


  • Developing Schema Compare for Oracle (Part 5): Query Snapshots

    - by Simon Cooper
    If you've emailed us about a bug you've encountered with the EAP or beta versions of Schema Compare for Oracle, we probably asked you to send us a query snapshot of your databases. Here, I explain what a query snapshot is, and how it helps us fix your bug.

    Problem 1: Debugging users' bug reports
    When we started the Schema Compare project, we knew we were going to get problems with users' databases - configurations we hadn't considered, features that weren't installed, unicode issues, weird dependencies... With SQL Compare, users are generally happy to send us a database backup that we can restore using a single RESTORE DATABASE command on our test servers and immediately reproduce the problem. Oracle, on the other hand, would be a lot more tricky. As Oracle generally has a 1-to-1 mapping between instances and databases, any databases users sent would have to be restored to their own instance. Furthermore, the number of steps required to get a properly working database, and the size of most Oracle databases, made it infeasible to ask every customer who came across a bug during our beta program to send us their databases. We also knew that there would be lots of issues with data security that would make it hard to get backups. So we needed an easier way to debug customers' issues and sort out what strange schema data Oracle was returning.

    Problem 2: Test execution time
    Another issue we knew we would have to solve was the execution time of the tests we would produce for the Schema Compare engine. Our initial prototype showed that querying the data dictionary for schema information was going to be slow (at least 15 seconds per database), and this is generally proportional to the size of the database. If you're running thousands of tests on the same databases, each one registering separate schemas, not only would the tests take hours and hours to run, but the test servers would be hammered senseless.

    The solution
    To solve these, we needed to be able to populate the schema of a database without actually connecting to it. Well, the IDataReader interface is the primary way we read data from an Oracle server. The data dictionary queries we use return their data in terms of simple strings and numbers, which we then process and reconstruct into an object model, and the results of these queries are identical for identical schemas. So, we can record the raw results of the queries once, and then replay these results to construct the same object model as many times as required without needing to actually connect to the original database. This is what query snapshots do. They are binary files containing the raw unprocessed data we get back from the Oracle server for all the queries we run on the data dictionary to get schema information. The core of the query snapshot generation takes the results of the IDataReader we get from running queries on Oracle, and passes the row data to a BinaryWriter that writes it straight to a file. The query snapshot can then be replayed to create the same object model; when the results of a specific query are needed by the population code, we can simply read the binary data stored in the file on disk and present it through an IDataReader wrapper. This is far faster than querying the server over the network, and allows us to run tests in a reasonable time.

    Query snapshots also allow us to easily debug a customer's problem; using a simple snapshot generation program, users can generate a query snapshot that can be sent along with a bug report, which we can immediately replay on our machines to debug the issue, rather than having to obtain database backups and restore databases to test systems. There are also far fewer problems with data security; query snapshots only contain schema information, which is generally less sensitive than table data.

    Query snapshots implementation
    However, actually implementing such a feature did have a couple of 'gotchas' to it. My second blog post detailed the development of the dependencies algorithm we use to ensure we get all the dependencies in the database, and that algorithm uses data from both databases to find all the needed objects - what database you're comparing to affects what objects get populated from both databases. We get information on these additional objects using an appropriate WHERE clause on all the population queries. So, in order to accurately replay the results of querying the live database, the query snapshot needs to be a snapshot of a comparison of two databases, not just of populating a single database. Furthermore, although the population queries (e.g. querying all_tab_cols to get column information) can simply be passed straight from the IDataReader to the BinaryWriter, we need to hook into and run the live dependencies algorithm while we're creating the snapshot to ensure we get the same WHERE clauses, and the same query results, as if we were populating straight from a live system. We also need to store the results of the dependencies queries themselves, as the resulting dependency graph is stored within the OracleDatabase object that is produced, and is later used to help order actions in synchronization scripts. This is significantly helped by the dependencies algorithm being deterministic - given the same input, it will always return the same output. Therefore, when we're replaying a query snapshot and processing dependency information, we simply have to return the results of the queries in the order we got them from the live database, rather than trying to calculate the contents of all_dependencies on the fly.

    Query snapshots are a significant feature in Schema Compare that really helps us to debug problems with the tool, as well as making our testers happier. Although not really user-visible, they are very useful to the development team to help us fix bugs in the product much faster than we otherwise would be able to.
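    The record/replay idea described above is straightforward to sketch. The following C# fragment is a minimal illustration of my own (the names and signatures are invented, not Red Gate's actual code): it serializes each row of an IDataReader to a BinaryWriter, assuming, as the post says of the data dictionary queries, that all values are simple strings and numbers.

        using System.Data;
        using System.IO;

        static class QuerySnapshotSketch
        {
            // Hypothetical sketch of query snapshot recording: write each row
            // of an IDataReader out to a binary file so it can be replayed
            // later without a live connection. A real implementation would
            // need to preserve column types; here everything becomes a string.
            public static void RecordQuery(IDataReader reader, BinaryWriter writer)
            {
                writer.Write(reader.FieldCount);
                while (reader.Read())
                {
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        object value = reader.GetValue(i);
                        writer.Write(value is DBNull ? "" : value.ToString());
                    }
                }
            }
        }

    Replay would then wrap the recorded rows in a class implementing IDataReader, so the population code cannot tell it is reading from disk rather than from Oracle.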

    Read the article

  • Breaking 1NF to model subset constraints. Does this sound sane?

    - by Chris Travers
    My first question here. Apologies if it is in the wrong forum, but this seems pretty conceptual. I am looking at doing something that goes against conventional wisdom and want to get some feedback as to whether this is totally insane or will result in problems, so critique away! I am on PostgreSQL 9.1 but may be moving to 9.2 for this part of this project. To re-iterate: does it seem sane to break 1NF in this way? I am not looking for debugging code so much as for where people see problems that this might lead.

    The Problem
    In double entry accounting, financial transactions are journal entries with an arbitrary number of lines. Each line has either a left value (debit) or a right value (credit), which can be modelled as a single value with negatives as debits and positives as credits or vice versa. The sum of all debits and credits must equal zero (so if we go with a single amount field, sum(amount) must equal zero for each financial journal entry). SQL-based databases, pretty much required for this sort of work, have no way to express this sort of constraint natively, and so any approach to enforcing it in the database seems rather complex.

    The Write Model
    The journal entries are append only. There is a possibility we will add a delete model, but it will be subject to a different set of restrictions and so is not applicable here. If and when we allow deletes, we will probably do them using a simple ON DELETE CASCADE designation on the foreign key, and require that deletes go through a dedicated stored procedure which can enforce the other constraints. So inserts and selects have to be accommodated, but updates and deletes do not, for this task.

    My Proposed Solution
    My proposed solution is to break first normal form and model constraints on arrays of tuples, with a trigger that breaks the rows out into another table.

        CREATE TABLE journal_line (
            entry_id bigserial primary key,
            account_id int not null references account(id),
            journal_entry_id bigint not null, -- adding references later
            amount numeric not null
        );

    I would then add "table methods" to extract debits and credits for reporting purposes:

        CREATE OR REPLACE FUNCTION debits(journal_line) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS $$
            SELECT CASE WHEN $1.amount < 0 THEN $1.amount * -1 ELSE NULL END;
        $$;

        CREATE OR REPLACE FUNCTION credits(journal_line) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS $$
            SELECT CASE WHEN $1.amount > 0 THEN $1.amount ELSE NULL END;
        $$;

    Then the journal entry table (simplified for this example):

        CREATE TABLE journal_entry (
            entry_id bigserial primary key, -- no natural keys :-(
            journal_id int not null references journal(id),
            date_posted date not null,
            reference text not null,
            description text not null,
            journal_lines journal_line[] not null
        );

    Then a table method and check constraint:

        CREATE OR REPLACE FUNCTION running_total(journal_entry) RETURNS numeric
        LANGUAGE sql IMMUTABLE AS $$
            SELECT sum(amount) FROM unnest($1.journal_lines);
        $$;

        ALTER TABLE journal_entry
            ADD CHECK ((journal_entry.running_total) = 0);

        ALTER TABLE journal_line
            ADD FOREIGN KEY (journal_entry_id) REFERENCES journal_entry(entry_id);

    And finally we'd have a breakout trigger:

        CREATE OR REPLACE FUNCTION je_breakout() RETURNS TRIGGER
        LANGUAGE plpgsql AS $$
        BEGIN
            IF TG_OP = 'INSERT' THEN
                INSERT INTO journal_line (journal_entry_id, account_id, amount)
                SELECT NEW.entry_id, account_id, amount
                  FROM unnest(NEW.journal_lines);
                RETURN NEW;
            ELSE
                RAISE EXCEPTION 'Operation Not Allowed';
            END IF;
        END;
        $$;

        CREATE TRIGGER je_breakout_trigger
            AFTER INSERT OR UPDATE OR DELETE ON journal_entry
            FOR EACH ROW EXECUTE PROCEDURE je_breakout();

    Of course the example above is simplified. There will be a status table that will track approval status, allowing for separation of duties, etc. However, the goal here is to prevent unbalanced transactions. Any feedback? Does this sound entirely insane?

    Standard Solutions?
    In getting to this point I have to say I have looked at four different current ERP solutions to this problem:

    1. Represent every line item as a debit and a credit against different accounts.
    2. Use of foreign keys against the line item table to enforce an eventual running total of 0.
    3. Use of constraint triggers in PostgreSQL.
    4. Forcing all validation here solely through the app logic.

    My concerns are that #1 is pretty limiting and very hard to audit internally. It's not programmer-transparent, and so it strikes me as being difficult to work with in the future. The second strikes me as very complex, requiring a series of constraints and foreign keys against self to make work; it is hard to sort out, at least in my mind, and thus hard to work with. The fourth could be done, as we force all access through stored procedures anyway, and this is the most common solution (have the app total things up and throw an error otherwise). However, I think proof that a constraint is followed is superior to test cases, and so the question becomes whether this in fact generates insert anomalies rather than solving them. If this is a solved problem it isn't the case that everyone agrees on the solution....
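    For contrast with option 4 above, the app-side totaling is only a few lines. This is a hypothetical C# sketch of "have the app total things up and throw an error otherwise"; the types and names are invented for illustration:

        using System;
        using System.Collections.Generic;

        class JournalLine
        {
            public int AccountId;
            public decimal Amount; // negative = debit, positive = credit
        }

        static class JournalValidator
        {
            // Throws if the entry's lines do not sum to zero.
            public static void EnsureBalanced(IList<JournalLine> lines)
            {
                decimal total = 0m;
                foreach (JournalLine line in lines)
                    total += line.Amount;
                if (total != 0m)
                    throw new InvalidOperationException(
                        "Unbalanced journal entry: lines sum to " + total);
            }
        }

    As the post notes, this is easy, but it is a test-time guarantee rather than a proof: nothing stops a code path that bypasses the check from inserting an unbalanced entry.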

    Read the article

  • Soft lockup after upgrade - cannot install from live CD

    - by nbm
    I dual-boot a MacIntel Core 2 Duo with nVidia graphics. I ran the upgrade from Ubuntu 13.10 to 14.04 (64 bit). On restart I ran into: {numbers} Bug: soft lockup - CPU#0 stuck for 22s! [swapper/0:1] Tried loading an earlier kernel: same problem. Tried re-installing Ubuntu from a live CD that has worked in the past: version 13.04. Same problem. Tried re-partitioning the hard drive using Mac OS X Disk Utility and then installing Ubuntu 14.04 LTS from the live CD. Same problem. Not possible to verify the live CD disk (creates the same "soft lockup" bug). Tried installing from the live CD with version 13.04 that I know works (that's how I got Ubuntu on this machine in the first place). Same problem. I know this is not a hardware problem, as OS X works just fine; I am using it right now on the same machine. I have been using various versions of Ubuntu for 2 years. Things I cannot do: open a terminal, verify the CD image, start Ubuntu from CD (same soft lockup problem). This problem is similar to some other questions, none of which have been satisfactorily answered: Ubuntu 14.04 soft lockup on Vostro 3500; Cannot do fresh install of Ubuntu 13.04 while booting from DVD: "soft lockup" bug; Live CD stalls when installing Ubuntu 13.10.

    UPDATE 6/11/14: Following some much-appreciated advice from bain (see below) I burned a 12.04 LTS disk and started with the kernel parameters noapic, nolapic, acpi=off, nomodeset, elevator=deadline, and clocksource=jiffies. With all of these parameters I was able to load the 12.04 LTS CD ("Try without installing"). It worked fine. However, as soon as I tried to install Ubuntu from the CD, my wired ethernet (eth0) connection would hang. There are already various askubuntu questions and bug reports about this problem, none of which had answers for me. (E.g., dhclient eth0 does nothing, none of the various reset commands does anything, manually setting the IP etc. does nothing. I could reliably kill the ethernet connection by clicking "install ubuntu" every single time.) I could go ahead and install 12.04 without an internet connection, but the install would freeze after mostly completing (I tried several times). There were some relevant error messages in the details of the install output script that, IIRC, had to do with searching for missing files and not being able to access eth0 (internet) to get them. To be honest I gave up at that point and I'm not sure I wrote those down. If I find some notes I will post them.

    At this point I no longer have Ubuntu on my system. I wiped the partitions and am using exclusively OS X. I am leaving this question in case it helps anyone else with similar problems. I love open source and I love Linux, and the next machine I get I will probably just build from Arch. At the moment I miss the repositories and a lot of other things about Ubuntu, but the OS X terminal is 'nix, I can pretty much use all the open source apps I like, and while I am not a fan of the Apple software it gets the job done for me. Unlike Ubuntu, which can't even install. I realize this isn't necessarily a place for a soapbox speech, but when I first installed 12.04 several years ago there were already people in the community complaining that Canonical was going too "commercial". But I loved it. Several years later, all I've seen is Canonical adding more not-so-useful bells and whistles to Ubuntu while continually failing to fix basic problems on upgrades. With a dual-boot (and sometimes triple-boot) system it always took me some tweaking to get an upgrade to work, and to some extent that is okay. But at this point I feel like Canonical ought to just put a price tag on Ubuntu. All I see is more commercialism and advertising and product tie-ins, while ongoing problems do not get fixed. I am a big fan of open-source, not-for-profit enterprise. I am also a big fan of for-profit enterprise, which certainly has its place and usefulness. I am not a fan of companies who pretend to be in favor of open source but really are just out to make a buck, and IMNSHO that is what Canonical has become. This is a great community and I wish you all the best, but my next install of Linux will not be Ubuntu.

    Read the article

  • No More NCrunch For Me

    - by Steve Wilkes
    When I opened up Visual Studio this morning, I was greeted with this little popup: NCrunch is a Visual Studio add-in which runs your tests while you work so you know if and when you've broken anything, as well as providing coverage indicators in the IDE and coverage metrics on demand. It recently went commercial (which I thought was fair enough), and time is running out for the free version I've been using for the last couple of months. From my experiences using NCrunch I'm going to let it expire, and go about my business without it. Here's why. Before I start, let me say that I think NCrunch is a good product, which is to say it's had a positive impact on my programming. I've used it to help test-drive a library I'm making right from the start of the project, and especially at the beginning it was very useful to have it run all my tests whenever I made a change. The first problem is that while that was cool to start with, it's recently become a bit of a chore.

    Problems Running Tests
    NCrunch has two 'engine modes' in which it can run tests for you - it can run all your tests when you make a change, or it can figure out which tests were impacted and only run those. Unfortunately, it became clear pretty early on that the second option (which is marked as 'experimental') wasn't really working for me, so I had to have it run everything. With a smallish number of tests, and while I was adding new features, that was great, but I've now got 445 tests (still not exactly loads) and am more in a 'clean and tidy' mode where I know that a change I'm making will probably only affect a particular subset of the tests. With that in mind it's a bit of a drag sitting there after I make a change and having to wait for NCrunch to run everything. I could disable it and manually run the tests I know are impacted, but then what's the point of having NCrunch? If the 'impacted only' engine mode worked well this problem would go away, but that's not what I found. Secondly, what's wrong with this picture? I've got 445 tests, and NCrunch has queued 455 tests to run. So it's queued duplicate tests - in this quickly-screenshotted case 10, but I've seen the total queue get up over 600. If I'm already itchy waiting for it to run all my tests against a change I know only affects a few, I'm even itchier waiting for it to run a lot of them twice.

    Problems With Code Coverage
    NCrunch marks each line of code with a dot to say if it's covered by tests - a black dot says the line isn't covered, a red dot says it's covered but at least one of the covering tests is failing, and a green dot means all the covering tests pass. It also calculates coverage statistics for you. Unfortunately, there are a couple of flaws in the coverage. Firstly, it doesn't support ExcludeFromCodeCoverage attributes. This feature has been requested and I expect it will be included in a later release, but right now it isn't. So this: ...is counted as a non-covered line, and drags your coverage statistics down. Hmph. As well as that, coverage of certain types of code is missed. This: ...is definitely covered. I am 100% absolutely certain it is, by several tests. NCrunch doesn't pick it up; down go my coverage statistics. I've had NCrunch find genuinely uncovered code which I've been able to remove, and that's great, but what's the coverage percentage on this project? Umm... I don't know.

    Conclusion
    None of these are major, tool-crippling problems, and I expect NCrunch to get much better in future releases. The current version has some great features, like this: ...that's a line of code with a failing test covering it, and NCrunch can run that failing test and take me to that line exquisitely easily. That's awesome! I'd happily pay for a tool that can do that. But here's the thing: NCrunch (currently) costs $159 (about £100) for a personal licence and $289 (about £180) for a commercial one. I'm not sure which one I'd need, as my project is a personal one which I'm intending to open-source, but I'm a professional, self-employed developer; in any case, that seems like a lot of money for an imperfect tool. If it did everything it's advertised to do more or less perfectly I'd consider it, but it doesn't. So no more NCrunch for me.
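    For reference, code carrying an ExcludeFromCodeCoverage attribute looks something like the following. This is my own illustrative C# example, not the snippet from the post's screenshot:

        using System.Diagnostics.CodeAnalysis;

        public class Widget
        {
            // Marked as excluded from coverage metrics; the complaint above is
            // that NCrunch (at the time) ignored this attribute and still
            // counted the method body as uncovered lines.
            [ExcludeFromCodeCoverage]
            public override string ToString()
            {
                return "Widget";
            }
        }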

    Read the article

  • The Dreaded Startup Repair Loop on Win 7

    - by HighAltitudeCoder
    For most people, upgrading to Windows 7 has been a relatively painless process.  Not me.  I am in the unlucky 1% or less who had a somewhat less pleasant experience.  First, I cloned my entire hard drive onto a larger (and much faster) solid state hard drive, only experiencing minimal problems. Then, I bought the Retail version of Windows 7 Ultimate, took a deep breath and... oh yeah, I almost forgot - BACK UP THE COMPUTER.  The next morning I upgraded to Win 7 and everything seemed fine, until... I rebooted the system, the nice Windows 7 launch graphics came up, it was about to launch and AWWW, are you kidding me?!?!  Back to the BIOS splash screen?  Next comes the sequence of failure - attempt repair - unable to repair - do you want to wipe your hard drive decisions. Because I purchased the retail version, a number was provided where I could call Microsoft Tech Support.  When I did, they instructed me to click "Install" from my installation CD, which did not work.  When I tried the "Upgrade" option, it reached an impasse, telling me that I had a newer version of Win 7, and thus could not Upgrade.  If you choose "Install" you will lose everything... files, programs, EVERYTHING.  Or at least this is what it tells you.  I was not willing to take the risk. To make things worse, I had installed a new antivirus software application before I realized my system was unstable (Trend Micro Titanium Internet Security), and this was causing additional problems. One interesting thing, and the only saving grace as it turns out, was that my system WOULD successfully reboot into the OS if I chose to restart it, rather than shut it down.  If I chose to shut down, I would have to go through the loop again until I was given the option to restart. As it turned out, I needed to update my BIOS.  I assumed that since I had updated my BIOS a long time ago to settings that were stable under Windows Vista Ultimate x64, Win 7 would adopt the same settings, and I incorrectly expected there wouldn't be any problems.  WRONG. My BIOS had a setting to halt the boot cycle if various kinds of errors were detected.  Windows Vista didn't care about this, but forget it under Windows 7.  I immediately corrected that BIOS setting.  Next, there were the two separate BIOS settings: enable USB mouse and enable USB keyboard.  The only sequence of events that would work was to start my reboot process over from scratch with a hard-wired non-USB keyboard and mouse.  When the system booted under these settings, it didn't detect any errors due to either the mouse or keyboard, and actually booted for the first time in a long while (let me tell ya, that's an amazing experience after fiddling with settings for two entire weekends!) Next step: leave your old mouse and keyboard connected, but also connect your other two devices (mouse, keyboard) that use USB connections.  During the boot cycle, the operating system will not fail due to missing requirements during startup, and it will then pick up the new drivers necessary to use your new hardware. If you think you are in the clear here, you are wrong.  The next VERY IMPORTANT step is to remember to change your settings in the BIOS upon the next startup.  Specifically, you will need to change your BIOS to enable USB mouse and USB keyboard input.  If you don't, Windows will detect an incompatibility upon the next startup, and you will be stuck once again in the endless cycle of reboot/Startup Repair/reboot/Startup Repair, without ever reaching a successful boot.

    Here's the thing - the BIOS and the drivers registered in Win 7 need to match.  If they don't, you're going to lose another weekend worrying and fiddling, all the while wondering if you've permanently damaged your hard drive beyond repair. (Sigh.)  In the end, things worked out.  I must note that it is saddening to see how many posts there are out there that recommend just doing a clean install, as if it's the only option.  How many countless poor souls have lost their data, their backups, their pictures and videos, all for nothing other than the fact that the person giving advice just didn't know what to do at that point? My advice to you: try having a look at your BIOS settings first, making sure Win 7 can find your BIOS settings, and also disable in your BIOS anything that might halt your system boot-up process if it encounters errors.

    Read the article

  • Code Contracts: Unit testing contracted code

    - by DigiMortal
    Code contracts and unit tests are not replacements for each other. They both have a different purpose and a different nature. It does not matter if you are using code contracts or not - you still have to write tests for your code. In this posting I will show you how to unit test code with contracts. In my previous posting about code contracts I showed how to avoid ContractExceptions that are defined in the code contracts runtime and that are not accessible to us at design time. This was one step further towards making my randomizer testable. In this posting I will complete the mission.

    Problems with current code
    This is my current code.

        public class Randomizer
        {
            public static int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires<ArgumentOutOfRangeException>(
                    min < max,
                    "Min must be less than max"
                );

                Contract.Ensures(
                    Contract.Result<int>() >= min &&
                    Contract.Result<int>() <= max,
                    "Return value is out of range"
                );

                var rnd = new Random();
                return rnd.Next(min, max);
            }
        }

    As you can see this code has some problems: the Randomizer class is static and cannot be instantiated, so we cannot move this class between components if we need to, and GetRandomFromRangeContracted() is not fully testable because we cannot currently affect the random number generator output, and therefore we cannot test the post-contract. Now let's solve these problems.

    Making randomizer testable
    As a first thing I made Randomizer a class that must be instantiated. This is a simple thing to do. Now let's solve the problem with the Random class. To make Randomizer testable I define an IRandomGenerator interface and a RandomGenerator class. The public constructor of Randomizer accepts IRandomGenerator as an argument.

        public interface IRandomGenerator
        {
            int Next(int min, int max);
        }

        public class RandomGenerator : IRandomGenerator
        {
            private Random _random = new Random();

            public int Next(int min, int max)
            {
                return _random.Next(min, max);
            }
        }

    And here is our Randomizer after the total make-over.

        public class Randomizer
        {
            private IRandomGenerator _generator;

            private Randomizer()
            {
                _generator = new RandomGenerator();
            }

            public Randomizer(IRandomGenerator generator)
            {
                _generator = generator;
            }

            public int GetRandomFromRangeContracted(int min, int max)
            {
                Contract.Requires<ArgumentOutOfRangeException>(
                    min < max,
                    "Min must be less than max"
                );

                Contract.Ensures(
                    Contract.Result<int>() >= min &&
                    Contract.Result<int>() <= max,
                    "Return value is out of range"
                );

                return _generator.Next(min, max);
            }
        }

    It seems inconvenient to instantiate Randomizer now, but you can always use DI/IoC containers and break compiled dependencies between the components of your system.

    Writing tests for randomizer
    IRandomGenerator solved the problem of testing the post-condition. Now it is time to write tests for the Randomizer class. Writing tests for contracted code is not easy. The main problem is still the ContractException that we are not able to access. It is still the main exception we get as soon as contracts fail. Although pre-conditions are able to throw exceptions of the type we want, we cannot do much when post-conditions fail. We have to use the Contract.ContractFailed event, and this event is called for every contract failure. This way we find ourselves in a situation where supporting the input interface well makes it impossible to support the output interface well, and vice versa. ContractFailed is a nasty hack and it works in a pretty weird way. Although the documentation says that ContractFailed is a good choice for testing contracts, it is still pretty painful. As a last chance I got tests working almost normally when I wrapped them up. Can you remember a similar solution from the times of Visual Studio 2008 unit tests? I cannot understand how Microsoft was able to mess up testing again.

        [TestClass]
        public class RandomizerTest
        {
            private Mock<IRandomGenerator> _randomMock;
            private Randomizer _randomizer;
            private string _lastContractError;

            public TestContext TestContext { get; set; }

            public RandomizerTest()
            {
                Contract.ContractFailed += (sender, e) =>
                {
                    e.SetHandled();
                    e.SetUnwind();

                    throw new Exception(e.FailureKind + ": " + e.Message);
                };
            }

            [TestInitialize()]
            public void RandomizerTestInitialize()
            {
                _randomMock = new Mock<IRandomGenerator>();
                _randomizer = new Randomizer(_randomMock.Object);
                _lastContractError = string.Empty;
            }

            #region InputInterfaceTests
            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_not_less_than_max()
            {
                try
                {
                    _randomizer.GetRandomFromRangeContracted(100, 10);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }
            }

            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_equal_to_max()
            {
                try
                {
                    _randomizer.GetRandomFromRangeContracted(10, 10);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }
            }

            [TestMethod]
            public void GetRandomFromRangeContracted_should_work_when_min_is_less_than_max()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 50;

                _randomMock.Setup(r => r.Next(minValue, maxValue))
                    .Returns(returnValue)
                    .Verifiable();

                var result = _randomizer.GetRandomFromRangeContracted(minValue, maxValue);

                _randomMock.Verify();
                Assert.AreEqual<int>(returnValue, result);
            }
            #endregion

            #region OutputInterfaceTests
            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_less_than_min()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 7;

                _randomMock.Setup(r => r.Next(10, 100))
                    .Returns(returnValue)
                    .Verifiable();

                try
                {
                    _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }

                _randomMock.Verify();
            }

            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_more_than_max()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 102;

                _randomMock.Setup(r => r.Next(10, 100))
                    .Returns(returnValue)
                    .Verifiable();

                try
                {
                    _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }

                _randomMock.Verify();
            }
            #endregion
        }

    Although these tests are pretty awful and contain hacks, we are at least now able to make sure that our code works as expected. Here is the test list after running these tests.

    Conclusion
    Code contracts are very new stuff in the Visual Studio world, and as a young technology they have some problems - like all other new bits and bytes in the world. As you saw, making our contracted code testable is easy only up to the point where pre-conditions are considered. When we start dealing with post-conditions, we end up with hacked tests. I hope that future versions of code contracts will solve the error handling issues so that testing of contracted code will be easier than it is right now.

    Read the article

  • Tips / techniques for high-performance C# server sockets

    - by McKenzieG1
    I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance. Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355. Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side. Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.) Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems. Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well. Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
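    For reference, the thread-per-client design described above looks roughly like this. This is a hypothetical C# 2.0 sketch of the pattern, not the poster's actual code:

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading;

        class BlockingServerSketch
        {
            static void Main()
            {
                TcpListener listener = new TcpListener(IPAddress.Any, 9000);
                listener.Start();
                while (true)
                {
                    Socket client = listener.AcceptSocket();
                    // One dedicated thread per client, blocking on Receive.
                    Thread t = new Thread(delegate()
                    {
                        byte[] buffer = new byte[4096];
                        int read;
                        while ((read = client.Receive(buffer)) > 0)
                        {
                            // Parse messages from buffer[0..read) and enqueue
                            // them for the core server logic (omitted here).
                        }
                    });
                    t.IsBackground = true;
                    t.Start();
                }
            }
        }

    With 60-150 clients the thread count of this shape is usually manageable in itself, which squares with the low CPU usage observed in the post.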

    Read the article

  • Cast error on SQLDataReader (entlib 5.0, asp.net 3.5, vb)

    - by Phil
    My site is using Enterprise Library v5.0, mainly the DAAB. Some functions such as ExecuteScalar and ExecuteDataSet are working as expected. The problems appear when I start to use readers. I have this function in my includes class:

        Public Function AssignedDepartmentDetail(ByVal Did As Integer) As SqlDataReader
            Dim reader As SqlDataReader
            Dim Command As SqlCommand = db.GetSqlStringCommand("select something from somewhere where something = @did")
            db.AddInParameter(Command, "@did", Data.DbType.Int32, Did)
            reader = db.ExecuteReader(Command)
            reader.Read()
            Return reader
        End Function

    This is called from my aspx.vb like so:

        reader = includes.AssignedDepartmentDetail(Did)
        If reader.HasRows Then
            TheModule = reader("templatefilename")
            PageID = reader("id")
        Else
            TheModule = "#"
        End If

    This gives the following error on the db.ExecuteReader line:

        Unable to cast object of type 'Microsoft.Practices.EnterpriseLibrary.Data.RefCountingDataReader' to type 'System.Data.SqlClient.SqlDataReader'.

    Can anyone shed any light on how I go about getting this working? Will I always run into problems when dealing with readers via entlib?
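    As the error message says, EntLib 5's ExecuteReader returns a RefCountingDataReader wrapper, which implements IDataReader but is not a SqlDataReader. One way around the cast is to code against the interface instead of the concrete type; a hedged sketch of that, shown in C# for consistency with the other examples on this page (the question itself is VB.NET, and the method and parameter names here mirror the post's):

        using System.Data;
        using System.Data.Common;
        using Microsoft.Practices.EnterpriseLibrary.Data;

        class DepartmentQueries
        {
            // Declare the return type as IDataReader and no cast is needed.
            public IDataReader AssignedDepartmentDetail(Database db, int did)
            {
                DbCommand command = db.GetSqlStringCommand(
                    "select something from somewhere where something = @did");
                db.AddInParameter(command, "@did", DbType.Int32, did);
                return db.ExecuteReader(command);
            }
        }

    IDataReader exposes Read(), HasRows is not on the interface, but indexing by column name still works, so the calling code needs only minor changes.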

    Read the article

  • CSS issue with elements spanning columns

    - by bigFoot
    Hi folks. Overview: I'm trying to create a relatively simple page layout, detailed below, and running into problems no matter how I try to approach it.

    Concept:
    - A standard-size-block layout. I'll quote unit widths: each content block is 240px square with 5px of margin around it.
    - A left column of fixed width of 1 unit (245px - 1 block + margin to left). No problems here.
    - A right column of variable width to fill the remaining space. No problems here either.
    - In the left column, a number of 1unit x 1unit blocks fixed down the column. Also some blank space at the top - again, not a problem.
    - In the right column: a number of free-floating blocks of standard unit-sizes which float around and fill the space given to them by the browser window. No problems here.
    - Lastly, a single element, 2 units wide, which sits half in the left column and half in the right column, and which the blocks in the right column still float around. Here be dragons.

    Please see here for a diagram: http://is.gd/bPUGI

    Problem: No matter how I approach this, it goes wrong. Below is the code for my existing attempt at a solution. My current problem is that the 1x1 blocks on the right do not respect the 2x1 block, and as a result half of the 2x1 block is overwritten by a 1x1 block in the right-hand column. I'm aware that this is almost certainly an issue with position: absolute taking things out of flow. However, I can't really find a way round that which doesn't just throw up another problem instead.

    Code:

        <html>
        <head>
        <title>wat</title>
        <style type="text/css">
        body { background: #ccc; color: #000; padding: 0px 5px 5px 0px; margin: 0px; }
        #leftcol { width: 245px; margin-top: 490px; position: absolute; }
        #rightcol { left: 245px; position: absolute; }
        #bigblock { float: left; position: relative; margin-top: -240px; background: red; }
        .cblock { margin: 5px 0px 0px 5px; float: left; overflow: hidden; display: block; background: #fff; }
        .w1 { width: 240px; }
        .w2 { width: 485px; }
        .l1 { height: 240px; }
        </style>
        </head>
        <body>
        <div class="cblock w2 l1" id="bigblock">
            <h1>DRAGONS</h1>
            <p>Here be they</p>
        </div>
        <div id="leftcol">
            <div class="cblock w1 l1">
                <h1>Left 1</h1>
                <p>1x1 block</p>
            </div>
        </div>
        <div id="rightcol">
            <div class="cblock w1 l1"><h1>Right 1</h1><p>1x1 block</p></div>
            <div class="cblock w1 l1"><h1>Right 2</h1><p>1x1 block</p></div>
            <div class="cblock w1 l1"><h1>Right 3</h1><p>1x1 block</p></div>
            <div class="cblock w1 l1"><h1>Right 4</h1><p>1x1 block</p></div>
            <div class="cblock w1 l1"><h1>Right 5</h1><p>1x1 block</p></div>
            <div class="cblock w1 l1"><h1>Right 6</h1><p>1x1 block</p></div>
            <div class="cblock w1 l1"><h1>Right 7</h1><p>1x1 block</p></div>
        </div>
        </body>
        </html>

    Constraints: One final note - I need cross-browser compatibility, though I'm more than happy to enforce this with JS if necessary. That said, if a CSS-only solution exists, I'd be extremely happy. Thanks in advance!

    Read the article

  • Django/PIL Error - Caught an exception while rendering: The _imagingft C module is not installed

    - by kenok
    I'm trying to run a webapp/site on my machine, which is running OS X 10.6.2, and I'm having some problems:

        Caught an exception while rendering: The _imagingft C module is not installed

    Doing import _imagingft in Python gives me this:

        >>> import _imagingft
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: dlopen(/Library/Python/2.6/site-packages/PIL/_imagingft.so, 2): Symbol not found: _FT_Done_Face
          Referenced from: /Library/Python/2.6/site-packages/PIL/_imagingft.so
          Expected in: flat namespace
         in /Library/Python/2.6/site-packages/PIL/_imagingft.so

    It seems that the FreeType library is the one having problems. No errors so far when installing PIL or when I compiled(?) the jpeg and freetype libraries. I'm on Django 1.1.1, Python 2.6.2.

    Read the article

  • Java Simple WGS84 Lat Lon to Pixel X, Y

    - by Cnich
    I've read a multitude of information regarding map projection today. The amount of information available is overwhelming. I am attempting to simply convert lat, long values into screen X, Y coordinates, not using any map. I do not need the values projected onto any map, just onto the window. The window itself represents approx. a 1500x1500 meter location. The lat, long accuracy needed is to 1/10th of a second. What may be some simpler ways of converting a lat/long representation to the screen? I've read several articles and posts regarding translation onto images, but nothing related to the natural Java coordinate system. Thanks for any insight.
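    For an area this small (about 1.5 km on a side, where 1/10th of an arcsecond is roughly 3 meters of latitude), a flat-earth equirectangular approximation is generally sufficient. A hedged sketch of that arithmetic, written in C# for consistency with the other examples on this page (the origin and scale parameter names are my own invention):

        using System;

        static class GeoToScreen
        {
            // Converts a WGS84 lat/lon to window pixels using a flat-earth
            // approximation. originLat/originLon is the window's top-left
            // corner; metersWide is the real-world width the window covers.
            public static void ToPixel(
                double lat, double lon,
                double originLat, double originLon,
                double metersWide, int windowWidthPx,
                out int x, out int y)
            {
                double metersPerDegLat = 111320.0; // approximate, varies slightly
                double metersPerDegLon =
                    111320.0 * Math.Cos(originLat * Math.PI / 180.0);

                double eastMeters  = (lon - originLon) * metersPerDegLon;
                double southMeters = (originLat - lat) * metersPerDegLat;

                double pixelsPerMeter = (double)windowWidthPx / metersWide;
                x = (int)Math.Round(eastMeters * pixelsPerMeter);
                y = (int)Math.Round(southMeters * pixelsPerMeter);
            }
        }

    At this scale the error from ignoring the projection entirely is well below the stated 1/10th-of-a-second accuracy requirement.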

    Read the article

  • What useful things could a JavaScript library provide?

    - by Delan Azabani
    In many of my answers I repeatedly urge users not to use JavaScript libraries like jQuery. I even wrote a blog post about the problems that using a library creates. Some of these problems include holding back native standards development, keeping users comfortable with IE, and abstracting the developer away from real JavaScript. If a site doesn't require IE as part of its audience, then how are libraries useful? The other popular browsers share extremely similar implementations and work well with things like JavaScript 1.6 arrays and AJAX. This is not a troll question; I'm truly wondering what they're useful for.

    Read the article

  • Copying the bitmap contents of a UIView's context to that of another UIView

    - by Joonas Trussmann
    Basically what I want to do is copy the already rendered content (a PDF drawn into the UIView's graphics context using CGContextDrawPDFPage()) onto a similar UIView, without having to re-render the PDF. The idea is that I'd then be able to perform an animated transform on the UIView and later re-render the PDF with more accuracy. For both UIViews I'm using a larger-than-screen CATiledLayer to make it easier to re-render the PDF once the user zooms in, if that makes any difference. Any tips? I'm kind of lost here.

    Read the article

  • Error while reading from spreadsheet in Ruby

    - by intern
    I am getting the following error while reading from a spreadsheet file. I have searched a lot on Google; I have seen posts with similar problems, but no replies to them. Does anyone know how to resolve this error?

        C:/Ruby/lib/ruby/gems/1.8/gems/ruby-ole-1.2.8.2/lib/ole/storage/file_system.rb:125:in `dirent_from_path': No such file or directory - Workbook (Errno::ENOENT)
            from C:/Ruby/lib/ruby/gems/1.8/gems/ruby-ole-1.2.8.2/lib/ole/storage/file_system.rb:158:in `open'
            from C:/Ruby/lib/ruby/gems/1.8/gems/spreadsheet-0.6.3/lib/spreadsheet/excel/reader.rb:1060:in `setup'
            from C:/Ruby/lib/ruby/gems/1.8/gems/spreadsheet-0.6.3/lib/spreadsheet/excel/reader.rb:118:in `read'
            from C:/Ruby/lib/ruby/gems/1.8/gems/spreadsheet-0.6.3/lib/spreadsheet/excel/workbook.rb:32:in `open'
            from C:/Ruby/lib/ruby/gems/1.8/gems/spreadsheet-0.6.3/lib/spreadsheet.rb:62:in `open'
            from C:/Ruby/lib/ruby/gems/1.8/gems/spreadsheet-0.6.3/lib/spreadsheet.rb:68:in `open'
            from Fluent_search.rb:90:in `initialize'
            from Fluent_search.rb:77:in `new'
            from Fluent_search.rb:77:in `display_from'
            from Fluent_search.rb:97

    Read the article

  • Bugs in Excel's ActiveX combo boxes?

    - by k.robinson
    I have noticed that I get all sorts of annoying errors when:

    - I have ActiveX comboboxes on a worksheet (not an Excel form)
    - The comboboxes have event code linked to them (e.g., onchange events)
    - I use their ListFillRange or LinkedCell properties (clearing these properties seems to alleviate a lot of problems)
    - (Not sure if this is connected) there is data validation on the targeted linked cell

    I program a fairly complex Excel application that does a ton of event handling and uses a lot of controls. Over the months, I have been trying to deal with a variety of bugs involving those combo boxes. I can't recall all the details of each instance now, but these bugs tend to involve pointing the ListFillRange and LinkedCell properties at named ranges, and often have to do with the combo box events triggering at inappropriate times (such as when Application.EnableEvents = False). These problems seemed to grow bigger in Excel 2007, so that I had to give up on these combo boxes entirely (I now use combo boxes contained in user forms, rather than directly on the sheets). Has anyone else seen similar problems? If so, was there a graceful solution? I have looked around with Google and so far haven't spotted anyone with similar issues. Some of the symptoms I end up seeing are:

    - Excel crashing when I start up (involves combobox_onchange, a ListFillRange pointing at a named range on a different sheet, and workbook_open interactions). (Note: I also had some data validation on the linked cells in case a user edited them directly.)
    - Excel rendering bugs (usually when the combo box changes, some cells from another sheet get randomly drawn over the top of the current sheet). Sometimes the screen flashes entirely to another sheet for a moment.
    - Excel losing its mind (or rather, the call stack) (related to the first bullet point). Sometimes when a function modifies a property of the comboboxes, the combobox onchange event fires but never returns control to the function that caused the change in the first place.
    - The combobox_onchange events being triggered even when Application.EnableEvents = False.
    - Events firing when they shouldn't (I posted another question on Stack Overflow related to this).

    At this point, I am fairly convinced that ActiveX comboboxes are evil incarnate and not worth the trouble. I have switched to including these comboboxes inside a userform module instead. I would rather inconvenience users with popup forms than random visual artifacts and crashing (with data loss).

    Read the article

  • Sensor Manager getOrientation, doesn't display text and only works in debug mode?

    - by Aidan
    Hi guys, my code appears to crash. I'm trying to set up a SensorManager for getting the orientation of the device. I also have a listener that should update when conditions change. But when I run this code it crashes...

        public void sensor() {
            // Locate the SensorManager using Activity.getSystemService
            sm = (SensorManager) getSystemService(SENSOR_SERVICE);
            sm.getOrientation(mR, mOrientation);
            // Register your SensorListener
            sm.registerListener(sl, sm.getDefaultSensor(Sensor.TYPE_ORIENTATION),
                    SensorManager.SENSOR_DELAY_NORMAL);
        }

        private final SensorEventListener sl = new SensorEventListener() {
            @Override
            public void onAccuracyChanged(Sensor sensor, int accuracy) {
            }

            @Override
            public void onSensorChanged(SensorEvent event) {
                if (event.sensor.getType() == Sensor.TYPE_ORIENTATION) {
                    Global.Orientation = "Orientation is equal to: "
                            + SensorManager.getOrientation(mR, mOrientation);
                }
            }
        };

    sm is defined as SensorManager sm; as a class-wide variable also. I've got some other classes outputting the orientation to a screen, which I know work. The problem is somewhere in these methods. Am I doing it wrong, or is there a better way of doing this?

    Read the article

  • Is InstallShield 2010 really necessary to support Windows 7 (as opposed to InstallShield 12)?

    - by Steve
    I have received an e-mail from Flexera containing the following statement: "We understand that many of our customers still using InstallShield 12 maybe having difficulties sending installations to Windows 7 desktops, 64 Bit servers or R2 2008. Therefore we would like to provide this opportunity for you to erase these potential / current problems and upgrade despite InstallShield 12 being End of Life'd some time ago." We have not had any problems ourselves. I am interested to hear whether anyone else has had any issues, or whether this is a scare tactic to get us to upgrade unnecessarily. I did examine the differences in some detail at the time of the original end-of-life notice, but could not see any issues that affected our installations. Interestingly, they call this an 'amnesty' - almost implying we are criminals for not upgrading (cf. OED).

    Read the article

  • webrat, rspec, nokogiri segfault

    - by adamaig
    I'm getting a segfault in nokogiri (1.4.1), running under cucumber 0.6.1/webrat 0.7.0/rspec 1.3.x, when I run:

        response.should have_selector("div", :class => "fieldWithErrors")

    The div in the page is actually:

        <div class="fieldWithErrors validation_error"> stuff </div>

    Everything runs fine if I just test nokogiri against a test document:

        >> require 'nokogiri'
        >> doc = Nokogiri::HTML.parse("<div class='a b'>love to have problems</div>")
        => ...
        >> doc.css(".a")
        => [#<Nokogiri::XML::Element:0x3d62ac name="div" attributes=[#<Nokogiri::XML::Attr:0x3d6258 name="class" value="a b">] children=[#<Nokogiri::XML::Text:0x3d5e68 "love to have problems">]>]

    So I want to know how to set up a minimal webrat test of an HTML fragment document to help file a bug.

    Read the article

  • How to get Java Decompiler / JD / JD-Eclipse running in Eclipse Helios

    - by Universalspezialist
    Java Decompiler (JD) is generally recommended as a good, well, Java decompiler. JD-Eclipse is the Eclipse plugin for JD. I had problems on several different machines getting the plugin running. Whenever I tried to open a .class file, the standard "Source not found" editor would show, displaying low-level bytecode disassembly, not the Java source output you'd expect from a decompiler. The installation docs at http://java.decompiler.free.fr/?q=jdeclipse are not bad, but quite vague when it comes to troubleshooting. Opening this question to collect additional information: What problems did you encounter before JD was running in Eclipse Helios? What was the solution?

    Read the article

  • Debugging FLEX/AS3 memory leaks

    - by Scott Evernden
    I have a pretty big Flex & Papervision3D application that creates and destroys objects continually. It also loads and unloads SWF resource files. While it's running, the SWF slowly consumes memory until, at about 2GB, it croaks the player. Obviously I am pretty sure I let go of references to instances I no longer want, with the expectation that the GC will do its job, but I am having a heck of a time figuring out where the problem lies. I've tried using the profiler and its options for capturing memory snapshots, etc., but my problem remains elusive. I think there are known problems using the debug Flash player also? But I get no joy using the release version either. How do you go about tracking down memory leak problems using Flex/AS3? What are some strategies, tricks, or tools you have used to locate the consumption?

    Read the article

  • Method to capture a screenshot of user's browser to aid in bug reporting

    - by Buzz
    I'm looking for a way to make it easy for technically unsophisticated users to submit screenshots of their browser to me, to aid in debugging web application problems. There will be a button on all pages inside a web application which they can use to report problems, and which I would like to submit a screenshot (among other things). http://www.snapabug.com/ is very close to what I want, but I need to be able to customize a few things that service won't let me. The production environment is LAMP. I expect there must be something Flash-based that can do this, but I've not been able to find it. Thanks!

    Read the article

  • jQuery tablesorter problem

    - by Don
    Hi, I'm having a couple of problems with the jQuery tablesorter plugin. If you click on a column header, it should sort the data by that column, but:

    1. The rows are not properly sorted - they appear to be compared as strings (1, 1, 2183, 236)
    2. The total row is included in the sort

    Regarding (2), I can't easily move the total row to a table footer, because the HTML is generated by the displaytag tag library, over which I have limited control. Regarding (1), I don't understand why the sort doesn't work, as I've used exactly the same JavaScript shown in the simplest example in the tablesorter tutorials. In fact, there's only a single line of JS code, which is:

        <body onload="jQuery('#communityStats').tablesorter();">

    Thanks in advance, Don

    Read the article

  • SIMPLE OpenSSL RSA Encryption in C/C++ is causing me headaches

    - by Josh
    Hey guys, I'm having some trouble figuring out how to do this. Basically I just want a client and server to be able to send each other encrypted messages. This is going to be incredibly insecure, because I'm trying to figure this all out, so I might as well start at the ground floor. So far I've got all the keys working, but encryption/decryption is giving me hell. I'll start by saying I am using C++, but most of these functions require C strings, so whatever I'm doing may be causing problems. Note that on the client side I receive the following error with regard to decryption:

        error:04065072:rsa routines:RSA_EAY_PRIVATE_DECRYPT:padding check failed

    I don't really understand how padding works, so I don't know how to fix it. Anywho, here are the relevant variables on each side, followed by the code.

    Client:

        RSA *myKey; // Loaded with private key

        // The below will hold the decrypted message
        unsigned char* decrypted = (unsigned char*) malloc(RSA_size(myKey));

        /* The below holds the encrypted string received over the network.
           Originally held in a C-string but C strings never work for me and
           scare me so I put it in a C++ string */
        string encrypted;

        // The reinterpret_cast line was to get rid of an error message.
        // Maybe the cause of one of my problems?
        // (Note that sizeof(encrypted.c_str()) is the size of a const char*
        // pointer, not the length of the ciphertext.)
        if (RSA_private_decrypt(sizeof(encrypted.c_str()),
                reinterpret_cast<const unsigned char*>(encrypted.c_str()),
                decrypted, myKey, RSA_PKCS1_OAEP_PADDING) == -1)
        {
            cout << "Private decryption failed" << endl;
            ERR_error_string(ERR_peek_last_error(), errBuf);
            printf("Error: %s\n", errBuf);
            free(decrypted);
            exit(1);
        }

    Server:

        RSA *pkey;  // Holds the client's public key
        string key; // Holds a session key I want to encrypt and send

        // The below will hold the encrypted message
        unsigned char *encrypted = (unsigned char*) malloc(RSA_size(pkey));

        // The reinterpret_cast line was to get rid of an error message.
        // Maybe the cause of one of my problems?
        // (Same note as above: sizeof(key.c_str()) is the size of a pointer,
        // not the length of the key being encrypted.)
        if (RSA_public_encrypt(sizeof(key.c_str()),
                reinterpret_cast<const unsigned char*>(key.c_str()),
                encrypted, pkey, RSA_PKCS1_OAEP_PADDING) == -1)
        {
            cout << "Public encryption failed" << endl;
            ERR_error_string(ERR_peek_last_error(), errBuf);
            printf("Error: %s\n", errBuf);
            free(encrypted);
            exit(1);
        }

    Let me once again state, in case I didn't before, that I know my code sucks, but I'm just trying to establish a framework for understanding this. I'm sorry if this offends you veteran coders. Thanks in advance for any help you guys can provide!

    Read the article
