Search Results

Search found 22065 results on 883 pages for 'performance testing'.

Page 131/883 | < Previous Page | 127 128 129 130 131 132 133 134 135 136 137 138  | Next Page >

  • Structuring Access Control In Hierarchical Object Graph

    - by SB2055
    I have a Folder entity that can be Moderated by users. Folders can contain other folders, so I may have a structure like this: Folder 1 Folder 2 Folder 3 Folder 4. I have to decide how to implement Moderation for this entity, and I've come up with two options:

    Option 1: When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine if the user can moderate Folder 3, I check and see if User 1 is the moderator of any parent folders. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity.

    Option 2: When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1, and all child entities down to the grandest of grandchildren when the relationship is created; if it's ever removed, iterate back down the graph to remove the relationship. If I add something under Folder 2 after this relationship has been made, I just copy all Moderators into the new Entity. But when I need to show only the top-level Folders that a user is Moderating, I need to query all folders that have a parent folder that the user does not moderate, as opposed to option 1, where I just query any items that the user is moderating.

    Thoughts: I think it comes down to determining if users will be querying for all parent items more than they'll be querying child items... if so, then option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework in case it matters.
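    (For illustration, a minimal sketch of the Option 1 check - the Folder shape and the in-memory map are my own assumptions, not Entity Framework code: walk up the parent chain and look for an explicit moderator relationship.)

        interface Folder {
          id: number;
          parentId: number | null;
          moderatorIds: Set<number>; // explicit relationships only (Option 1)
        }

        // True if the user moderates this folder or any of its ancestors.
        function canModerate(folderId: number, userId: number, folders: Map<number, Folder>): boolean {
          let current = folders.get(folderId);
          while (current !== undefined) {
            if (current.moderatorIds.has(userId)) {
              return true;
            }
            current = current.parentId === null ? undefined : folders.get(current.parentId);
          }
          return false;
        }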

    Read the article

  • Why not write all tests at once when doing TDD?

    - by RichK
    The Red-Green-Refactor cycle for TDD is well established and accepted. We write one failing unit test and make it pass as simply as possible. What are the benefits of this approach over writing many failing unit tests for a class and making them all pass in one go? The test suite still protects you against writing incorrect code or making mistakes in the refactoring stage, so what's the harm? Sometimes it's easier to write all the tests first as a form of 'brain dump', to quickly write down all the expected behavior in one go.
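    (For illustration, one turn of the cycle sketched as a Jest-style TypeScript test - the add() example is mine, not from the question: write one failing expectation, make it pass with the simplest code, then refactor before writing the next test.)

        // Red: written first, fails because add() does not exist yet.
        test("adds two numbers", () => {
          expect(add(2, 3)).toBe(5);
        });

        // Green: the simplest implementation that makes the single test pass.
        function add(a: number, b: number): number {
          return a + b;
        }

        // Refactor: nothing to clean up yet, so the next failing test is written now, not earlier.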

    Read the article

  • I don't understand how TDD helps me get a good design if I need a design to start testing it

    - by Michael Stum
    I'm trying to wrap my head around TDD, specifically the development part. I've looked at some books, but the ones I found mainly tackle the testing part - the history of NUnit, why testing is good, Red/Green/Refactor, and how to create a String Calculator. Good stuff, but that's "just" unit testing, not TDD. Specifically, I don't understand how TDD helps me get a good design if I need a design to start testing it. To illustrate, imagine these 3 requirements:

    - A catalog needs to have a list of products
    - The catalog should remember which products a user viewed
    - Users should be able to search for a product

    At this point, many books pull a magic rabbit out of a hat and just dive into "Testing the ProductService", but they don't explain how they came to the conclusion that there is a ProductService in the first place. That is the "development" part in TDD that I'm trying to understand. There needs to be an existing design, but anything beyond entity services (that is: there is a Product, so there should be a ProductService) is nowhere to be found. For example, the second requirement requires me to have some concept of a User, but where would I put the functionality to remember viewed products? And is search a feature of the ProductService or a separate SearchService? How would I know which I should choose? According to SOLID, I would need a UserService, but if I design a system without TDD, I might end up with a whole bunch of single-method services. Isn't TDD intended to make me discover my design in the first place? I'm a .NET developer, but Java resources would also work. I feel that there doesn't seem to be a real sample application or book that deals with a real line-of-business application. Can someone provide a clear example that illustrates the process of creating a design using TDD?
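    (As a hedged illustration of that "development" step - the Catalog name below is my own invention, not from any book: the first requirement's test is written against a type that does not exist yet, and making it compile and pass is what forces the first design decision.)

        // Written first: the test invents the Catalog API it wishes existed.
        test("a catalog lists its products", () => {
          const catalog = new Catalog([{ name: "Cat Food" }, { name: "Dog Food" }]);
          expect(catalog.products().map(p => p.name)).toContain("Cat Food");
        });

        // Written second: the minimal design needed to satisfy the test above.
        interface Product { name: string; }

        class Catalog {
          constructor(private readonly items: Product[]) {}
          products(): Product[] { return this.items; }
        }

    Whether viewed-product tracking later lands on a User type, on the Catalog, or on a separate service would be driven the same way: by the next failing test rather than up front.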

    Read the article

  • What is the average page size for a single-page application (SPA)? [on hold]

    - by Emmanuel Istace
    I'm developing a single page application with a lot of CSS & JavaScript. For now the page is 1.3 MB, composed of 5 sections. Here are the rounded stats:

    - Document: 10 kb
    - Style: 60 kb
    - Images: 450 kb (already compressed; includes a big gallery of thumbnails)
    - Javascript: 700 kb - 600 kb of "framework" (jquery, jquery-ui, bootstrap, modernizr, waypoint, ...) and 100 kb of custom JS
    - Fonts: 125 kb

    And the site is not finished yet (it will include the gmap api, and some others...). My questions are: Do you have any statistics about the average weight of an SPA? As this is the whole website, do you think it's acceptable? Is lazy loading (for images) a solution? What will the impact be for SEO? Is the "200kb rule" of Google still relevant? Do you know good tools to detect which JavaScript code is not used during the execution of a page, and thus the opportunity to trim those 700 kb of framework JS? Can a caching strategy be an answer?
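    (On the lazy-load point, a minimal sketch of the usual pattern - the data-src attribute and selector are my own naming: keep the real URL out of src until the thumbnail nears the viewport.)

        // Swap in the real image URL only when a thumbnail approaches the viewport.
        const observer = new IntersectionObserver((entries, obs) => {
          for (const entry of entries) {
            if (!entry.isIntersecting) continue;
            const img = entry.target as HTMLImageElement;
            img.src = img.dataset.src ?? img.src; // real URL kept in data-src
            obs.unobserve(img);                   // load once, then stop watching
          }
        }, { rootMargin: "200px" });              // start loading a little before it scrolls into view

        document.querySelectorAll<HTMLImageElement>("img[data-src]")
                .forEach(img => observer.observe(img));

    Recent browsers also support a native loading="lazy" attribute on img, which covers the simple cases without any script.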

    Read the article

  • How to test high load on a website? [closed]

    - by rFactor
    Possible Duplicate: How do you load test your application? I am nearing the point of finalizing a website and it will soon be released. We have bought some traffic and advertisement packages, and the nature of the site makes it heavier than typical static-like websites. I am looking to hear about good ways to test how well the site performs under heavy load. I already know ab. Got any other tips to spare?
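    (JMeter, Gatling and similar tools are the usual next step after ab. Purely to illustrate what a concurrency probe does, here is a rough sketch in TypeScript for Node 18+ - the URL and counts are placeholders, and this is not a substitute for a real load-testing tool.)

        // Fire `concurrency`-sized batches of requests and report latency, as a rough smoke test.
        async function probe(url: string, total: number, concurrency: number): Promise<void> {
          const latencies: number[] = [];
          for (let done = 0; done < total; done += concurrency) {
            const batch = Array.from({ length: Math.min(concurrency, total - done) }, async () => {
              const start = Date.now();
              const res = await fetch(url);
              await res.arrayBuffer(); // drain the body so the timing includes the transfer
              latencies.push(Date.now() - start);
            });
            await Promise.all(batch);
          }
          latencies.sort((a, b) => a - b);
          console.log(`requests=${latencies.length} p50=${latencies[Math.floor(latencies.length / 2)]}ms ` +
                      `max=${latencies[latencies.length - 1]}ms`);
        }

        probe("https://example.com/", 200, 20).catch(console.error);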

    Read the article

  • Database in the cloud?

    - by Jlouro
    Some of my recent clients are asking for remote connections to the office server, for standalone work, etc., in WinForms applications. Since the concept of the web is remote connection to a server, both for data and resources, it should be possible to place both of these in the cloud and have the WinForms apps connect to it as if they were web apps. Has anyone tested this? Does it work like this? Is it fast enough? Is it secure? What is the best cloud host for this type of work?

    Read the article

  • How can I compute the Big-O notation for a given piece of code?

    - by TheNew Rob Mullins
    So I just took a data structure midterm today and I was asked to determine the run time, in Big O notation, of the following nested loop:

        for (int i = 0; i < n - 1; i++) {
            for (int j = 0; j < i; j += 2) {
                // 1 statement
            }
        }

    I'm having trouble understanding the formula behind determining the run time. I thought that since the inner loop has 1 statement, and using the series equation (n * (n - 1)) / 2, I figured it to be 1n * (n - 1) / 2, thus equaling (n^2 - n) / 2. And so I generalized the runtime to be O(n^2 / 2). I'm not sure this is right though, haha - was I supposed to divide my answer again by 2 since j is being incremented in steps of 2? Or is my answer completely off?
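    (For reference, a worked version of the count, assuming the inner loop really does step j by 2: for a given i the inner body runs about i/2 times, so the total is)

        \sum_{i=0}^{n-2} \left\lceil \frac{i}{2} \right\rceil \;\approx\; \frac{1}{2}\sum_{i=0}^{n-2} i \;=\; \frac{(n-1)(n-2)}{4} \;=\; O(n^2)

    Constant factors such as the 1/2 or 1/4 are dropped in Big-O, so the result is written O(n^2) rather than O(n^2 / 2).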

    Read the article

  • Bug severity classification issues

    - by KyleMinn
    In a book I have, there is the following classification of defects:

    - Critical: A defect receives a "critical" severity level if one or more critical system functionalities are impaired by the defect and there is no workaround.
    - High: A defect receives a "high" severity level if some fundamental system functionalities are impaired but a workaround exists.
    - Medium: A defect receives a "medium" severity level if no critical functionality is impaired and a workaround exists for the defect.
    - Low: A defect receives a "low" severity level if the problem involves a cosmetic feature of the system.

    To be honest, I do not get it. Take point 2, for example: what if a fundamental but not critical feature is impaired and there is NOT a workaround? The same for point 3: what if no critical functionality is affected but there is no workaround? E.g. an optional field in the registration form does not work - no workaround, but barely an issue.

    Read the article

  • How to correctly install another Linux flavour (in my case PCLinuxOS) alongside an existing Ubuntu 10.10 installation?

    - by Vincenzo
    Hello everybody, and a prosperous and productive 2011! I have Ubuntu 10.10 (32-bit) installed on my laptop. I would like to install PCLinuxOS (the KDE or LXDE version, I don't know yet) on the same computer alongside Ubuntu 10.10. I would like to test a new PCLinuxOS 'in real conditions', as well as to resolve my question regarding an Audio CD playback issue (a DBus timeout error when mounting). I would be grateful if somebody could advise me how to perform the installation of another Linux flavour without breaking :) the existing Ubuntu system. Thank you in advance for your advice and recommendations. Here is my current partitioning:

    Read the article

  • What is the value of checking in failing unit tests?

    - by user20194
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat", it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task.

    The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or the code does not hit the bug based on how it is currently called. But unit tests can be created that execute specific scenarios, with valid inputs, that will cause the bug to be seen.

    What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test should be created to execute the code once someone fixes it. On one hand, it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding through the ones that are expected to fail vs. failures caused by a new code check-in would be difficult.
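    (As one possible convention - sketched here with Jest's skip/todo markers in TypeScript; NUnit's [Ignore] or a category attribute plays the same role in .NET - the known-bug test stays in source control but is kept out of the pass/fail gate until the fix lands.)

        enum Animal { Null = "Null", Cat = "Cat" }

        // Current (buggy) implementation: the lookup is case sensitive.
        function parseAnimal(input: string): Animal {
          return input === "Cat" ? Animal.Cat : Animal.Null;
        }

        // Known-failing behaviour, checked in but skipped so the build stays green;
        // the expected behaviour is documented and trivial to re-enable (bug id hypothetical).
        test.skip("parses 'cat' regardless of case (bug #123)", () => {
          expect(parseAnimal("cat")).toBe(Animal.Cat);
        });

        // Or record the intent only, with no body written yet.
        test.todo("parses mixed-case animal names");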

    Read the article

  • GZipped Images - Is It Worth It?

    - by charlie
    Most image formats are already compressed. But in fact, if I take an image and compress it [by gzipping it], and then compare the compressed one to the uncompressed one, there is a difference in size, even if not such a dramatic difference. The question is: is it worth gzipping images? The content size flushed down to the client's browser will be smaller, but there will be some client overhead in un-gzipping it. Please advise.
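    (One way to settle it for your own assets is simply to measure; a sketch using Node's built-in zlib, with a placeholder file path.)

        import { readFileSync } from "node:fs";
        import { gzipSync } from "node:zlib";

        // Compare on-disk size vs. gzipped size for an already-compressed image.
        const path = "photo.jpg"; // placeholder path
        const raw = readFileSync(path);
        const zipped = gzipSync(raw);
        const saving = 100 * (1 - zipped.length / raw.length);

        console.log(`${path}: ${raw.length} bytes raw, ${zipped.length} bytes gzipped (${saving.toFixed(1)}% saved)`);

    For JPEG/PNG the saving is usually only a few percent, which is why web servers are commonly configured to skip gzip for image content types.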

    Read the article

  • Is doing AB Tests using site redirection a bad practice?

    - by user40358
    I'm developing hotel websites here in Brazil. When a site is done, we run an A/B test against the old version to measure conversion and show the hotel owner how good our site is. Since I cannot put the old site inside the new one as a subresource (newone.com/old), I currently do those A/B tests as follows: 1) I create 2 Google Analytics accounts, one for each site (old and new); 2) I put the GA tags in the old website's pages (changing its possibly existing GA ID to the just-created one); 3) I add a JavaScript snippet that redirects the user to the old website (on a different URL and a different domain) with 50% probability. Then I compare all the metrics, events and goals between those two GA accounts. How bad is this? How might Google interpret the fact that users are sometimes redirected and sometimes not? The experiment usually runs for 2 weeks. Is there any other alternative for doing this in a better way?
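    (For context, the redirect itself is typically just a coin flip on page load; a sketch with a placeholder old-site URL, persisting the assignment in a cookie so a returning visitor keeps seeing the same variant.)

        const OLD_SITE = "https://old.example.com/"; // placeholder URL

        // Assign the visitor to a variant once, remember it, and redirect the "old" group.
        function getVariant(): "new" | "old" {
          const saved = document.cookie.match(/ab_variant=(new|old)/)?.[1];
          if (saved === "new" || saved === "old") return saved;
          const variant = Math.random() < 0.5 ? "new" : "old";
          document.cookie = `ab_variant=${variant}; path=/; max-age=${60 * 60 * 24 * 14}`; // two weeks
          return variant;
        }

        if (getVariant() === "old") {
          window.location.replace(OLD_SITE);
        }

    Without that stickiness, the same visitor can bounce between both variants during the experiment and inflate the numbers in both GA accounts.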

    Read the article

  • How will I know when my company is ready to receive an investment? [migrated]

    - by gunshor
    How will I know when my company is ready to receive an investment? I am starting a company and have bootstrapped it so far. I have produced four versions of the demo. The first fully-working version is underway. Getting this to a beta phase product will require capital, which requires an investment, which requires an investor, which requires I stop working on the product and go out and talk to people about it. The last time I raised money from investors, it took a while but I was successful. I don't want it to take a while. I want it to be brain dead simple for an investor to understand the value so that I can optimize the time I spend with the product. Is my logic flawed? What is the best way to approach raising money, while limiting both my time and risk? Thanks.

    Read the article

  • Check for Instant File Initialization

    - by TiborKaraszi
    Instant File Initialization, IFI, is generally a good thing to have. Check out this earlier blog post of mine if you don't know what IFI is and why it is a good thing: blog. The purpose of this blog post is to provide a simple script you can use to check whether you have IFI turned on. Note that the script below uses undocumented commands, and might take a while if you have a large errorlog file...

        USE MASTER;
        SET NOCOUNT ON
        -- *** WARNING: Undocumented commands used in this script !!! ***
        --Exit...(read more)

    Read the article

  • How selective do we need to be for an index to be used?

    - by TiborKaraszi
    You know the answer already: it depends. But I often see some percentage value quoted, and the point of this post is to show that there is no such percentage value. To get the most out of this blog post, you should understand the basic structure of an index, i.e. what the b+ tree looks like. You should also understand the difference between a clustered and a non-clustered index. In essence, you should be able to visualize these structures and searches through them as you read the text. If you find...(read more)

    Read the article

  • FxCop / Code Analysis with VS2010 Ultimate

    - by Cuartico
    I've been getting some information about this, but I still can't find a proper answer. I was recently asked in my company to "run an FxCop analysis on that code and tell me the results". OK - I have VS2010 Ultimate, which has Code Analysis, but before making any comment I browsed the internet, because I want to recommend the best choice. So, let's say I'm going to use the same rules on both analyzers: Should I recommend using one over the other? Should I say "hey, that's kind of old, let's use Code Analysis"? Should I get the same results on different computers? (From what I understand, FxCop gives you some "points", and from what I've read it sometimes gives you different points on different computers; I don't know whether the same happens with Code Analysis.) Thanks, any help would be appreciated.

    Read the article

  • What does "kTriangles/s" mean in hardware graphics benchmark reports?

    - by swquinn
    I've looked around and found several sites offering benchmarking statistics for mobile platforms and I've been seeing the unit of measure as "kTriangles/s". Originally I misread this, missing the 'k'; does this translate to "thousand(s) of triangles/s", e.g.: 8902 kTriangles/s = 8,902,000 triangles/s (I'm pretty sure that my interpretation is correct, but I hope someone can confirm this for me) Thanks!

    Read the article

  • HTML5 - Does it have the power to handle a large 2D game with a huge world?

    - by user15858
    I have been using XNA Game Studio, but due to private reasons (as well as the ability to publish anywhere, and my strong interest in the Isogenic engine), I would like to switch to HTML5. However, I have very high 2D graphics demands for my game. The game itself will have an HDD size of anywhere between 6 GB (min) and 12 GB (max), which would be a full game deployed offline. The size of the images isn't significantly large, so streaming would be entirely possible if only those assets required were streamed as needed. The game has a massive file size because of the sheer amount of content. Some images or spritesheets would be quite massive (e.g. a very large dragon, which if animated in a spritesheet would be split into two 4096x4096 sheets or one 8192x8192 sheet). Most assets would be very small - about 7 MB for a full character with 15 animations in every direction (not all animations required immediately), so only a few hundred KB need to be downloaded before the game loads. My question, however, is whether the graphical power of HTML5 is enough to animate several characters on screen at once, when it flips through frames quite rapidly. All my sprites have about 25 frames per animation, 5 directions (a spritesheet for each direction & animation), and run at 30 fps. Upon changing direction or animation, or when a new character enters, spritesheets would change and be constantly loading/unloading. If I pack all directions into a single sheet, it would be about 2048x2048 per sheet. Most frameworks have no problem with this, but from what I read I am afraid that HTML5's graphical capabilities will limit me. Since it takes significant time simply to animate characters in any language, I'd like a quick answer.
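    (As a point of reference on the drawing side: a canvas renders one spritesheet frame per drawImage call using a source rectangle. A minimal sketch follows - the frame size, sheet layout and file name are my assumptions.)

        const canvas = document.querySelector("canvas")!;
        const ctx = canvas.getContext("2d")!;

        const sheet = new Image();
        sheet.src = "walk_north.png"; // placeholder spritesheet: 25 frames laid out in one row
        const FRAME_W = 256, FRAME_H = 256, FRAMES = 25, FPS = 30;

        let frame = 0;
        let last = 0;
        function draw(now: number): void {
          if (now - last >= 1000 / FPS) { // advance the animation at ~30 fps
            frame = (frame + 1) % FRAMES;
            last = now;
          }
          ctx.clearRect(0, 0, canvas.width, canvas.height);
          // Source rectangle picks the current frame; destination rectangle places it on screen.
          ctx.drawImage(sheet, frame * FRAME_W, 0, FRAME_W, FRAME_H, 100, 100, FRAME_W, FRAME_H);
          requestAnimationFrame(draw);
        }
        sheet.onload = () => requestAnimationFrame(draw);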

    Read the article

  • Reliable Storage Systems for SQL Server

    By validating the IO path before commissioning the production database system, and performing ongoing validation through page checksums and DBCC checks, you can hopefully avoid data corruption altogether, or at least nip it in the bud. If corruption occurs, then you have to take the right decisions fast to deal with it. Rod Colledge explains how a pessimistic mindset can be an advantage.

    Read the article

  • Designing Efficient SQL: A Visual Approach

    Sometimes, it is a great idea to push away the keyboard when tackling the problems of an ill-performing, complex query, and take up pencil and paper instead. By drawing a diagram to show all of the tables involved, the joins, the volume of data involved, and the indexes, you'll see more easily the relative efficiency of the possible paths that your query could take through the tables.

    Read the article

  • How do people maintain their test suite?

    - by Ida
    In particular, I'm curious about the following aspects: How do you know that your test cases are wrong (or out of date) and need to be repaired (or discarded)? I mean, even if a test case becomes invalid, it might still pass and remain silent, which could let you falsely believe that your software works okay. So how do you notice such problems in your test suite? How do you know that your test suite is no longer sufficient and that new test cases should be added? I guess this has something to do with requirement changes, but is there any systematic approach to checking the adequacy of a test suite?

    Read the article

  • Mimicking a bluetooth disconnection

    - by user2529672
    I've written a program to control a Bluetooth device. I'm trying to test cases where the Bluetooth disconnects, i.e. if it's out of range. Physically taking the device out of range is one possibility, but it's quite cumbersome and I have to go outside my office to achieve it. What can I do to trigger a disconnection? Is there, for example, an interferer I can set up, say with an Android phone, that would make the connection drop? Or can I limit the Bluetooth transmit power? Any other possibilities?

    Read the article

  • New DMV–yes… no… that’s complicated

    - by Michael Zilberstein
    Remember the excitement about the new sys.dm_exec_query_profiles DMV? It promised to be a game-changer, providing query visibility at runtime and easily extractable information about heavy iterators in the execution plan. So it had been announced - but was missing. Now, in CTP2, it is finally here. So, singing one of my favorite Queen songs - "... It finally happened - I'm slightly mad..." - I tried to observe query execution data at runtime. And... nothing. The query is running, the DMV is empty. That's really disappointing...(read more)

    Read the article

  • Is there a better way to organize my module tests that avoids an explosion of new source files?

    - by luser droog
    I've got a neat (so I thought) way of having each of my modules produce a unit-test executable if compiled with the -DTESTMODULE flag. This flag guards a main() function that can access all static data and functions in the module, without #including a C file. From the README:

        -- Modules --
        The various modules were written and tested separately before being coupled
        together to achieve the necessary basic functionality. Each module retains its
        unit-test, its main() function, guarded by #ifdef TESTMODULE. `make test` will
        compile and execute all the unit tests, producing copious output, but importantly
        exiting with an appropriate success or failure code, so the `make test` command
        will fail if any of the tests fail.

        Module TOC
        __________
        test  obj       src       header    structures                 CONSTANTS
        ----  ---       ---       ---       --------------------
        m     m.o       m.c       m.h       mfile mtab                 TABSZ
        s     s.o       s.c       s.h       stack                      STACKSEGSZ
        v     v.o       v.c       v.h       saverec_
              f.o       f.c       f.h       file
        ob    ob.o      ob.c      ob.h      object
        ar    ar.o      ar.c      ar.h      array
        st    st.o      st.c      st.h      string
        di    di.o      di.c      di.h      dichead dictionary
        nm    nm.o      nm.c      nm.h      name
        gc    gc.o      gc.c      gc.h      garbage collector
        itp             itp.c     itp.h     context
              osunix.o  osunix.c  osunix.h  unix-dependent functions

    It's compiled by a tricky bit of makefile,

        m: m.c ob.h ob.o err.o $(CORE) itp.o $(OP)
                cc $(CFLAGS) -DTESTMODULE $(LDLIBS) -o $@ $< err.o ob.o s.o ar.o st.o v.o di.o gc.o nm.o itp.o $(OP) f.o

    where the module is compiled with its own C file plus every other object file except itself. But it's creating difficulties for the kindly programmer who offered to write the Autotools files for me. So the obvious way to make it "less weird" would be to bust out all the main functions into separate source files. But, but... do I gotta?

    Read the article

  • Updated sp_indexinfo

    - by TiborKaraszi
    It was time to give sp_indexinfo some love. The procedure is meant to be the "ultimate" index information procedure, providing lots of information about all indexes in a database, or all indexes for a certain table. Here is what I did in this update: Changed the second query, which retrieves missing-index information, so that it generates the index name (based on schema name, table name and column names - limited to 128 characters). Re-arranged and shortened column names to make the output more compact and more...(read more)

    Read the article
