Search Results

Search found 22065 results on 883 pages for 'performance testing'.

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document DB, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be a pretty bad fit for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have ISBN, authors, title, pages etc., as well as DVDs with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different product types with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most.

    The three commonly used approaches in an RDBMS would be:

    1. An EAV model, which would have pretty bad performance characteristics and would be either impractical or perform even worse if I wanted to display the author of a book in a list of mixed products (think start page, recommended products etc.).
    2. Ignore the column count and put it all in one product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables of more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    3. Create a table for each product type: I personally don't like this approach, as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed in list views etc.). In the app I'll use BookBehaviour / DvdBehaviour wrappers around the product DTO that only expose the relevant properties.

    My questions: Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do this in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with the different behaviour wrappers make sense?
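
    To make the DTO-plus-wrapper idea concrete, here is a minimal C# sketch of what such a product document and behaviour wrapper could look like. All names here (Product, BookDetails, BookBehaviour) are hypothetical, and the classes are driver-agnostic rather than tied to NoRM's actual API:

        // One polymorphic product document; unused detail groups stay null.
        public class Product
        {
            public string Id { get; set; }
            public string Title { get; set; }        // common property, stored once
            public decimal Price { get; set; }
            public BookDetails Book { get; set; }    // null unless the product is a book
            public DvdDetails Dvd { get; set; }      // null unless the product is a DVD
        }

        public class BookDetails
        {
            public string Isbn { get; set; }
            public string[] Authors { get; set; }
            public int Pages { get; set; }
        }

        public class DvdDetails
        {
            public int PlayTimeMinutes { get; set; }
            public string[] Directors { get; set; }
        }

        // Behaviour wrapper: exposes only the properties relevant to books.
        public class BookBehaviour
        {
            private readonly Product _product;
            public BookBehaviour(Product product) { _product = product; }
            public string Title { get { return _product.Title; } }
            public string Isbn { get { return _product.Book.Isbn; } }
        }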

  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve our performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now:

    1. Using memcache for all major queries and leaving it alone to do what it does best.
    2. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though it is a theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes.

    Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache alone and upgrade the memory as the load increases with the number of users? Thanks a lot!
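
    The combined approach described here is essentially a read-through two-tier cache: check the in-memory tier first, fall back to the disk tier, and promote disk hits back into memory. A minimal sketch of that pattern (in C# purely for illustration, since the post concerns PHP; all names are hypothetical):

        using System.Collections.Concurrent;
        using System.IO;

        public class TwoTierCache
        {
            private readonly ConcurrentDictionary<string, string> _memory =
                new ConcurrentDictionary<string, string>();
            private readonly string _diskDir;

            public TwoTierCache(string diskDir) { _diskDir = diskDir; }

            public string Get(string key)
            {
                // Tier 1: in-memory cache (the memcache stand-in).
                string value;
                if (_memory.TryGetValue(key, out value)) return value;

                // Tier 2: disk-stored cache; promote to memory on a hit.
                var path = Path.Combine(_diskDir, key);
                if (File.Exists(path))
                {
                    value = File.ReadAllText(path);
                    _memory[key] = value;
                    return value;
                }

                return null; // miss in both tiers: caller rebuilds from the database
            }
        }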

  • Are there solutions for streamlining the update of legacy code in multiple places?

    - by ccomet
    I'm working in some old code which was originally designed to handle two different kinds of files, and I was recently tasked with adding a new kind of file to this code. Most of my problems were solved by filling out an extensive XML file with a new entry that handled everything from what lists were named to how the file name is written in plural lower case. But this ended up being insufficient, as there were maybe 50 different places in 24 different code files where I had to update hardcoded switch statements that only branched for the original two file types.

    Unfortunately there is no consistency in this: there are methods which operate half from the XML file and half off of hardcoded values. Some of the files which look like they would operate off the XML file don't, and some where I would expect to need to update hardcode don't need it. So the only way to find the majority of these is to run through testing the whole system while only part of it is operational, find the one step to fix (when I'm lucky enough that the error logging actually tells me what is going on), and then run the whole thing again. This wastes time testing the parts of the code which are already confirmed to work, time better spent testing the new parts I have to add on top of it all. It's a hassle and a half, and with my luck I can expect that I will have to add yet another new kind of file in the near future.

    Are there any solutions out there which can aid in this kind of endeavour? Something where I can input some parameters of current features, document what points in a whole code project actually need to be updated, and run the next time I need to add a new feature to the code. It needn't even be fully automated; something that'll help me navigate straight to the specific points in everything, and maybe even record what kind of parameters need to be loaded, would do. It probably doesn't matter specifically, but the code is comprised of ASP.NET pages, some ASP.NET controls, hundreds of C# code files, and a handful of additional XML files, all currently in a couple of big Visual Studio 2008 projects.
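
    One common remedy for the scattered-switch problem described above is to funnel every type-specific decision through a single registry keyed by file type, so that adding a new file kind means registering one handler rather than editing 50 switch statements. A hedged C# sketch (all names hypothetical, not taken from the poster's codebase):

        using System;
        using System.Collections.Generic;

        // Everything a file type needs to customize lives behind one interface.
        public interface IFileTypeHandler
        {
            string PluralLowerCaseName { get; }
            void Write(string path);
        }

        public static class FileTypeRegistry
        {
            private static readonly Dictionary<string, IFileTypeHandler> Handlers =
                new Dictionary<string, IFileTypeHandler>(StringComparer.OrdinalIgnoreCase);

            // New file kinds register here once, instead of being added to
            // hardcoded switches across 24 code files.
            public static void Register(string typeName, IFileTypeHandler handler)
            {
                Handlers[typeName] = handler;
            }

            public static IFileTypeHandler For(string typeName)
            {
                IFileTypeHandler handler;
                if (!Handlers.TryGetValue(typeName, out handler))
                    throw new NotSupportedException("No handler for '" + typeName + "'");
                return handler;
            }
        }

    An unregistered type then fails loudly in exactly one place, which also helps with the complaint above about error logging not saying what went wrong.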

  • HTML converted to jQuery collection not searchable with selectors?

    - by jimp
    I am trying to dynamically load a page using $.get(), parse the result with var $content = $(data), and ultimately use selectors to find only certain parts of the document. But I cannot figure out why the jQuery collection returned from $(data) does not match some very basic selectors. I set up a jsFiddle to illustrate the problem using a very small string of HTML:

        <html>
        <head>
          <title>See Our Events</title>
        </head>
        <body><div id="content">testing</div></body>
        </html>

    I want to find the <title> node:

        var html = "<html>\n"+
                   "<head>\n"+
                   "  <title>See Our Events</title>\n"+
                   "</head>\n"+
                   "<body><div id=\"content\">testing</div></body>\n"+
                   "</html>";
        var $content = $(html);
        console.log($content.find('title').length); // Logs 0. Why?

    If I wrap a <div> around the HTML, then the selector works. (But if you look at the jsFiddle, other variations of the selector still do not work!)

        var html = "<div><html>\n"+
                   "<head>\n"+
                   "  <title>See Our Events</title>\n"+
                   "</head>\n"+
                   "<body><div id=\"content\">testing</div></body>\n"+
                   "</html></div>";
        var $content = $(html);
        console.log($content.find('title').length); // Logs 1.

    Please look at the jsFiddle, too; it contains more examples than my code here, to keep the post easier to read. Why does my otherwise very basic selector not return the title node?

  • Interchange structured data between Haskell and C

    - by Eonil
    First, I'm a Haskell beginner. I'm planning to integrate Haskell with C for a realtime game: Haskell does the logic, C does the rendering. To do this, I have to pass a huge, complexly structured piece of data (the game state) back and forth on each tick (at least 30 times per second), so the passing of data should be lightweight. This state data may lie in a sequential area of memory, and both the Haskell and C parts should be able to access every area of the state freely. In the best case, the cost of passing the data is copying a pointer to memory; in the worst case, it's copying the whole data with conversion.

    I'm reading about Haskell's FFI (http://www.haskell.org/haskellwiki/FFICookBook#Working_with_structs), and the Haskell code looks like it specifies the memory layout explicitly. I have a few questions:

    1. Can Haskell specify the memory layout explicitly (so that it matches a C struct exactly)?
    2. Is this the real memory layout, or is some kind of conversion required (a performance penalty)?
    3. If Q2 is true, is there any performance penalty when the memory layout is specified explicitly?
    4. What's the syntax #{alignment foo}? Where can I find documentation about this?
    5. If I want to pass huge data with the best performance, how should I do that?

    PS: The explicit memory layout feature I mentioned is just C#'s [StructLayout] attribute, which specifies in-memory position and size explicitly: http://www.developerfusion.com/article/84519/mastering-structs-in-c/ I'm not sure Haskell has a linguistic construct matching the fields of a C struct.
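
    For readers unfamiliar with the C# feature referenced in the PS, here is a minimal [StructLayout] sketch (PlayerState is a hypothetical example, not from the post). LayoutKind.Sequential with an explicit Pack keeps the field order and offsets identical to the equivalent C struct, which is the property the poster is asking Haskell to guarantee:

        using System.Runtime.InteropServices;

        // Field order and offsets match the equivalent C struct byte-for-byte,
        // so the same memory can be read from both sides without conversion.
        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        public struct PlayerState
        {
            public float X;       // offset 0
            public float Y;       // offset 4
            public int   Health;  // offset 8
        }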

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint of wanting to make it easier to bring new hires up to speed quickly by having all of this on a checklist we follow when setting them up, and then, while I'm at it, making it easier for new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform-agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed, I'll go ahead and list compiler/IDE as one of the items.

    Here are some ideas for part 1 (edit: I'm keeping this updated based on the better suggestions):

    - Source control access
    - Issue/bug/project tracker
    - System documentation, or references to find the system documentation in source control or in a wiki, including: the build document/environment (covered by this question), design documents / technical notes, coding style guidelines, deploy procedures for review/testing/QA/staging/production, and licensing details for your tools and your product
    - Team calendar, including the project schedule(s), deadlines, vacation time, and the support/on-call schedule (if required)
    - Compiler/IDE
    - Compiler/IDE extensions (things like source control plugins or Visual Studio add-ins)
    - 3rd-party SDKs/toolkits
    - Database connection and tools
    - Testing frameworks
    - Internal libraries
    - Communication tools (chat, wiki, etc.)
    - Static analysis tools (FxCop, FlawFinder, etc.)
    - Virtual machines (holding the dev environment or for testing)
    - Specialized editors (modeling, XML, etc.)
    - Other tools

    What else goes on this list, and how do you document it and vet changes?

  • Plist data OK in iPhone simulator but disappears after installation onto device

    - by user183804
    I am testing an application on a first-generation iPod running OS 3.1.3, with a table view populated from a plist. The application works well in the simulator. Initially, after installation onto the iPod for testing, the table view was blank (scrollable lines present, but empty rows). After searching this site, I found one issue to be a case-sensitivity problem in the name of the plist file ("Data.plist" in resources; "data.plist" in code). When the problem persisted, I again found on this site the advice to "Clean all Targets" prior to installation. I have now found that after performing a "Clean all Targets" operation twice, quitting Xcode in between operations, the problem resolves and the table view appears properly populated after installation. I am wondering whether this issue signifies a corrupted plist file, before I make the application available on the App Store. Re-creating the plist file from scratch and then testing it would likely take several hours of work, so I do not want to do so unless absolutely necessary. Thanks in advance for any help.

  • What are the arguments against the inclusion of server side scripting in JavaScript code blocks?

    - by James Wiseman
    I've been arguing for some time against embedding server-side tags in JavaScript code, but was put on the spot today by a developer who seemed unconvinced. The code in question was in a legacy ASP application, although this is largely unimportant, as it could equally apply to ASP.NET or PHP (for example). The example in question revolved around the use of a constant defined in server-side code:

        'VB
        Const MY_CONST: MY_CONST = 1
        If sMyVbVar = MY_CONST Then
            'Do Something
        End If

        //JavaScript
        if (sMyJsVar === "<%= MY_CONST%>"){
            //DoSomething
        }

    My standard arguments against this are:

    1. Script injection: the server-side tag could include code that breaks the JavaScript code.
    2. Unit testing: it is harder to isolate units of code for testing.
    3. Code separation: we should keep web page technologies apart as much as possible.

    The reason for doing it this way was so that the developer did not have to define the constant in two places. They reasoned that as it was a value they controlled, it wasn't subject to script injection. This reduced my justification for (1) to "We're trying to keep the standards simple, and defining exception cases would confuse people". The unit testing and code separation arguments did not hold water either, as the page itself was a horrible amalgam of HTML, JavaScript, ASP.NET, CSS, XML... you name it, it was there. No code that was ever going to be included in this page could possibly be unit tested.

    So I found myself feeling like a bit of a pedant insisting that the code be changed, given the circumstances. Are there any further arguments that might support my reasoning, or am I in fact being a bit pedantic in this insistence?

  • jQuery / Apache - IE problem

    - by Soldierflup
    I have a button tag on my page with a value:

        <button class='btn' value='value'>show value</button>

    And this jQuery code:

        $('.btn').click(function() {
            var w = 'value = ' + $(this).val() + ' / text = ' + $(this).html();
            alert(w);
        });

    In Firefox there is no problem; the result is OK (it displays: value = value / text = show value). The problem comes with IE8, which displays different results on my testing server and the production server. The testing server is my local machine with a standard XAMPP installation; the production server is a Linux server with Apache, PHP and MySQL. The result from the testing server is OK (it displays the same as Firefox); the result from the production server is not (it displays: value = show value / text = show value). Does anyone have an idea whether it is Apache that causes the error? I know there are some issues with the use of val(), because IE treats it as an attribute and not a value. The problem is that changing the jQuery from val() to attr('value') is quite a lot of work (this implementation is already on a lot of pages), and I think it could be much easier to change something on the web server.

  • Using EhCache for session.createCriteria(...).list()

    - by James Smith
    I'm benchmarking the performance gains from using a 2nd-level cache in Hibernate (enabling EhCache), but it doesn't seem to improve performance. In fact, the time to perform the query slightly increases. The query is:

        session.createCriteria(MyEntity.class).list();

    The entity is:

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
        public class MyEntity {
            @Id @GeneratedValue
            private long id;

            @Column(length = 5000)
            private String data;

            // ---SNIP getters and setters---
        }

    My hibernate.cfg.xml is:

        <!-- all the normal stuff to get it to connect & map the entities, plus: -->
        <property name="hibernate.cache.region.factory_class">
            net.sf.ehcache.hibernate.EhCacheRegionFactory
        </property>

    The MyEntity table contains about 2000 rows. The problem is that before adding in the cache, the query above to list all entities took an average of 65 ms. After the cache, they take an average of 74 ms. Is there something I'm missing? Is there something extra that needs to be done that will increase performance?

  • Using child visitor in C#

    - by Thomas Matthews
    I am setting up a testing component and trying to keep it generic. I want to use a generic Visitor class, but I'm not sure about using descendant classes. Example:

        public interface Interface_Test_Case
        {
            void execute();
            void accept(Interface_Test_Visitor v);
        }

        public interface Interface_Test_Visitor
        {
            void visit(Interface_Test_Case tc);
        }

        public interface Interface_Read_Test_Case : Interface_Test_Case
        {
            uint read_value();
        }

        public class USB_Read_Test : Interface_Read_Test_Case
        {
            void execute() { Console.WriteLine("Executing USB Read Test Case."); }
            void accept(Interface_Test_Visitor v) { Console.WriteLine("Accepting visitor."); }
            uint read_value() { Console.WriteLine("Reading value from USB"); return 0; }
        }

        public class USB_Read_Visitor : Interface_Test_Visitor
        {
            void visit(Interface_Test_Case tc) { Console.WriteLine("Not supported Test Case."); }
            void visit(Interface_Read_Test_Case rtc) { Console.WriteLine("Not supported Read Test Case."); }
            void visit(USB_Read_Test urt) { Console.WriteLine("Yay, visiting USB Read Test case."); }
        }

        // Code fragment
        USB_Read_Test test_case;
        USB_Read_Visitor visitor;
        test_case.accept(visitor);

    What are the rules the C# compiler uses to determine which of the methods in USB_Read_Visitor will be executed by the code fragment? I'm trying to factor out dependencies of my testing component. Unfortunately, my current Visitor class contains visit methods for classes not related to the testing component. Am I trying to achieve the impossible?
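
    One relevant data point on the compiler question: C# overload resolution happens at compile time, against the static (declared) type of the argument, not its runtime type. A small standalone sketch (hypothetical types, separate from the poster's code) illustrates this:

        using System;

        class Base { }
        class Derived : Base { }

        static class OverloadDemo
        {
            static void Show(Base b)    { Console.WriteLine("Show(Base)"); }
            static void Show(Derived d) { Console.WriteLine("Show(Derived)"); }

            static void Main()
            {
                Base x = new Derived();
                // Prints "Show(Base)": the compile-time type of x picks the
                // overload. Recovering the runtime type (double dispatch)
                // requires a cast or a callback such as x.Accept(this).
                Show(x);
            }
        }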

  • Reading part of system file contents with PHP

    - by zx
    Hi, I have a config file in etc/ called 1.conf. Here are the contents:

        [2-main]
        exten => s,1,Macro(speech,"hi {$VAR1} how is your day going?")
        exten => s,4,Macro(dial,2,555555555)
        exten => s,2,Macro(speech,"lkqejqe;j")
        exten => s,3,Macro(speech,"hi there")
        exten => s,5,Macro(speech,"this is a test ")
        exten => s,6,Macro(speech,"testing 2")
        exten => s,7,Macro(speech,"this is a test")
        exten => 7,1,Goto(2-tester2,s,1)
        exten => 1,1,Goto(2-aa,s,1)

        [2-tester]

        [2-aa]
        exten => 1,1,Goto(2-main,s,1)

    How can I read the content between the quotes of a speech macro? For example, from

        exten => s,6,Macro(speech,"testing 2")

    I just want to get "testing 2". Thank you in advance!

  • I need a fast runtime expression parser

    - by Chris Lively
    I need to locate a fast, lightweight expression parser. Ideally I want to pass it a list of name/value pairs (e.g. variables) and a string containing the expression to evaluate. All I need back from it is a true/false value. The types of expressions should be along the lines of:

        varA == "xyz" and varB == 123

    Basically, just a simple logic engine whose expression is provided at runtime.

    UPDATE: At minimum it needs to support ==, !=, >, >=, <, <=. Regarding speed, I expect roughly 5 expressions to be executed per request, and we'll see somewhere in the vicinity of 100 requests a second. Our current pages tend to execute in under 50 ms. Usually there will only be 2 or 3 variables involved in any expression; however, I'll need to load approximately 30 into the parser prior to execution.

    UPDATE 2012/11/5, on performance: We implemented nCalc nearly 2 years ago. Since then we've expanded its use such that we average 40+ expressions covering 300+ variables on postbacks. There are now thousands of postbacks occurring per second with absolutely zero performance degradation. We've also extended it to include a handful of additional functions, again with no performance loss. In short, nCalc met all of our needs and exceeded our expectations.
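
    Since the 2012 update settles on nCalc, here is a minimal usage sketch shaped like the original example (treat the exact API surface as approximate and check the nCalc documentation for the authoritative signatures):

        using NCalc;

        class ExpressionDemo
        {
            static void Main()
            {
                // Build the expression once, bind the name/value pairs,
                // then evaluate down to a boolean.
                var expression = new Expression("varA == 'xyz' and varB == 123");
                expression.Parameters["varA"] = "xyz";
                expression.Parameters["varB"] = 123;

                bool result = (bool)expression.Evaluate(); // true
            }
        }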

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture, and I've obviously done high-level programming, but what I really don't understand is things like:

    - What is happening when I perform a cast?
    - What's the difference in performance if I declare something global as opposed to local?
    - How does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - What's the overhead with pointers?
    - Are locks more efficient than using synchronized?
    - Would creating my own array using int[] be faster than using ArrayList?
    - What are the advantages/disadvantages of declaring a variable volatile?

    I have got a copy of Java Performance Tuning, but it doesn't go down very low, and it contains rather obvious things like suggesting a HashMap instead of an ArrayList so you can map the keys to memory addresses. I want something a bit more computer-sciencey, linking the programming language to what happens at the assembler/hardware level. The reason I'm asking is that I have an interview coming up for a job in high-frequency trading, where everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance.

  • PHP hashing function not working properly

    - by Jordan Foreman
    So I read a quick article on securing PHP login systems and was trying to more or less duplicate their hashing method, but during testing I'm not getting the proper output. Here is my code:

        function decryptPassword($pw, $salt){
            $hash = hash('sha256', $salt . hash('sha256', $pw));
            return $hash;
        }

        function encryptPassword($pw){
            $hash = hash('sha256', $pw);
            $salt = substr(md5(uniqid(rand(), true)), 0, 3);
            $hash = hash('sha265', $salt . $hash);
            return array(
                'salt' => $salt,
                'hash' => $hash
            );
        }

    And here is my testing code:

        $pw = $_GET['pw'];
        $enc = encryptPassword($pw);
        $hash = $enc['hash'];
        $salt = $enc['salt'];
        echo 'Pass: ' . $pw . '<br />';
        echo 'Hash: ' . $hash . '<br />';
        echo 'Salt: ' . $salt . '<br />';
        echo 'Decrypt: ' . decryptPassword($hash, $salt);

    The output of this should be pretty obvious, but unfortunately the $hash variable always comes out empty! I'm trying to figure out what the problem could be, and my only guess is the second $hash assignment in the encryptPassword() function. After a little testing, I've determined that the first assignment works fine, but the second does not. Any suggestions? Thanks SO!

  • lapply slower than for-loop when used for a BiomaRt query. Is that expected?

    - by ptocquin
    I would like to query a database using the biomaRt package. I have loci and want to retrieve some related information, let's say the description. I first tried to use lapply, but was surprised by the time needed to perform the task. I then tried a more basic for-loop and got a faster result. Is that expected, or is something wrong with my code or with my understanding of apply? I read other posts dealing with *apply vs for-loop performance (here, for example) and was aware that improved performance should not be expected, but I don't understand why performance here is actually lower. Here is a reproducible example.

    1) Loading the library and selecting the database:

        library("biomaRt")
        athaliana <- useMart("plants_mart_14")
        athaliana <- useDataset("athaliana_eg_gene", mart = athaliana)

    2) Querying the database:

        loci <- c("at1g01300", "at1g01800", "at1g01900", "at1g02335",
                  "at1g02790", "at1g03220", "at1g03230", "at1g04040",
                  "at1g04110", "at1g05240")

    I create a function for use in lapply:

        foo <- function(loci) {
            getBM("description", "tair_locus", loci, athaliana)
        }

    When I use this function on the first element:

        > system.time(foo(cwp_loci[1]))
        utilisateur     système      écoulé
              0.020       0.004       1.599

    When I use lapply to retrieve the data for all values:

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.220       0.000      16.376

    I then created a new function using a for-loop:

        foo2 <- function(loci) {
            for (i in loci) {
                getBM("description", "tair_locus", loci[i], athaliana)
            }
        }

    Here is the result:

        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.204       0.004      10.919

    Of course, this will be applied to a big list of loci, so the best-performing option is needed. Thank you for your assistance.

    EDIT: Following the recommendation of @MartinMorgan, simply passing the vector loci to getBM greatly improves the query efficiency. Simpler is better.

        > system.time(lapply(loci, foo))
        utilisateur     système      écoulé
              0.236       0.024     110.512
        > system.time(foo2(loci))
        utilisateur     système      écoulé
              0.208       0.040     116.099
        > system.time(foo(loci))
        utilisateur     système      écoulé
              0.028       0.000       6.193

  • VB6 ActiveX exe - what is the proper registration sequence?

    - by Timbuck
    I have recently updated a Visual Basic 6 application that is an ActiveX EXE, running on Windows XP. I have a couple of testers for this application who have received a copy of the EXE and are attempting to run it. However, they are getting the error message "Unexpected error; quitting" when trying to do so. A key difference between their testing and mine is that on the machines I tested on, I have admin rights and was able to register the application using the appname.exe /regserver command line. The details at MS Support about file registration appear unclear:

        Visual Basic ActiveX EXE files register themselves the first time you
        run the EXE. However, you cannot use the EXE as a COM server until it
        is registered.

    So does this mean that after the first time the users run the EXE, the application should be correctly registered, and the error I am seeing is a sign of something other than an incorrectly registered application? Or does this mean that the application will not work properly until the file is explicitly registered using the appname.exe /regserver command line? NB: during a production distribution, the software would be sent out to client PCs using Systems Management Server, which isn't an option for this testing.

  • Should this work?

    - by Noah Roberts
    I am trying to specialize a metafunction upon a type that has a function pointer as one of its parameters. The code compiles just fine, but it will simply not match the type.

        #include <iostream>
        #include <boost/mpl/bool.hpp>
        #include <boost/mpl/identity.hpp>

        template <
            typename CONT, typename NAME, typename TYPE,
            TYPE (CONT::*getter)() const,
            void (CONT::*setter)(TYPE const&)
        >
        struct metafield_fun {};

        struct test_field {};

        struct test
        {
            int testing() const { return 5; }
            void testing(int const&) {}
        };

        template <typename T>
        struct field_writable : boost::mpl::identity<T> {};

        template <
            typename CONT, typename NAME, typename TYPE,
            TYPE (CONT::*getter)() const
        >
        struct field_writable< metafield_fun<CONT, NAME, TYPE, getter, 0> >
            : boost::mpl::false_ {};

        typedef metafield_fun<test, test_field, int, &test::testing, 0> unwritable;

        int main()
        {
            std::cout << typeid(field_writable<unwritable>::type).name() << std::endl;
            std::cin.get();
        }

    The output is always the type passed in, never bool_.

  • Phonegap: Will my mobile app 'feel' faster or slower once ported to phonegap?

    - by user15872
    So I'm designing everything in mobile Safari, and I know that PhoneGap is essentially a stripped webview, but...

    Question: Will my application run better in PhoneGap? (revised below) a) I imagine my navigation and core app will load faster, as the scripts and images are on the hard drive. Is this true? b) I assume, since they've been working on it for 2 years now, that they may have made some optimizations to make it quicker than just an average Safari window. Is this true? (Assuming both HTML5/JS/CSS code bases are pretty much the same and the app is running on iOS.)

    Update: Sorry, I meant to compare apples to slightly different apples. Question 1 revised: Will my app see any performance benefits running in a PhoneGap environment vs standard mobile Safari? (comparing mobile to mobile) 1b) In what ways, other than loading time, has PhoneGap optimized performance over standard mobile Safari?

    Follow-ups: 1) Are there any pitfalls, other than large libraries, that may cause PhoneGap to suffer a serious performance hit vs standard mobile Safari? 2) Can I mix native and webview rendering? (i.e. the top half of my app rendered with HTML/CSS/JS and the bottom half native)

  • SQL Server Querying An XML Field

    - by Gavin Draper
    I have a table that contains some metadata in an XML field. For example:

        <Meta>
          <From>[email protected]</From>
          <To>
            <Address>[email protected]</Address>
            <Address>[email protected]</Address>
          </To>
          <Subject>Subject Goes Here</Subject>
        </Meta>

    I want to be able to query this field and return the following results (one row per address):

        From                 To                   Subject
        [email protected]    [email protected]    Subject Goes Here
        [email protected]    [email protected]    Subject Goes Here

    I've written the following query:

        SELECT
            MetaData.query('data(/Meta/From)') AS [From],
            MetaData.query('data(/Meta/To/Address)') AS [To],
            MetaData.query('data(/Meta/Subject)') AS [Subject]
        FROM Documents

    However, this only returns one record for that XML field, combining both addresses into one result:

        From                 To                                        Subject
        [email protected]    [email protected] [email protected]    Subject Goes Here

    Is it possible to split these onto separate records? Thanks, Gav

  • Is there any possibility of rendering differences between Firefox 3 and 3.5 and IE 7 and 8, even if

    - by metal-gear-solid
    I'm making a site for a European client, and he said Firefox 3 and IE 7 and 8 have more desktop users than the other browsers in Europe (http://gs.statcounter.com/#browser_version-eu-monthly-200812-201001-bar). I only have IE 7 and Firefox 3.5.7 installed on my PC. Should I download portable Firefox 3.0 and test in it too, even if I'm not using any new CSS property/selector that is only supported in Firefox 3.5, or would testing in 3.5.7 be enough? And for IE, would testing in IE 7 be enough, or should I check my site in IE8 (downloading the VPC image of IE8 and testing in a VM), even if I'm not using any new CSS property/selector that is only supported in IE8? Or is it necessary to use <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" /> in the <head>? But what will happen when the user switches compatibility mode to IE8's default rendering mode? Can we make a site compatible with both IE 7 and 8 without using <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />? If yes, what special care do we need to take in the CSS to make the site identical in both?

  • Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of:

    - Performance: is there an advantage to using a smaller n when selecting, filtering and sorting the data?
    - Memory, including on the application side (C++)?
    - Style/validation: how important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)?
    - Anything else?

    Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practice" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data whose real maximum size will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.

  • Dealing with large number of text strings

    - by Fadrian
    My project, when it is running, will collect a large number of string text blocks (about 20K of them in a typical run; the largest count I have seen is about 200K) in a short span of time and store them in a relational database. Each of the strings is relatively small, averaging about 15 short lines (about 300 characters). The current implementation is in C# (VS2008) on .NET 3.5, and the backend DBMS is MS SQL Server 2005. Performance and storage are both important concerns for the project, but performance is the first priority, then storage. I am looking for answers to these:

    1. Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage?
    2. Do you know what the best compression algorithm/library would be for this context, giving the best performance? Currently I just use the standard GZip in the .NET Framework.
    3. Do you know any best practices for dealing with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET Framework (it is a big project, and this requirement is only a small part of it).

    EDITED (I will keep adding to this to clarify points raised): I don't need text indexing or searching on these texts; I just need to be able to retrieve them at a later stage for display as a text block, using the primary key. I have a working solution implemented as above, and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly; hence, every optimization I can do will help.
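
    Since the poster mentions using the standard GZip support in the .NET Framework, here is a minimal sketch of that approach: round-tripping a text block through GZipStream so the compressed bytes can be stored in a binary column (the helper names are hypothetical):

        using System.IO;
        using System.IO.Compression;
        using System.Text;

        public static class TextBlockCompressor
        {
            // Compress a text block to bytes before storing it in the database.
            public static byte[] Compress(string text)
            {
                var raw = Encoding.UTF8.GetBytes(text);
                using (var output = new MemoryStream())
                {
                    using (var gzip = new GZipStream(output, CompressionMode.Compress))
                        gzip.Write(raw, 0, raw.Length);
                    return output.ToArray();
                }
            }

            // Decompress on retrieval by primary key.
            public static string Decompress(byte[] stored)
            {
                using (var input = new GZipStream(new MemoryStream(stored), CompressionMode.Decompress))
                using (var reader = new StreamReader(input, Encoding.UTF8))
                    return reader.ReadToEnd();
            }
        }

    One caveat worth measuring: GZip adds a fixed header/footer of roughly 18 bytes per stream, so at ~300 characters per block the savings can be modest; testing compression on real data is the only reliable guide.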

  • What are good examples of perfectly acceptable approaches to development that are NOT test driven development (TDD)?

    - by markbruns
    The TDD cycle is test, code, refactor, (repeat), and then ship. TDD implies development that is driven by testing: specifically, understanding the requirements and then writing tests first, before developing or writing code. My natural inclination is a philosophical bias in favor of TDD; I would like to be convinced that there are other approaches that work as well as or even better than TDD, so I have asked this question. What are examples of perfectly acceptable approaches that are NOT test-driven development?

    I can think of plenty of approaches that are not TDD but could be a lot more trouble than they are worth; it's not a moral judgement, it's just that they cost more than they are worth. The following are simply examples of things that might be OK as learning exercises, but approaches I'd find NOT acceptable in serious production, and NOT TDD, might include:

    - Inspecting quality into your product: focusing efforts on developing a proficiency in testing/QA can be problematic, especially if you don't work on the requirements and development side first. Symptoms of this include bug triaging, where the developers have so many different bugs to deal with that a form of triage becomes necessary; each development cycle gets worse and worse, programmers work more and more hours, sleep less and less, and struggle to keep going in a death march until they are consumed.
    - Superstition, i.e. believing in things that you don't understand: this would involve borrowing code that you believe has been proven or tested from somewhere, e.g. legacy code, a magic code-starter wizard or an open source project, and then hacking up a storm of modifications, sliding Facebook Connect into your user interface, inventing some new magic features on the fly (e.g. a mashup using the Twitter API, the Google Maps API and maybe the Zappos API), showing off your cool new "product" to a few people, and then writing up a simple "specification" and a list of "test cases" and turning that over to Mechanical Turk for testing.
