Search Results

Search found 22065 results on 883 pages for 'performance testing'.

Page 491 of 883

  • Setting up web page width

    - by RPK
    I am new to web design. I want to set the page width so that the page displays well on an 800x600 screen. I normally use tables, but I have read that excessive use of tables slows down a website. What else can I use, and how do I set the width?

  • What is the best python module skeleton code?

    - by user213060
    == Subjective Question Warning ==
    Looking for well-supported opinions or supporting evidence. Let us assume that skeleton code can be good. If you disagree with the very concept of module skeleton code then fine, but please refrain from repeating that opinion here. Many Python IDEs will start you with a template like:

        print 'hello world'

    That's not enough... So here's my skeleton code to get this question started.

    My Module Skeleton, Short Version:

        #!/usr/bin/env python
        """
        Module Docstring
        """

        #
        ## Code goes here.
        #

        def test():
            """Testing Docstring"""
            pass

        if __name__ == '__main__':
            test()

    and, My Module Skeleton, Long Version:

        #!/usr/bin/env python
        # -*- coding: ascii -*-
        """
        Module Docstring
        Docstrings: http://www.python.org/dev/peps/pep-0257/
        """

        __author__ = 'Joe Author ([email protected])'
        __copyright__ = 'Copyright (c) 2009-2010 Joe Author'
        __license__ = 'New-style BSD'
        __vcs_id__ = '$Id$'
        __version__ = '1.2.3'  # Versioning: http://www.python.org/dev/peps/pep-0386/

        #
        ## Code goes here.
        #

        def test():
            """Testing Docstring"""
            pass

        if __name__ == '__main__':
            test()

    Notes:

    ===MODULE TYPE===
    Since the vast majority of my modules are "library" types, I have constructed this example skeleton as such. For modules that act as the main entry point for running the full application, you would make changes such as running a main() function instead of the test() function in __main__.

    ===VERSIONING===
    The following practice, specified in PEP 8, no longer makes sense:

        __version__ = '$Revision: 1.2.3 $'

    for two reasons: (1) distributed version control systems make it necessary to include more than just a revision number, e.g. author name and revision number; (2) it's a revision number, not a version number. Instead, the __vcs_id__ variable is being adopted. This expands to, for example:

        __vcs_id__ = '$Id: example.py,v 1.1.1.1 2001/07/21 22:14:04 goodger Exp $'

    ===VCS DATE===
    Likewise, the date variable has been removed:

        __date__ = '$Date: 2009/01/02 20:19:18 $'

    ===CHARACTER ENCODING===
    If the coding is explicitly specified, then it should be set to the default setting of ascii. This can be modified if necessary (rarely in practice). Defaulting to utf-8 can cause anomalies with editors that have poor Unicode support.

    There are a lot of PEPs that put forward coding style recommendations. Am I missing any important best practices? What is the best Python module skeleton code?

    Update: Show me any kind of "best" that you prefer. Tell us what metrics you used to qualify "best".

  • Rails 3 on dreamhost?

    - by p33t3r
    I'd like to deploy a small Rails 3 app on Dreamhost (just for testing purposes, nothing serious) and I am wondering if anyone has done it already... Please choose one of:

      - I did it and it's super easy, here's how: ...
      - Though I didn't try it, it should be easy, here's how: ...
      - It's quite complicated, but this should get you started: ...
      - NO WAI!!!1!one!1 Set it up on Slicehost or another non-shared host, or you'll die a painful death trying to force it onto DH.

    Thoughts?

  • KPI definition asking for advice

    - by George2
    Hello everyone, I am looking for some references/samples on how to define good KPIs for the education industry. I want to define KPIs from a school/department management perspective, to measure various aspects of school performance (students, faculty, others). Any advice, references or documents are appreciated -- even more so if they are in the context of SQL Server 2005/2008. Thanks in advance, George

  • PHP - cURL vs fopen vs fsockopen?

    - by TatMing
    I want to write a WordPress plugin that parses all image sources and checks whether each one is a broken link. My idea is:

      - select all post and page image URLs from MySQL with a regex
      - request each image URL and get just the response header (404, 403, etc.)
      - print a report

    Since I don't actually need to download the binary file, how do cURL, fopen and fsockopen compare performance-wise, and which one is worst to use? And one more question: which of these methods can be run in multiple threads?
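
    One way to check links without downloading the image bodies is to issue HEAD requests and inspect only the status code; in PHP's cURL the equivalent is setting CURLOPT_NOBODY, and curl_multi allows several transfers to run concurrently. Below is a minimal sketch of the idea, written in Python 3 purely for illustration (the function name and URL list are made up):

        # Sketch only: report the HTTP status of each image URL using HEAD requests,
        # so headers are fetched but the binary payload is never transferred.
        from urllib.request import Request, urlopen
        from urllib.error import HTTPError, URLError

        def check_image_links(urls, timeout=5):
            """Return a {url: status-or-error} report without downloading the images."""
            report = {}
            for url in urls:
                try:
                    with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
                        report[url] = resp.status            # e.g. 200
                except HTTPError as err:
                    report[url] = err.code                   # e.g. 403, 404
                except URLError as err:
                    report[url] = str(err.reason)            # DNS failure, timeout, etc.
            return report

        if __name__ == "__main__":
            for url, status in check_image_links(["http://example.com/a.png"]).items():
                print(url, status)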

  • Getting a full list of the URLs in a Rails application

    - by Laurie Young
    How do I get a complete list of all the URLs that my Rails application could generate? I don't want the routes that I get from rake routes; instead I want the actual URLs corresponding to all the dynamically generated pages in my application... Is this even possible? (Background: I'm doing this because I want a complete list of URLs for some load testing I want to do, which has to cover the entire breadth of the application.)

  • Get methods covered by a unit test

    - by Victor Hurdugaci
    Is it possible to do the following from a Visual Studio 2010 plugin? If yes, how?

      - Run all unit tests in the solution (with code coverage enabled)
      - Wait for all tests to complete
      - For successfully completed tests: determine which methods were called during each test (directly by the test or indirectly by the tested methods)

    What I actually don't know is how to interact with the testing framework...

  • Codeigniter Best Practices for Model functions

    - by user270797
    Say my application has a "Posts" model, and one of its functions is add_post(); it might be something like:

        function add_post($data)
        {
            $this->db->insert('posts', $data);
        }

    where $data is an array:

        $data = array(
            'datetime' => '2010-10-10 01:11:11',
            'title'    => 'test',
            'body'     => 'testing',
        );

    Is this best practice? It means that to use the function you need to know the names of the database fields, whereas my understanding of OOP is that you shouldn't need to know how the method works.

  • Using Robot Framework for ATDD

    - by ratkok
    I would like to hear other people's experience with using Robot Framework for automated acceptance testing. What are its major strengths and weaknesses, and how does it compare with other frameworks (mainly FitNesse and Selenium)? The code that will be tested is real-time, legacy code, mainly in C++.
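
    For context, Robot Framework keywords can be plain Python functions, which is one way to drive legacy C++ code when a Python wrapper for it exists. A hedged sketch of such a keyword library follows; the legacy_bindings module and the keyword names are hypothetical:

        # Hypothetical Robot Framework keyword library: every public function in this
        # module becomes a keyword (underscores map to spaces in test cases).
        import legacy_bindings   # assumed Python wrapper around the legacy C++ code

        def send_command(command):
            """Keyword `Send Command`: forward a command to the system under test."""
            legacy_bindings.send(command)

        def response_should_contain(expected):
            """Keyword `Response Should Contain`: fail if the reply lacks `expected`."""
            reply = legacy_bindings.read_response()
            if expected not in reply:
                raise AssertionError("expected %r in response, got %r" % (expected, reply))

    A test suite would then load this file with a Library setting and call the keywords directly from acceptance-level test cases.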

  • CDNs and domains

    - by Martind
    Hi all! A lot of big websites (Facebook etc.) are setting up CDNs for their content. I notice that these CDNs are not always on the original domain. Example: Facebook pictures are served from "photos-a.ak.fbcdn.net". Why is that? Is there a performance gain in not having lots of subdomains on the "primary" domain (facebook.com)?

  • What's the primary use of Windows Event Viewer?

    - by james.ingham
    Hi all, just wondering what everybody's opinion is of the Windows admin tool Event Viewer. I'm writing a WCF application at the moment and have started logging errors to the Windows Event Log when I handle them. I then started thinking: should I be logging more than just errors, such as when a user has logged in or out, or would you go further and log even more activity? Or is this a tool that's mainly used for testing without using the debugger? Any input appreciated :-)

  • Socket throughput on localhost?

    - by gct
    I've got an app that's using sockets to push data, and I'm currently testing it on my localhost (so that the sender and receiver are on the same computer). I'm seeing between 36 and 66MB/s of throughput, which seems somewhat slow to me. What are normal throughput ranges for binary data on a local socket connection?
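
    For a rough point of comparison, a loopback throughput check can be scripted in a few lines; the figures depend heavily on the per-send buffer size and on how the receiver reads. A sketch in Python (the 64 KB send size and the 512 MB total are arbitrary choices):

        # Rough localhost throughput check: one thread drains the socket while the
        # main thread pushes a fixed amount of data and times the transfer.
        import socket, threading, time

        PAYLOAD = b"x" * 65536            # 64 KB per send(); buffer size affects the result a lot
        TOTAL_BYTES = 512 * 1024 * 1024   # push 512 MB through the loopback socket

        def drain(conn):
            while conn.recv(1 << 20):     # read and discard until the sender closes
                pass

        server = socket.socket()
        server.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
        server.listen(1)
        sender = socket.create_connection(server.getsockname())
        receiver, _ = server.accept()

        reader = threading.Thread(target=drain, args=(receiver,))
        reader.start()

        start = time.perf_counter()
        sent = 0
        while sent < TOTAL_BYTES:
            sender.sendall(PAYLOAD)
            sent += len(PAYLOAD)
        sender.close()
        reader.join()                     # count the time until everything is received
        print("%.1f MB/s" % (sent / (time.perf_counter() - start) / 1e6))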

  • TFS CM resource recommendations / some questions

    - by John
    I am working with a small development shop that consists of a group of 5 developers and 1 QA person. We are using TFS and need to get more sophisticated in how we use this tool.

    Currently the development team checks in their code each evening. A nightly build runs and pushes the output out to a network share. Our QA person uses this build for testing the next day. Sometimes the build off the trunk codebase has issues/bugs that hinder the QA process. It hasn’t been a giant issue in the past, but we now want to get to a state where our QA person is testing on a stable QA build. So I believe we need to create a branch (call it QA); the developers will continue to develop off the trunk, but the QA person will use builds created from code in the QA branch.

    Seems simple enough, but we have started doing code reviews as well. So we have another desire: only code that has been code reviewed can be promoted to the QA branch. Each developer works off a TFS item, and when they check in a changeset, they do it against a TFS item, which creates a link between a checked-in code file and a TFS item. Eventually the TFS item becomes complete and ready for code review. All code attached to the TFS item is reviewed. How can the versions of these files get promoted to the QA branch?

    If a bug is found in the QA branch, we want to fix it there and have the changes migrated back to the trunk. I believe TFS has a way to do this automatically, doesn’t it?

    Long story short, we want to get to a build and CM environment that I believe is pretty standard, but we are unaware of how to make this happen with TFS. Given our situation above, can someone point out a book or website(s) that would address our specific needs? We would like to make this happen without having to get too deep into CM theory or TFS. I very much appreciate any and all suggestions! Thanks, John

  • Asynchronous I/O on Mac OS X

    - by stas
    Hi, with the C10K problem in mind, what is the best way to do asynchronous I/O on Mac OS X (assume it will be used on the Mac and on iPhone/iPad)? On Linux our choice is epoll; on Windows it is I/O completion ports. The top priority is performance and scalability (thousands of connections). Thanks
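
    For reference, the kernel-level counterpart to epoll and I/O completion ports on Mac OS X is kqueue (it is the backend libevent uses on that platform; Grand Central Dispatch is the higher-level native option). A minimal sketch of a kqueue-driven event loop, using Python's select.kqueue wrapper purely for illustration (the module only exists on BSD/macOS, and the echo logic just stands in for real work):

        # Minimal kqueue event loop (kqueue is the BSD/macOS analogue of Linux epoll).
        # A non-blocking echo server is used only to show registration and the wait loop.
        import select, socket

        listener = socket.socket()
        listener.bind(("127.0.0.1", 0))
        listener.listen(128)
        listener.setblocking(False)

        kq = select.kqueue()
        kq.control([select.kevent(listener.fileno(),
                                  filter=select.KQ_FILTER_READ,
                                  flags=select.KQ_EV_ADD)], 0)

        conns = {}
        while True:
            for ev in kq.control(None, 64, 1.0):           # wait up to 1 s for 64 events
                if ev.ident == listener.fileno():          # new incoming connection
                    conn, _ = listener.accept()
                    conn.setblocking(False)
                    conns[conn.fileno()] = conn
                    kq.control([select.kevent(conn.fileno(),
                                              filter=select.KQ_FILTER_READ,
                                              flags=select.KQ_EV_ADD)], 0)
                else:                                      # data (or EOF) on a client socket
                    conn = conns[ev.ident]
                    data = conn.recv(4096)
                    if data:
                        conn.sendall(data)                 # echo back, stands in for real work
                    else:
                        del conns[ev.ident]
                        conn.close()                       # closing the fd drops its kevent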

  • Data aggregation mongodb vs mysql

    - by Dimitris Stefanidis
    I am currently researching a backend to use for a project with demanding data aggregation requirements. The main project requirements are the following:

      - Store millions of records for each user. Users might have more than 1 million entries per year, so even with 100 users we are talking about 100 million entries per year.
      - Data aggregation on those entries must be performed on the fly. Users need to be able to filter the entries by a ton of available filters and then see summaries (totals, averages, etc.) and graphs of the results. Obviously I cannot precalculate any of the aggregation results, because the filter combinations (and thus the result sets) are huge.
      - Users are going to have access to their own data only, but it would be nice if anonymous stats could be calculated across all the data.
      - The data is going to arrive mostly in batches, e.g. the user will upload the data every day and it could be, say, 3000 records. In some later version there could be automated programs that upload every few minutes in smaller batches of 100 items, for example.

    I made a simple test of creating a table with 1 million rows and performing a simple sum of one column, both in MongoDB and in MySQL, and the performance difference was huge. I do not remember the exact numbers, but it was something like MySQL = 200 ms, MongoDB = 20 sec. I have also made the test with CouchDB and had much worse results.

    What seems promising speed-wise is Cassandra, which I was very enthusiastic about when I first discovered it. However, the documentation is scarce and I haven't found any solid examples of how to perform sums and other aggregate functions on the data. Is that possible?

    As it seems from my test (maybe I have done something wrong), with the current performance it's impossible to use MongoDB for such a project, although the automated sharding functionality seems like a perfect fit for it. Does anybody have experience with data aggregation in MongoDB, or have any insights that might help with the implementation of the project? Thanks, Dimitris
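
    For what it's worth, MongoDB releases after this question added an aggregation pipeline that performs exactly this kind of filtered sum/average server-side (at the time, map/reduce and group() were the options, which behave much as described above). A hedged pymongo sketch, with made-up database, collection and field names (reports, entries, user_id, category, amount):

        # Sketch of a filtered sum/average with MongoDB's aggregation pipeline (pymongo).
        # Database, collection and field names are invented for illustration.
        from pymongo import MongoClient

        entries = MongoClient("mongodb://localhost:27017")["reports"]["entries"]

        pipeline = [
            {"$match": {"user_id": 42, "category": "fuel"}},   # the user's chosen filters
            {"$group": {"_id": None,
                        "total":   {"$sum": "$amount"},
                        "average": {"$avg": "$amount"},
                        "count":   {"$sum": 1}}},
        ]
        for row in entries.aggregate(pipeline):
            print(row)    # e.g. {'_id': None, 'total': ..., 'average': ..., 'count': ...}

    An index on the $match fields lets the pipeline narrow the scan before grouping, which matters at the 100-million-row scale described above.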

  • Why use NoSQL over Materialized Views?

    - by JustinT
    There has been a lot of talk recently about NoSQL. The #1 reason I hear for people using NoSQL is that they de-normalize their DBMS data so heavily, to increase performance, that they end up with all of their data in a single table. With materialized views, however, you can keep your data normalized yet have it stored as a single-table view, for the same reasons you'd use NoSQL. So why would someone use NoSQL over materialized views?

  • Oracle sample data problems

    - by Jay
    So, I have this Java-based data transformation / masking tool which I wanted to test out on Oracle 10g. The good part about Oracle 10g is that you get a load of sample schemas, with half a million records in some. The schemas are SH, OE, HR, IX and so on. So, I installed 10g and found out that the installation scripts are under ORACLE_HOME/demo/scripts.

    I customized these scripts a bit to run in batch mode. That solves one half of my requirement: creating source data for testing my data transformation software. The second half of the requirement is that I create the same schemas under different names (TR_HR, TR_OE and so on...) without any data. These schemas would represent my target schemas. So, in short, my software would pick up data from a table in a schema and load it into the same table in a different schema.

    Now, I have two issues in creating my target schemas and emptying them. I would like this to be a batch job, but with the Oracle scripts you get, the sample schema names are not configurable. So, I tried creating a script, replacing OE with TR_OE, HR with TR_HR and so on. However, this approach is kind of irritating because the sample schemas are complicated in the way they are created; Oracle creates synonyms, views, materialized views, data types and a lot of weird stuff.

    I would like the target schemas (TR_HR, TR_OE, ...) to be empty. But some of the schemas have circular references, which would not allow me to delete data. The only workaround seems to be removing certain foreign keys, deleting the data and then adding the constraints back.

    Is there any easy way to do all this, without all this fuss? I would need a complicated data set for my testing (complicated as in tables with triggers, multiple hierarchies... for instance, a child table that has children up to 5 levels, a parent table that refers to an IOT table, and an IOT table that refers to a non-IOT table, etc.). The sample schemas are just about perfect from a data set perspective. The only challenge I see is in automating this whole process of loading up the source schemas, and then creating the target schemas and emptying them. Appreciate your help and suggestions.

  • How to give CSS3 support to IE?

    - by metal-gear-solid
    I use IE7.js, but it doesn't have CSS3 support. I always use jQuery in my projects. What is the best lightweight way to add support for all CSS3 selectors and properties to IE 6, 7 and 8? I'm not asking for HTML5 support, only CSS3 support, in a way that is as light on performance as possible.

  • Asp.net web site and web service hosting

    - by Razvi
    I have a web site and a web service and I would like to host them somewhere. I need them mostly for testing, and so that some friends who are developing applications that use the service have access to it. The service also uses a MS SQL 2008 database. Could you suggest any good free or cheap web hosting service for this? I would need about 1-2 months of hosting.

  • Mobile Safari 5mb HTML5 application cache limit?

    - by JFH
    It's becoming evident in my testing that there's a 5mb size limit on Mobile Safari's implementation of HTML5's application cache. Does anyone know how to circumvent or raise this? Is there some unexposed meta tag that I should know about? I have to cache some video content for an offline app and 5mb is not going to be enough.

  • Are Drupal theme settings cached?

    - by barraponto
    I want to change theme_username, a core theme function that outputs that dreadful "not verified" string for users who are not logged in (when they comment, for example). I want a checkbox in admin/build/themes/settings/MYTHEME to change that. But since that theme function gets called a lot, will it hurt the performance of any site using my theme, or are theme settings cached?

  • Managing Java dependencies in a Grails application?

    - by Stefan Kendall
    I'm trying to move my development from Spring/Maven2/Tomcat to Grails, and I'm wondering if there's an easy way to manage dependencies in Grails separately from Maven. Maven does a lot of the magic that Grails does automatically (unit testing, building, etc.), so I wonder if there's a need for Maven at all in Grails projects. So, then, how do Grails users generally manage Java dependencies? I've become accustomed to central-repository dependency management, and I can't turn back at this point.

  • realtime diagnostics

    - by Ion Todirel
    I have an application with a loop, part of a "Scheduler", which runs at all times and is the heart of the application. It is pretty much like a game loop, except that my application is a WPF application and it's not a game. Naturally the application does logging at many points, but the Scheduler does some sensitive monitoring, and sometimes it's impossible to tell from the logs alone what may have gone wrong (and by wrong I don't mean exceptions) or what the current status is.

    Because the Scheduler's inner loop runs at short intervals, you can't do file-I/O-based logging (or use the Event Viewer) in there. First, you need to watch it in real time, and secondly the log file would grow in size very fast. So I was thinking of ways to show this data to the user in real time. Some things I considered:

      1. Display the data in real time in the UI
      2. Use AllocConsole/WriteConsole to display this information in a console
      3. Use a different console application which would display this information, communicating between the Scheduler and the console app using pipes or other IPC techniques
      4. Use Windows' Performance Monitor and somehow feed it with this information
      5. ETW

    Displaying in the UI would have its issues. First, it doesn't integrate with the UI I had in mind for my application, and I don't want to complicate the UI just for this; this diagnostic output would only be needed rarely. Secondly, there is going to be some non-trivial data protection, as the Scheduler has its own thread.

    A separate console window would probably work, but I'm still worried it might be too much overhead. Allocating my own console, as this is a Windows app, would probably be better than a different console application (3), as I don't need to worry about IPC and non-blocking communication. However, a user could close the console I allocated, which would be problematic; with a separate process you don't have to worry about that.

    Assuming there is an API for Performance Monitor, it wouldn't be well integrated with my app or apparent to the users. Using ETW by itself also doesn't solve anything, just a random idea; I still need to display this information somehow.

    What do others think; are there other ways I missed?

  • Cloud computing?

    - by Shawn H
    I'm an analyst and intermediate programmer working for a consulting company. Sometimes we do intensive computing in Excel, which can be frustrating because we have slow computers. My company does not have enough money to buy everyone new computers right now. Is there a cloud computing service that allows me to log in to a high-performance virtual computer over remote desktop? We are not that technical, so preferably the computer runs Windows and I can run Excel and other applications on it. Thanks
