Search Results

Search found 22065 results on 883 pages for 'performance testing'.

  • What are the pros and cons to keeping SQL in Stored Procs versus Code

    - by Guy
    What are the advantages/disadvantages of keeping SQL in your C# source code versus in stored procs? I've been discussing this with a friend on an open-source project that we're working on (a C# ASP.NET forum). At the moment, most of the database access is done by building the SQL inline in C# and calling the SQL Server DB. So I'm trying to establish which would be best for this particular project. So far I have:

    Advantages of SQL in code:
    - Easier to maintain - don't need to run a SQL script to update queries
    - Easier to port to another DB - no procs to port

    Advantages of stored procs:
    - Performance
    - Security
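
    One note on the Security point: inline SQL is only injection-prone when values are concatenated into the query string. Below is a minimal sketch of the parameterized inline style (shown in JDBC because the idea is language-neutral; the posts table and its columns are invented for illustration):

        import java.sql.*;

        // Parameterized inline SQL: values are bound, never concatenated,
        // which closes the injection hole usually cited under "Security".
        class InlineSql {
            static void printTitles(Connection conn, int forumId) throws SQLException {
                String sql = "SELECT id, title FROM posts WHERE forum_id = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setInt(1, forumId); // bound as a parameter
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString("title"));
                        }
                    }
                }
            }
        }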

  • Android EditText and addTextChangedListener

    - by Alex
    I'm currently porting a database manager to Android, and for performance reasons I'd like to update only properties that have been modified. I'm trying to do this with addTextChangedListener in order to add modified entries to a List, but my program never enters any of its methods.

        EditText Et = (EditText) Editors.get(Prop.Name);
        Et.addTextChangedListener(new TextWatcher() {
            @Override
            public void afterTextChanged(Editable s) {
                // no-op
            }

            @Override
            public void beforeTextChanged(CharSequence s, int start, int count, int after) {
                // no-op
            }

            @Override
            public void onTextChanged(CharSequence s, int start, int before, int count) {
                if (Prop.GetType() == Property.PROPTYPE.num) {
                    float f = Float.parseFloat(s.toString());
                    Prop.FromString(f);
                } else {
                    Prop.FromString(s.toString());
                }
                propertiesToUpdate.add(Prop);
            }
        });
        Et.setText(Prop.ToString());

  • What messaging technologies in Windows CE for guaranteed msg delivery?

    - by Aidanapword
    All, we are building a Windows CE (6.0 R3) based device that requires guaranteed and audit-ready message delivery (including store & forward) up to and down from the cloud. I have been looking for choices beyond:

    - MSMQ
    - a proprietary solution (what our prototype device is using)
    - AMQP (research on using this in our context is now starting)

    Are there any others? We will be transporting sensitive data (who isn't?!?!) over a public network, and large-scale options are required. Anything running on an embedded device will be performance sensitive too. Thanks! Aidanapword

  • mod_rewrite in conjunction with "options indexes"

    - by Travis
    I have a directory ("files") where sub-directories and files are going to be created and stored over time. The directories also need to deliver a directory listing, using "options indexes", but only if a user is authenticated, and authorized. I have that part built, and working, by doing the following: <Directory /var/www/html/files> Options Indexes IndexOptions FancyIndexing SuppressHTMLPreamble HeaderName /includes/autoindex/auth.php </Directory> Now I need to take care of file delivery. To force authentication for files, I have built the following: RewriteCond %{REQUEST_URI} -f RewriteRule /files/(.*) /auth.php I also tried: RewriteCond %{REQUEST_URI} !-d RewriteRule /files/(.*) /auth.php Both directives are redirecting to auth.php when I request: foo.com/files/bar/ foo.com/files/bar/baz I am outputting the SERVER global on auth.php during testing and it is showing the requests as I made them (I thought Apache may have been doing something behind the scenes by adding something like "index.html" to the end with "Options Indexes" being on). Ideas?

  • Facebook: "URL could not be liked because it has been blocked" error in iOS app

    - by Andrew Madsen
    I'm working on an iOS app that allows the user to like a Facebook page within the app. I've implemented this using FacebookLikeView. During the course of testing this functionality, I've liked and unliked the same page multiple times. Unfortunately, this seems to have triggered Facebook's spam detection. Now, when trying to like a page using the like button displayed by FacebookLikeView, the following error is presented: "URL could not be liked because it has been blocked". Based on reports of the same problem found by searching the web, I've filled out this form to request that Facebook remove the block. However, I've received no response from them. I'm not sure how to proceed. Has anyone else run into this issue and successfully solved it?

  • How to mark a single element as empty in a float array

    - by Vineeth Mohan
    I have a large float (primitive) array, and not every element in the array is filled. How can I mark a particular element as EMPTY? I understand this can be achieved with some special sentinel value, but I would still like to know the standard way. And even if I use a special value, how do I handle the situation where an actual data item equals that value? In short, my question is how to implement the NULL feature in a primitive-type array in Java. PS - The reason why I am not using the Float object is to achieve high memory and speed performance. Thanks Vineeth
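
    One common convention, sketched below under the assumption that NaN never occurs as legitimate data, is to use Float.NaN as the empty marker; if NaN can appear as real data, a side BitSet tracking filled slots is the usual fallback:

        import java.util.Arrays;

        // A minimal sketch: Float.NaN as the "empty" sentinel in a primitive array.
        public class SparseFloats {
            public static void main(String[] args) {
                float[] values = new float[1000];
                Arrays.fill(values, Float.NaN);   // mark every slot as empty up front

                values[42] = 3.14f;               // fill one slot

                // NaN != NaN, so an == test can never match; use Float.isNaN instead.
                System.out.println(Float.isNaN(values[7]));    // true: still empty
                System.out.println(!Float.isNaN(values[42]));  // true: filled
            }
        }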

  • Why is the 'if' statement considered evil?

    - by Vadim
    I just came from the Simple Design and Testing Conference. In one of the sessions we were talking about evil keywords in programming languages. Corey Haines, who proposed the subject, was convinced that the if statement is absolute evil. His alternative was to create functions with predicates. Can you please explain to me why if is evil? I understand that you can write very ugly code by abusing if, but I don't believe that it's that bad.
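
    For context, the usual "anti-if" illustration (a made-up shapes example, not Haines's own) replaces a conditional on a type code with polymorphism, so each variant carries its own behavior:

        // Before: behavior chosen by a conditional on a type code.
        class AreaWithIf {
            static double area(String shape, double size) {
                if (shape.equals("circle")) {
                    return Math.PI * size * size;
                }
                return size * size;
            }
        }

        // After: each variant owns its behavior; no if at the call site.
        interface Shape {
            double area();
        }

        class Circle implements Shape {
            private final double radius;
            Circle(double radius) { this.radius = radius; }
            public double area() { return Math.PI * radius * radius; }
        }

        class Square implements Shape {
            private final double side;
            Square(double side) { this.side = side; }
            public double area() { return side * side; }
        }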

  • Can I allow a write access to a particular registry key without elevation?

    - by 280Z28
    I develop an extension for Visual Studio 2005, 2008, and 2010. The Visual Studio 2005 SDK requires write access to the following registry key during builds. The normal way to handle this is to run Visual Studio with elevated privileges. The entire issue can be avoided if there's some way I can set permissions to allow access to this particular registry key without elevation:

        HKLM\SOFTWARE\Microsoft\VisualStudio\8.0Exp

    Side note: this key is only used for testing Visual Studio 2005 extensions. The issue does not occur on client machines, so this is just a workaround for my own development machine.

  • Checking whether an object exists in StructureMap container

    - by Kevin Pang
    I'm using StructureMap to handle the creation of NHibernate's ISessionFactory and ISession. I've scoped ISessionFactory as a singleton so that it's only created once for my web app, and I've scoped ISession as a hybrid so that it will only be opened once per web request. I want to make sure that at the end of each web request, I properly dispose of the ISession if it was created for that web request. I figured I could put some code in my Application_EndRequest routine to first check if an ISession was created, and if so, call ISession.Dispose. My current workaround is to just open an ISession on Application_BeginRequest and then dispose of it on Application_EndRequest, but that seems somewhat wasteful, in that static file requests for images, CSS files, and whatnot will create an ISession without ever using it. I know that the overall performance hit is negligible since ISessions are very lightweight, but it's getting annoying seeing all those ISessions being created inside NHProf.

  • Is VS2010 Premium Worth the Price?

    - by WindyCityEagle
    I know this is somewhat subjective, but I can't find an honest answer anywhere; everything concerning VS2010 is Microsoft marketing material. Our small group is going to upgrade to VS2010 (mostly for F# and the new threading features), but we can't decide between the Professional and Premium versions. The integrated testing features in Premium sound good, but I can't figure out if they're worth the 10x increase in cost between the two versions (Professional is ~549, Premium is ~5400). Has anyone been faced with a similar decision? What swayed you one way or the other?

  • Checking if records exist in DB, in a single step or 2 steps?

    - by Sinan
    Suppose you want to get a record from the database which returns large data and requires multiple joins. So my question is: is it better to use a single query that both checks whether the data exists and returns the result if it does? Or to do a simpler query to check whether the data exists first, and then, knowing it exists, query once again to get the result? Example: 3 tables a, b and ab (junction table):

        select * from a, b, ab where condition and condition and condition and condition etc...

    or

        select id from a, b, ab where condition

    then, if the row exists, do the query above. So I don't know if there is any reason to do the second. Any ideas how this affects DB performance, or does it matter at all?
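
    For illustration, a sketch of the single-step approach in JDBC terms (the join columns are invented, and the conditions are reduced to a key lookup): the empty result set from the full query already answers the existence question, so the extra probe query mostly just adds a second round trip.

        import java.sql.*;

        // One round trip: an empty result set itself answers "does it exist?".
        class FetchOnce {
            static void fetch(Connection conn, int key) throws SQLException {
                String sql = "SELECT a.*, b.* FROM a " +
                             "JOIN ab ON ab.a_id = a.id " +
                             "JOIN b ON b.id = ab.b_id " +
                             "WHERE a.id = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setInt(1, key);
                    try (ResultSet rs = ps.executeQuery()) {
                        if (!rs.next()) {
                            return;          // no such record; a second query adds nothing
                        }
                        do {
                            // process the current row
                        } while (rs.next());
                    }
                }
            }
        }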

  • Ajax call in a jQuery plugin not working properly

    - by Saneef
    I'm trying to create a jQuery plugin; inside it I need to do an AJAX call to load an XML file.

        jQuery.fn.imagetags = function(options) {
            s = jQuery.extend({
                height: null,
                width: null,
                url: false,
                callback: null,
                title: null
            }, options);

            return this.each(function() {
                obj = $(this);

                // Initialising the placeholder
                $holder = $('<div />')
                    .width(s.width).height(s.height)
                    .addClass('jimageholder')
                    .css({ position: 'relative' });
                obj.wrap($holder);

                $.ajax({
                    type: "GET",
                    url: s.url,
                    dataType: "xml",
                    success: function(data) {
                        initGrids(obj, data, s.callback, s.title);
                    },
                    error: function(data) {
                        alert("Error loading Grid data.");
                    }
                });

                function initGrids(obj, data, callback, gridtitle) {
                    if (!data) {
                        alert("Error loading Grid data");
                    }
                    $("gridlist gridset", data).each(function() {
                        var gridsetname = $(this).children("setname").text();
                        var gridsetcolor = "";
                        if ($(this).children("color").text() != "") {
                            gridsetcolor = $(this).children("color").text();
                        }
                        $(this).children("grid").each(function() {
                            var gridcolor = gridsetcolor;
                            // This colour will override colour set for the grid set
                            if ($(this).children("color").text() != "") {
                                gridcolor = $(this).children("color").text();
                            }
                            // addGrid(gridsetname, id, x, y, height, width)
                            addGrid(
                                obj,
                                gridsetname,
                                $(this).children("id").text(),
                                $(this).children("x").text(),
                                $(this).children("y").text(),
                                $(this).children("height").text(),
                                $(this).children("width").text(),
                                gridcolor,
                                gridtitle
                            );
                        });
                    });
                }

                function addGrid(obj, gridsetname, id, x, y, height, width, color, gridtitle) {
                    // To compensate for the 2px border
                    height -= 4;
                    width -= 4;
                    $grid = $('<div />')
                        .addClass(gridsetname)
                        .attr("id", id)
                        .addClass('gridtag')
                        .imagetagsResetHighlight()
                        .css({
                            "bottom": y + "px",
                            "left": x + "px",
                            "height": height + "px",
                            "width": width + "px"
                        });
                    if (gridtitle != null) {
                        $grid.attr("title", gridtitle);
                    }
                    if (color != "") {
                        $grid.css({ "border-color": color });
                    }
                    obj.after($grid);
                }
            });
        }

    I bind the above plugin to 2 DOM objects, and it loads two separate XML files, but the callback function is run only on the last DOM object, using both loaded XML files. How can I fix this so that the callback is applied to the corresponding DOM objects? Is the above AJAX call correct? Sample usage:

        <script type="text/javascript">
        $(function() {
            $(".romeo img").imagetags({
                height: 500,
                width: 497,
                url: "sample-data.xml",
                title: "Testing...",
                callback: function(id) {
                    console.log(id);
                }
            });
        });
        </script>
        <div class="padding-10 min-item background-color-black">
            <div class="romeo"><img src="images/samplecontent/test_500x497.gif" alt="Image"></div>
        </div>

        <script type="text/javascript">
        $(function() {
            $(".romeo2 img").imagetags({
                height: 500,
                width: 497,
                url: "sample-data2.xml",
                title: "Testing...",
                callback: function(id) {
                    console.log(id);
                }
            });
        });
        </script>
        <div class="padding-10 min-item background-color-black">
            <div class="romeo2"><img src="images/samplecontent/test2_500x497.gif" alt="Image"></div>
        </div>

    Here is the sample XML data:

        <?xml version="1.0" encoding="utf-8"?>
        <gridlist>
            <gridset>
                <setname>gridset4</setname>
                <color>#00FF00</color>
                <grid>
                    <color>#FF77FF</color>
                    <id>grid2-324</id>
                    <x>300</x>
                    <y>300</y>
                    <height>60</height>
                    <width>60</width>
                </grid>
            </gridset>
            <gridset>
                <setname>gridset3</setname>
                <color>#00FF00</color>
                <grid>
                    <color>#FF77FF</color>
                    <id>grid2-212</id>
                    <x>300</x>
                    <y>300</y>
                    <height>100</height>
                    <width>100</width>
                </grid>
                <grid>
                    <color>#FF77FF</color>
                    <id>grid2-1212</id>
                    <x>200</x>
                    <y>10</y>
                    <height>200</height>
                    <width>10</width>
                </grid>
            </gridset>
        </gridlist>

  • MySQL - are FK's useful / viable in a web app?

    - by yoda
    Hi all, I've encountered this discussion related to FKs and web applications. Basically, some people say that FKs in web applications don't represent a real improvement and can even make the application slower in some cases. What do you guys think? What's your experience? -- A quote from Heikki Tuuri, creator of the InnoDB engine, founder and CEO of Innobase:

    - InnoDB checks foreign keys as soon as a row is updated; no batching is performed and no checks are delayed until transaction commit
    - Foreign keys are often serious performance overhead, but help maintain data consistency
    - Foreign keys increase the amount of row-level locking done and can make it spread to a lot of tables besides the ones directly updated

  • Logging in Scala

    - by IttayD
    In Java, the standard idiom for logging is to create a static variable for a logger object and use that in the various methods. In Scala, it looks like the idiom is to create a Logging trait with a logger member and mix the trait into concrete classes. This means that each time an object is created it calls the logging framework to get a logger, and also that the object is bigger due to the additional reference. Is there an alternative that allows the ease of use of "with Logging" while still using a per-class logger instance? EDIT: My question is not about how one can write a logging framework in Scala, but rather how to use an existing one (log4j) without incurring a performance overhead (getting a reference for each instance) or code complexity. Also, yes, I want to use log4j, simply because I'll use 3rd-party libraries written in Java that are likely to use log4j.
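
    For reference, the per-class Java idiom the first sentence describes looks like this with the log4j 1.x API (the class name is made up): one logger is looked up when the class loads, and every instance shares it with no extra field.

        import org.apache.log4j.Logger;

        public class OrderService {
            // Resolved once per class, shared by all instances.
            private static final Logger log = Logger.getLogger(OrderService.class);

            public void placeOrder() {
                log.debug("placing order");
            }
        }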

  • Using SQL Server for WSS 3.0 instead of Windows Internal database

    - by val
    Hi folks, there are actually two related questions: Is it possible or advisable to use a full-blown stand-alone SQL Server for SharePoint Services WSS 3.0 instead of the supplied Windows Internal Database it comes with? The client I am working for is asking to utilize their existing SQL Server for all WSS content databases, to possibly minimize admin effort and improve performance. As well, would you advise installing WSS on one physical server and the content database on another server? Any gain in performance? Practicality? etc. The default is WSS and all of its databases installed on the same single server. We don't really need a farm setup of MOSS, because the WSS capabilities are enough for our needs. Thanks, Val

  • .Net Logger (Write your own vs log4net/enterprise logger/nlog etc.)

    - by Jack
    I work for an IT department with about 50+ developers. It used to be about 100+ developers but was cut because of the recession. When our department was bigger, an ambitious effort was made to set up a special architecture group. One thing this group decided to do was create our own internal logger. They thought it was such a simple task that we could spend the resources and do it ourselves. Now we are having issues with performance and difficulty viewing the logs generated, and some employees are frustrated that we are spending resources on infrastructure stuff like this instead of focusing on serving our business and using things that already exist, like log4net or Enterprise Logger. Can you assist me in listing reasons why you should not create your own .NET logger? Reasons why you should are also welcome, to get a fair point of view :)

  • What is the design principle behind Servlets being Singletons?

    - by Sandeep Jindal
    A servlet container "generally" creates one instance of a servlet and uses different threads of the same instance to serve multiple requests. (I know this can be changed using the deprecated SingleThreadModel and other features, but this is the usual way.) I thought the simple reason behind this was performance gain, as creating threads is better than creating instances. But it seems this is not the reason. On the other hand, creating instances has the small advantage that developers never have to worry about thread safety. I am trying to understand the reason for this decision over the trade-off of thread safety.
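
    To make the trade-off concrete, here is a minimal sketch (the servlet and its field are hypothetical) of what the shared-instance model pushes onto the developer: instance fields are shared by every request thread, while locals stay per-request.

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class CounterServlet extends HttpServlet {
            private int hits = 0; // shared by all request threads: a data race

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                hits++;               // unsafe: unsynchronized read-modify-write
                int local = hits;     // locals, by contrast, are per-thread
                resp.getWriter().println("hit " + local);
            }
        }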

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage, etc. It's also good when linking it to error-log and performance-counter databases to compare usage with errors, etc. Having implemented this for just one system, and for just the last 2-3 weeks, we already have a 5GB database with around 10 million records. This is making any queries against this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

  • Reading cookies across different hosts

    - by Thinker
    I have two sites; both are my projects. On site two, I need to check if the user is logged in on site one. I supposed that to do this I should just create a script that puts a cookie into the body of an iframe and then reads the iframe contents on site two. But I can't. Here is the code I made for testing purposes: http://jsbin.com/oqaza/edit. I get an error that says: "Permission denied for <http://jsbin.com> to get property HTMLDocument.nodeType from <http://www.google.com>."

  • GWT: Type of Container

    - by moorsu
    I see that there are two ways of transferring objects from server to client:

    1. Use the same domain object (Contact.java) as used in the service layer. (I do not use Hibernate.)
    2. Use a HashMap to send the domain object's field values in the form of a Map, with the help of the BeanUtilsBean class. For multiple objects, use a List.

    Similarly, use a Map to submit form values from client to server. Is there any performance advantage for option 1 over 2? Is there a way to hide the class name/package name that is sent to the browser if we use option 1? Thanks!
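
    As a sketch of the two shapes being compared (the names are hypothetical, and this only illustrates the type-safety trade-off, not GWT's wire format): option 1 is checked at compile time, while a mistyped key in option 2 fails only at runtime.

        import java.io.Serializable;
        import java.util.HashMap;
        import java.util.Map;

        // Option 1: a typed domain object; field access is checked by the compiler.
        class Contact implements Serializable {
            private String name;
            private String email;

            public Contact() { } // GWT-RPC serializable types need a no-arg constructor

            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
            public String getEmail() { return email; }
            public void setEmail(String email) { this.email = email; }
        }

        class Demo {
            // Option 2: an untyped map; "name" vs "nmae" is invisible to the compiler.
            static Map<String, String> asMap() {
                Map<String, String> contact = new HashMap<String, String>();
                contact.put("name", "Ada");
                contact.put("email", "ada@example.com");
                return contact;
            }
        }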

  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++ to be run on Linux which can convert videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses zlib to compress each frame individually and append the compressed data onto a file. Once all frames are finished being written, a journal which includes metadata for each frame (including their file offsets and sizes) is written to the end of the file. I'm currently using ifstream and ofstream to do the file I/O, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random-access reads should be done in constant time. Any thoughts as to whether I should investigate this further, and if so, does anyone have any tips for things to look out for? Thanks!

  • Is jQuery 1.4.2 compatible with Closure Compiler?

    - by Mohammad
    According to the official release statement, version 1.4 has been rewritten to be compressed with Closure Compiler, yet when I use the online version of Closure Compiler I get 130 warnings. This is the code I use:

        // ==ClosureCompiler==
        // @compilation_level ADVANCED_OPTIMIZATIONS
        // @output_file_name default.js
        // @code_url http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.js
        // ==/ClosureCompiler==

    And as far as I know, you get the real benefit of Closure Compiler if you include the library together with your own code, so it removes the unused functions. Yet my testing shows that I can't get any further than compressing the library itself. What am I doing wrong? Any kind of insight will be much appreciated.

  • Silverlight/Web Service Serializing Interface for use Client Side

    - by Steve Brouillard
    I have a Silverlight solution that references a third-party web service. This web service generates XML, which is then processed into objects for use in Silverlight binding. At one point the processing of XML to objects was done client-side, but we ran into performance issues and decided to move this processing to the proxies in the hosting web project to improve performance (which it did). This is obviously a gross over-simplification, but it should work. My basic project structure looks like this:

    - Solution
    - Solution.Web - Holds the web page that hosts Silverlight, as well as proxies that access web services and process as required (and obviously the references to those web services).
    - Solution.Infrastructure - Holds references to the proxy web services in the .Web project, all generated code from serialized objects from those proxies, and code around those objects that needs to be client-side.
    - Solution.Book - The particular project that uses the objects in question after they are processed down into Infrastructure.

    I've defined the following interface and class in the Web project. They represent the type of objects that the XML from the original third party gets transformed into, and since Web is the only project in the Silverlight app that is actually server-side, that was the place to define and use them:

        // Doesn't get much simpler than this.
        public interface INavigable
        {
            string Description { get; set; }
        }

        // Very simple class too
        public class IndexEntry : INavigable
        {
            public List<IndexCM> CMItems { get; set; }
            public string CPTCode { get; set; }
            public string DefinitionOfAbbreviations { get; set; }
            public string Description { get; set; }
            public string EtiologyCode { get; set; }
            public bool HighScore { get; set; }
            public IndexToTabularCommandArguments IndexToTabularCommandArgument { get; set; }
            public bool IsExpanded { get; set; }
            public string ManifestationCode { get; set; }
            public string MorphologyCode { get; set; }
            public List<TextItem> NonEssentialModifiersAndQualifyingText { get; set; }
            public string OtherItalics { get; set; }
            public IndexEntry Parent { get; set; }
            public int Score { get; set; }
            public string SeeAlsoReference { get; set; }
            public string SeeReference { get; set; }
            public List<IndexEntry> SubEntries { get; set; }
            public int Words { get; set; }
        }

    Again, both of these items are defined in the Web project. Notice that IndexEntry implements INavigable. When the code for IndexEntry is auto-generated in the Infrastructure project, the definition of the class does not include the implementation of INavigable. After discovering this, I thought "no problem, I'll create another partial class file reiterating the implementation". Unfortunately (I'm guessing because it isn't being serialized), that interface isn't recognized in the Infrastructure project, so I can't simply do that. Here's where it gets really weird: the Book project CAN see the INavigable interface. In fact, I use it in Book, even though Book has no reference to the web service in the Web project where the thing is defined, while Infrastructure does. Just as a test, I linked to the INavigable source file from inside the Infrastructure project. That allowed me to reference it in that project and compile, but it caused havoc in the Book project, because now there's a conflict between the one defined in Infrastructure and the one defined in the Web project's web service. This is behavior I would expect. So, to try and sum up a bit:

    - The Web project has a web service that processes data from a third-party service and has a class and interface defined in it. The class implements the interface.
    - The Infrastructure project references the web service in the Web project, and the Book project references the Infrastructure project.
    - The implementation of the interface by the class does NOT serialize down, so the auto-generated code in Infrastructure does not show this relationship, breaking code further downstream.
    - The Book project, which is further downstream, CAN see the interface as defined in the Web project, even though its only reference is through the Infrastructure project, which CAN'T see it.

    Am I simply missing something easy here? Can I apply an attribute to either the interface definition or to its implementation in the class to ensure its visibility downstream? Anything else I can do here? I know this is a bit convoluted, and to anyone still with me here, thanks for your patience and any advice you might have. Cheers, Steve

  • Replication - synchronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert, and not long after that the data becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable, I would delete it from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

  • git: keeping 2 push/pull repos in sync (or 1 push/pull and 1 pull-only)

    - by xavjuan
    Hello, we work on multiple geographically separate sites. Today our git clones all live on one site, A, and users from site B have to ssh over to do a git clone or to push in changes. These are bare repos that are updated through pushes. Ideally, for git clone/push performance, I'd like to limit having to go over ssh. I'd like to have a copy of git repo X live on site A and site B, with some syncing mechanism between them. OR to have X live on both sites, but only allow pushing to A (and have that set up correctly at clone time on B). I'm worried about the case where someone on site A pushes changes to the repo at site A at the same time that someone on site B pushes a truly conflicting change to the repo at site B. Is there some syncing solution built into git for distributed open repos like this? Or a way to have a clone from X set the origin/parent to the X from the other site? Thanks, -John
