Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • Is it possible to serve an ASPX page without it setting a cookie on your browser?

    - by Django Reinhardt
    Hi, we're in the process of trying to speed up our website by serving static content from a cookieless domain. That seems to be going well, but I have a new question: I know it's "static content" we're talking about when serving from a cookieless domain, but we also have static content served by ASPX pages, specifically images. For example: domain.com/resizeImages.aspx?src=images/image123.jpg&width=400&height=400. Pretty standard stuff, and although it's served by managed code, it's still a static image. So my question is: is it OK to serve the resizeImages.aspx image from our cookieless/static domain? And if so, how do I stop ASP.NET from setting an .ASPXANONYMOUS cookie every time I try? Thanks for any help!
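
    One approach, sketched below under the assumption that anonymous identification (or session state) is what queues the cookie: hook PreSendRequestHeaders in Global.asax and strip any cookies from responses produced by the resizing page. The Headers collection is only writable under the IIS7 integrated pipeline, so treat this as a sketch rather than a drop-in fix; disabling <anonymousIdentification> and <sessionState> in that domain's web.config is the cleaner route if nothing there needs them.

        // Global.asax.cs - a minimal sketch: strip cookies from responses served by
        // the image-resizing page so the static domain stays cookieless.
        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
            {
                var app = (HttpApplication)sender;
                if (app.Request.Path.EndsWith("resizeImages.aspx", StringComparison.OrdinalIgnoreCase))
                {
                    app.Response.Cookies.Clear();              // drop any cookies ASP.NET queued
                    app.Response.Headers.Remove("Set-Cookie"); // requires IIS7 integrated pipeline
                }
            }
        }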

    Read the article

  • Implications of Fulltext Search over many columns

    - by Alex
    Hello, I have a really wide table that includes separate columns for billing address, shipping address, primary address, names, aliases, etc. (I can't normalize this table further, and that's not the question here anyway.) I'm implementing SQL Server full-text search, and I'm wondering whether I should limit the search to just the primary fields (primary address and names, for example), or whether I can extend it across all columns without incurring too much of a performance or memory penalty. I've done some basic testing with 10,000 sample rows and it's quite fast, but I don't have much experience with full-text indexing, especially its dictionary internals, so I don't know whether the index will grow over time or whether there is anything else to consider. Thoughts?

    Read the article

  • Database indexes - what should they be?

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure about this one. I have a table that keeps track of user rating totals for items in my database. I now want to add another table to track individual ratings, with an IP address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. (MySQL table.) This table will have the following fields: rating_id (always - unique), item_id (always - not unique), user_id (optional - not unique), ip_address (always - not unique), rating_value (always - not unique), has_review (bool). I envision 90% of the queries going something like this: when a user rates something - select where item_id = x and ip_address = y, (if rows = 0) insert rating; when in user account pages - select where ip_address = x or username = y. None of the fields searched on are unique - can I still use them as indexes (for example, item_id and ip_address), can I have two indexes, and will this still improve performance over a non-indexed table?
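
    Non-unique columns are perfectly valid index candidates, and MySQL will pick whichever index fits each query. Below is a sketch of one plausible layout, assuming the MySql.Data connector and a hypothetical ratings table: a composite index serves the item_id + ip_address lookup exactly, while the OR query is usually better served by two single-column indexes (which the optimizer can combine via an index merge) than by one composite.

        // A sketch, not a tuned schema: index names and connection details are made up.
        using MySql.Data.MySqlClient;

        class IndexSetup
        {
            static void Main()
            {
                using (var conn = new MySqlConnection("server=localhost;database=app;uid=u;pwd=p"))
                {
                    conn.Open();
                    // Composite index: matches "WHERE item_id = x AND ip_address = y" exactly.
                    Run(conn, "CREATE INDEX idx_item_ip ON ratings (item_id, ip_address)");
                    // Separate indexes for the account-page "ip_address = x OR user_id = y"
                    // lookup, so the optimizer can use an index merge.
                    Run(conn, "CREATE INDEX idx_ip ON ratings (ip_address)");
                    Run(conn, "CREATE INDEX idx_user ON ratings (user_id)");
                }
            }

            static void Run(MySqlConnection conn, string sql)
            {
                using (var cmd = new MySqlCommand(sql, conn))
                    cmd.ExecuteNonQuery();
            }
        }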

    Read the article

  • Tracing or Logging Resource Governor classification function behavior in Sql Server 2008

    - by nganju
    I'm trying to use the Resource Governor in SQL Server 2008, but I find it hard to debug the classification function and figure out what values the inputs will have, i.e. does SUSER_NAME() contain the domain name? What does the APP_NAME() string look like? It's also hard to verify that it's working correctly: what group did the function return? The only way I can see this is to fire up the performance monitor and watch unblinkingly for little blips in the right CPU counter. Is there some way I can either run it in debug mode, where I can set a breakpoint and step through and look at variable values, or at least do the old-school thing of writing trace statements to a file so I can see what's going on? Thanks...
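
    There's no breakpoint-style debugging for a classifier UDF, but you can observe its outcome directly: SQL Server 2008 exposes each session's assigned workload group through DMVs, which beats watching CPU counters. A minimal sketch (the connection string is a placeholder):

        using System;
        using System.Data.SqlClient;

        class WorkloadGroupCheck
        {
            static void Main()
            {
                // Requires VIEW SERVER STATE permission.
                const string sql = @"
                    SELECT s.session_id, s.program_name, g.name AS workload_group
                    FROM sys.dm_exec_sessions AS s
                    JOIN sys.dm_resource_governor_workload_groups AS g
                        ON s.group_id = g.group_id
                    WHERE s.is_user_process = 1;";

                using (var conn = new SqlConnection("Server=.;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0}  {1}  {2}",
                                reader["session_id"], reader["program_name"], reader["workload_group"]);
                }
            }
        }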

    Read the article

  • What are some "mental steps" a developer must take to begin moving from SQL to NO-SQL (CouchDB, Fath

    - by Byron Sommardahl
    I have my mind firmly wrapped around relational databases and how to code efficiently against them. Most of my experience is with MySQL and SQL. I like many of the things I'm hearing about document-based databases, especially when someone in a recent podcast mentioned huge performance benefits. So, if I'm going to go down that road, what are some of the mental steps I must take to shift from SQL to NoSQL? If it makes any difference to your answer, I'm a C# developer primarily (today, anyhow). I'm used to ORMs like EF and LINQ to SQL. Before ORMs, I rolled my own objects with generics and data readers. Maybe that matters, maybe it doesn't. Here are some more specific questions: How do I need to think about joins? How will I query without a SELECT statement? What happens to my existing stored objects when I add a property in my code? (Feel free to add questions of your own here.)
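
    On the joins question specifically, the usual mental shift is from normalizing into joined tables to denormalizing into self-contained documents. A tiny illustrative sketch (the classes are hypothetical, not any particular NoSQL API):

        // Relational instinct: Order JOIN OrderLine JOIN Customer.
        // Document instinct: one Order document carries everything needed to read it.
        using System.Collections.Generic;

        public class Order
        {
            public string Id { get; set; }
            public string CustomerName { get; set; }   // denormalized copy, not a foreign key
            public List<OrderLine> Lines { get; set; } // embedded, so no join at query time
        }

        public class OrderLine
        {
            public string Product { get; set; }
            public int Quantity { get; set; }
        }

    Queries then become "fetch documents shaped like this" (map/reduce views in CouchDB's case), and adding a property in code simply means older documents lack that field until they are next written, which your deserialization has to tolerate.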

    Read the article

  • Multiple Producers Single Consumer Queue

    - by Talguy
    I am new to multithreading and have designed a program that receives data from two microcontrollers measuring various temperatures (ambient and water) and draws the data to the screen. Right now the program is single-threaded and its performance sucks. I understand the basic design approaches of multithreading well enough to create a thread to do a task, but what I don't get is how to have threads perform separate tasks and place their data into a shared data pool. I figured that I need a queue with one consumer and multiple producers (I would like to use std::queue). I have seen some code in the gtkmm threading docs that shows a single consumer/producer queue: the producer locks the queue object, produces data, signals the sleeping consumer that it is finished, and then sleeps itself. For what I need, would I have to sleep a thread, would there be data conflicts if I didn't sleep any of the threads, and would sleeping a thread cause a significant data delay? (I need real-time data drawn at 30 frames a second.) How would I go about coding such a queue using the gtkmm/glibmm library?
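
    The pattern itself is language-agnostic, so here is a minimal sketch of a multiple-producer, single-consumer queue in C# (mutex plus condition variable); the same shape maps onto glibmm with Glib::Mutex/Glib::Cond, or a Glib::Dispatcher to wake the GUI thread. A blocking wait like this doesn't poll, and its wake-up latency is far below what a 30 fps redraw would notice; skipping both the wait and the lock, on the other hand, would corrupt the queue.

        using System.Collections.Generic;
        using System.Threading;

        public class SampleQueue<T>
        {
            private readonly Queue<T> _items = new Queue<T>();
            private readonly object _gate = new object();

            public void Enqueue(T item)           // called by any producer thread
            {
                lock (_gate)
                {
                    _items.Enqueue(item);
                    Monitor.Pulse(_gate);         // wake the sleeping consumer
                }
            }

            public T Dequeue()                    // called by the single consumer
            {
                lock (_gate)
                {
                    while (_items.Count == 0)
                        Monitor.Wait(_gate);      // sleeping releases the lock; no busy spin
                    return _items.Dequeue();
                }
            }
        }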

    Read the article

  • Implementing Tagging using Core Data on the iPhone

    - by Jonathan Penn
    I have an application that uses Core Data, and I'm trying to figure out the best way to implement tagging and filtering by tag. For my purposes, if I were doing this in raw SQLite I would only need three tables: tags, item_tags, and of course my items table. Filtering would then be as simple as joining the three tables so that only items related to the given tags come back. Quite straightforward. But is there a way to do this in Core Data while utilizing NSFetchedResultsController? It doesn't seem that NSPredicate gives you the ability to filter through joins, and NSPredicates aren't full SQL anyway, so I'm probably barking up the wrong tree there. I'm trying to avoid reimplementing my app in SQLite without Core Data, since I'm enjoying the performance Core Data gives me in other areas. Yes, I did consider (and built a test implementation) diving into the raw SQLite that Core Data generates, but that's not future-proof and I want to avoid it, too. Has anyone else tried to tackle tagging/filtering with Core Data in a UITableView with NSFetchedResultsController?

    Read the article

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect th

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.
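
    To make that dirty-read idea concrete in ADO.NET terms: a read under READ UNCOMMITTED takes no shared locks, so concurrent vote updates aren't blocked by the sorted select. Table and column names below are hypothetical, and Grails/GORM would express the same thing through its own transaction settings.

        using System.Data;
        using System.Data.SqlClient;

        class DirtyReadExample
        {
            static void PrintTopEntries(string connectionString)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    // READ UNCOMMITTED: the select never waits on writers' locks.
                    using (var tx = conn.BeginTransaction(IsolationLevel.ReadUncommitted))
                    using (var cmd = new SqlCommand(
                        "SELECT TOP 50 id, title, votes FROM entries ORDER BY votes DESC",
                        conn, tx))
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // render the row; slightly stale vote counts are acceptable here
                        }
                    }
                }
            }
        }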

    Read the article

  • Fastest tr:hover method

    - by Alex
    What is the single fastest method for a table-row hover CSS change? I've tried jQuery (onmouseover/out) and CSS with tr:hover, but once I make my page fullscreen (1920x1200), the performance of my grid becomes just sluggish enough to give the entire page a sub-par feel. That's on a grid with 25 rows and some spans and divs per row. I've tried IE and Google Chrome. Is there another, faster method? What is generally considered the fastest cross-browser method for hover CSS changes?

    Read the article

  • Available options for hosting an FTP server in a .NET application

    - by duane
    I need to implement an FTP service inside my .NET application (running as a Windows service) and have not had much luck finding good, current source code or vendors. Ideally it needs to respond to the basic FTP protocol and expose each upload's data as a stream, enabling me to process the data as it is being received (think on-the-fly hashing). I need to be able to integrate it into my service because it will stack on top of our current code base with an existing custom TCP/IP communication protocol. I don't want to write (and then spend time debugging and performance-testing) my own protocol or implementation. I have already found plenty of FTP client implementations; I just need an acceptable server solution.
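
    Whatever server library ends up underneath, the "process while receiving" requirement is largely independent of it: incremental hashing just needs each network chunk fed through TransformBlock as it arrives. A minimal, library-agnostic sketch:

        using System.IO;
        using System.Security.Cryptography;

        static class UploadHasher
        {
            // Hash an upload as it streams in, without buffering the whole file.
            public static byte[] HashIncoming(Stream network)
            {
                using (var sha = SHA256.Create())
                {
                    var buffer = new byte[8192];
                    int read;
                    while ((read = network.Read(buffer, 0, buffer.Length)) > 0)
                        sha.TransformBlock(buffer, 0, read, null, 0); // hash chunk by chunk
                    sha.TransformFinalBlock(buffer, 0, 0);            // finish the digest
                    return sha.Hash;
                }
            }
        }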

    Read the article

  • Why and when should one call _fpreset()?

    - by STingRaySC
    The only documentation I can find (on MSDN or otherwise) says that a call to _fpreset() "resets the floating-point package." What is the "floating-point package"? Does this also clear the FPU status word? I see documentation that says to call _fpreset() when recovering from a SIGFPE, but doesn't _clearfp() do this as well? Do I need to call both? I am working on an application that unmasks some FP exceptions (using _controlfp()). When I want to reset the FPU to the default state (say, when calling into .NET code), should I call _clearfp(), _fpreset(), or both? This is performance-critical code, so I don't want to call both if I don't have to...

    Read the article

  • A linq join combined with a regex

    - by Geert Beckx
    Is it possible to combine these two queries, or would that make my code too complex? I also think there should be a performance gain from combining them, since in the near future my source table could be over 11,000 records. This is what I came up with so far:

        Dim lit As LiteralControl

        ' check characters not in alphabet
        Dim r As New Regex("^[^a-zA-Z]+")
        Dim query = From o In source.ToTable _
                    Where r.IsMatch(o.Field(Of String)("nam"))
        lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", "0-9", query.Count))
        plhAlpabetLinks.Controls.Add(lit)

        Dim q = From l In "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToLower.ToCharArray _
                Group Join o In source.ToTable _
                On l Equals o.Field(Of String)("nam").ToLowerInvariant(0) Into g = Group _
                Select l, g.Count

        ' iterate the alphabet to generate all the links.
        For Each letter In q.AsEnumerable
            lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", letter.l, letter.Count))
            plhAlpabetLinks.Controls.Add(lit)
        Next

    Kind regards, G.
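
    They can be combined. One way, sketched in C# for brevity (the VB translation is mechanical), is a single GroupBy that folds every non-letter first character into the "0-9" bucket, so the source is walked once instead of twice; the names list stands in for the "nam" column.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class AlphabetCounts
        {
            static void Main()
            {
                var names = new List<string> { "1st Street", "Alpha", "beta", "Beck" }; // sample data

                // One pass: bucket by lower-cased first letter; non-letters go to "0-9".
                var counts = names
                    .Where(n => n.Length > 0)
                    .GroupBy(n => char.IsLetter(n[0]) ? char.ToLowerInvariant(n[0]).ToString() : "0-9")
                    .ToDictionary(g => g.Key, g => g.Count());

                // Emit all 27 links, including letters with zero matches.
                var buckets = "abcdefghijklmnopqrstuvwxyz".Select(c => c.ToString()).Concat(new[] { "0-9" });
                foreach (var bucket in buckets)
                {
                    int count;
                    counts.TryGetValue(bucket, out count); // leaves 0 when the bucket is empty
                    Console.WriteLine("letter: {0}, count: {1}", bucket, count);
                }
            }
        }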

    Read the article

  • How do I view how many concurrent long polling requests there are on my server?

    - by Pascal
    My host is Joyent. My host says I have a 15-process limit, and prstat -J shows those processes, but that doesn't tell me how many long-polling requests are currently being served. I could record it myself, but that would add a lot of performance overhead. I need to know when the server is at its long-polling limits. I know this limit is hit far before the memory or CPU is used up. From experimentation, I've already verified that the number of long polls open is NOT equivalent to the number of processes running, probably because each process has multiple threads, each serving a request. Thanks.

    Read the article

  • Data Warehouse: One Database or many?

    - by drrollins
    At my new company, they keep all data associated with the data warehouse, including import, staging, audit, dimension, and fact tables, together in the same physical database. I've been a database developer for a number of years now, and this consolidation of function and form seems counter to everything I know. It seems to make security, backup/restore, and performance management more manually intensive. Is this something that is done in the industry? Are there substantial reasons for doing or not doing it? The platform is Netezza. The size is in terabytes, hundreds of millions of rows. What I'm looking to get from answers to this question is a solid understanding of how right or wrong this path is. From your experience, what issues should I focus my arguments on if this is a path that will cause trouble for us down the road? If it is no big deal, then I'd like to know that as well.

    Read the article

  • Rails: getting logic to run at end of request, regardless of filter chain aborts?

    - by JSW
    Is there a reliable mechanism discussed in the Rails documentation for calling a function at the end of the request, regardless of filter-chain aborts? It's not after filters, because after filters don't get called if any prior filter redirected or rendered. For context, I'm trying to put some structured profiling/reporting information into the app log at the end of every request. This information is collected throughout the request lifetime via instance variables wrapped in custom controller accessors, and dumped at the end in a JSON blob for use by a post-processing script. My end goal is to generate reports about my application's logical query distribution (things that depend on controller logic, not just request URIs and parameters), performance profile (time spent in specific DB queries or blocked on web services), failure rates (including invalid incoming requests that get rejected by before_filter validation rules), and a slew of other things that cannot really be parsed from the basic information in the application and Apache logs. At a higher level, is there a different "Rails way" that solves my app-profiling goal?

    Read the article

  • How to implement Voting for Grails Domain Classes?

    - by userWebMobile
    I have a Book class and need to implement yes/no voting functionality. My domain classes look like this:

        class Book {
            String title
            static hasMany = [votes: Vote]
        }

        class User {
            String name
            static hasMany = [votes: Vote]
        }

        class Vote {
            boolean yesVote
            static belongsTo = [user: User, book: Book]
        }

    What is the best way to implement voting for the Book class? I need the following information: What is the average yesVote for a book over all votes (either yes or no)? How do I check whether a specific user has already voted? And what is the best way to compute the average yesVote so that performance does not drop? A sketch of one option follows.
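
    One common answer to the performance question is a counter cache: keep running totals alongside the book so the average is O(1) to read instead of an aggregate over all Vote rows. The sketch below shows the idea in C# for consistency with the other examples here; in GORM these would simply be two extra columns on Book, updated whenever a Vote is saved.

        public class BookStats
        {
            public int YesVotes;
            public int TotalVotes;

            // Call this in the same transaction that persists the Vote.
            public void Record(bool yesVote)
            {
                TotalVotes++;
                if (yesVote) YesVotes++;
            }

            // Fraction of yes votes over all votes cast (0 when no votes yet).
            public double AverageYes
            {
                get { return TotalVotes == 0 ? 0.0 : (double)YesVotes / TotalVotes; }
            }
        }

    Checking whether a user already voted stays a single indexed lookup on (user, book) in the Vote table.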

    Read the article

  • NSMutableSet addObject question

    - by Jacob Relkin
    I've got a class that wraps an NSMutableSet object, and I have an instance method that adds objects (using the addObject: method) to the NSMutableSet. This works well, but I suspect a performance hitch, because inside the method I'm explicitly calling containsObject: before adding the object to the set. Three-part question: Do I need to call containsObject: before I add an object to the set? If so, which method should I actually be using, containsObject: or containsObjectIdenticalTo:? If not, what "contains" check gets invoked under the hood of addObject:? This is important to me because if I pass an object to containsObject: it would return true, but if I pass it to containsObjectIdenticalTo: it would return false.

    Read the article

  • How to implement square root and exponentiation on arbitrary length numbers?

    - by tomp
    I'm working on a new data type for arbitrary-length numbers (only non-negative integers), and I'm stuck implementing the square root and exponentiation functions (only for natural exponents). Please help. I store the arbitrary-length number as a string, so all operations are done char by char. Please don't include advice to use a different (existing) library or another way to store the number than a string. It's meant to be a programming exercise, not a real-world application, so optimization and performance are not so necessary. If you include code in your answer, I would prefer it to be either pseudo-code or C++. The important thing is the algorithm, not the implementation itself. Thanks for the help.
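
    For the exponentiation half, the standard algorithm is exponentiation by squaring, which needs only O(log n) multiplications. The sketch is in C# for consistency with the other examples here, though it reads close to pseudo-code; IBigNat is a hypothetical stand-in for the string-backed type, assuming only a Multiply operation and a value representing 1. It is shown with a machine-width exponent; for a string-backed exponent, test the low bit and halve with the same char-by-char routines. Square root can be handled in the same spirit: binary-search the answer, squaring each midpoint with the same Multiply and comparing.

        // A sketch against a hypothetical interface; only Multiply is assumed.
        public interface IBigNat
        {
            IBigNat Multiply(IBigNat other);
        }

        public static class BigMath
        {
            // Exponentiation by squaring: walk the exponent's bits from least
            // significant to most, squaring the running factor at each step.
            public static IBigNat Pow(IBigNat one, IBigNat baseValue, uint exponent)
            {
                IBigNat result = one;      // caller supplies the number 1
                IBigNat factor = baseValue;
                while (exponent > 0)
                {
                    if ((exponent & 1) == 1)
                        result = result.Multiply(factor); // odd bit: fold the factor in
                    factor = factor.Multiply(factor);     // square for the next bit
                    exponent >>= 1;
                }
                return result;
            }
        }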

    Read the article

  • Determine if a decimal can be stored as int32

    - by anchandra
    I am doing some custom serializing, and in order to save some space, I want to serialize decimals as ints when the value allows it. Performance is a concern, since I am dealing with a high volume of data. The current method I use is:

        if ((value > Int32.MinValue) && (value < Int32.MaxValue) &&
            ((valueAsInt = Decimal.ToInt32(value)) == value))
        {
            return true;
        }

    Can this be improved?
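
    One possible refinement worth benchmarking, rather than a definitive answer: use inclusive bounds (the strict < and > above reject exactly Int32.MinValue and Int32.MaxValue, which do fit) and Decimal.Truncate to test for a fractional part, keeping exceptions off the hot path.

        static bool FitsInInt32(decimal value, out int result)
        {
            // Inclusive bounds, then reject values with a fractional part.
            if (value >= int.MinValue && value <= int.MaxValue &&
                decimal.Truncate(value) == value)
            {
                result = (int)value; // safe: in range and integral
                return true;
            }
            result = 0;
            return false;
        }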

    Read the article

  • Is it possible to find out what Flash Builder is doing during compilation?

    - by justkevin
    I've found that Flash Builder 4 (formerly Flex Builder) has trouble with large projects. After a certain point, builds seem to take longer and longer. I've tried many different ways of improving build time, including:

    - Moving embedded resources into externally linked projects.
    - Using -incremental.
    - Tweaking the .ini JVM settings, including memory and -server.
    - Turning off automatic build (I'd prefer not to do this, because one of the main reasons for using an IDE is to be told about errors as you make them).
    - Deleting the project and re-checking it out from the repository.

    While some of these may help a bit, performance is still annoyingly slow. I feel that if I knew what was taking so long, I could refactor my projects to build faster. Is there some setting that tells Flash Builder to show me which parts of the build process take so much time?

    Read the article

  • Running multiple JVMs for different applications on the same machine

    - by Rajesh
    We are getting frequent out-of-memory errors on our dev machines, where we run WebSphere, Eclipse, SoapUI, and Maven. Our server goes down with out-of-memory errors when we restart our applications in WebSphere two or three times. We have already increased the virtual memory setting in WebSphere to 1 GB. So what I did was copy the JRE we use into the Eclipse and Maven folders, so that each of them uses an individual JVM. But WebSphere behaves the same: two or three restarts and out-of-memory errors. Is there any way of making Eclipse and Maven use JVMs other than WebSphere's?

    Read the article

  • Multi-value MySQL inserts using HibernateTemplate

    - by Langali
    I am using Spring's HibernateTemplate and need to insert hundreds of records into a MySQL database every second. I'm not sure what the most performant way of doing this is, but I am trying to see how multi-value MySQL inserts do under Hibernate:

        final String query = "insert into user(age, name, birth_date) values" +
            "(24, 'Joe', '2010-05-19 14:33:14'), (25, 'Joe1', '2010-05-19 14:33:14')";

        getHibernateTemplate().execute(new HibernateCallback() {
            public Object doInHibernate(Session session) throws HibernateException, SQLException {
                return session.createSQLQuery(query).executeUpdate();
            }
        });

    But I get this error: "could not execute native bulk manipulation query. Please check your query..." Any idea whether I can use a multi-value MySQL insert via Hibernate? Or is my query incorrect? Are there any other ways I can improve the performance? I did try the saveOrUpdateAll() method, and that wasn't good enough!

    Read the article

  • Retrieving data from a database: retrieve only when needed, or get everything?

    - by RHaguiuda
    I have a simple application to store contacts. It uses a simple relational database to store contact information, like name, address, and other data fields. While designing it, a question came to my mind: when designing programs that use databases, should I retrieve all database records and store them in objects in my program, so that I get very fast performance, or should I gather data only when required? Of course, retrieving all the data is only feasible if there isn't too much of it, but do you use this approach when you are sure the database will stay small (< 300 records, for example)? I once designed a similar application that fetched data only when needed, but it was slow (using an Access database). Thanks for all the help.

    Read the article

  • Using Rails and RSpec, how do you test that the database is not touched by a method?

    - by Will Tomlins
    So I'm writing a test for a method that, for performance reasons, should achieve what it needs to achieve without using SQL queries. I'm thinking all I need to know is what to stub:

        describe SomeModel do
          describe 'a_getter_method' do
            it 'should not touch the database' do
              thing = SomeModel.create
              something_inside_rails.should_not_receive(:a_method_querying_the_database)
              thing.a_getter_method
            end
          end
        end

    EDIT: to provide a more specific example:

        class Publication < ActiveRecord::Base
        end

        class Book < Publication
        end

        class Magazine < Publication
        end

        class Student < ActiveRecord::Base
          has_many :publications

          def publications_of_type(type)
            # this is the method I am trying to test.
            # The test should show that when I do the following, the database is queried.
            self.publications.find_all_by_type(type)
          end
        end

        describe Student do
          describe "publications_of_type" do
            it 'should not touch the database' do
              Student.create
              student = Student.first(:include => :publications)
              # the publications relationship is already loaded, so no need to touch the DB
              lambda { student.publications_of_type(:magazine) }.should_not touch_the_database
            end
          end
        end

    So the test should fail in this example, because the Rails find_all_by method relies on SQL.

    Read the article

  • ContextSwitchDeadlock Was Detected error in C#

    - by assassin
    Hi, I am running a C# application, and during run-time I get the following error:

        The CLR has been unable to transition from COM context 0x20e480 to COM
        context 0x20e5f0 for 60 seconds. The thread that owns the destination
        context/apartment is most likely either doing a non pumping wait or
        processing a very long running operation without pumping Windows messages.
        This situation generally has a negative performance impact and may even
        lead to the application becoming non responsive or memory usage
        accumulating continually over time. To avoid this problem, all single
        threaded apartment (STA) threads should use pumping wait primitives (such
        as CoWaitForMultipleHandles) and routinely pump messages during long
        running operations.

    Can anyone please help me out with the problem here? Thanks a lot.
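
    The usual cure, sketched below, is structural rather than a setting: keep work that runs for a minute off the STA thread, so COM calls into that apartment can still be serviced. (The warning itself comes from the ContextSwitchDeadlock managed debugging assistant; it can be silenced under Debug > Exceptions > Managed Debugging Assistants in Visual Studio, but that only hides the symptom.) DoLongRunningOperation is a hypothetical stand-in for whatever is blocking.

        using System.Threading;

        static class LongWorkOffloader
        {
            public static void Start()
            {
                // Move the long-running work to an MTA background thread; the STA
                // (UI) thread keeps pumping messages, so the CLR can complete COM
                // context transitions instead of timing out after 60 seconds.
                var worker = new Thread(DoLongRunningOperation);
                worker.SetApartmentState(ApartmentState.MTA);
                worker.IsBackground = true;
                worker.Start();
            }

            static void DoLongRunningOperation()
            {
                // ... the minute-plus computation that used to block the STA thread ...
            }
        }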

    Read the article
