Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • C++ Urban Myths

    - by Neil Butterworth
    I'm starting to write an article on what I'm calling "C++ Urban Myths" - that is, ideas and conceptions about C++ that are common but have no actual roots in reality. Some that I've come up with so far:

    - "TR1 is part of standard C++." TR1 (Technical Report 1) proposed a whole bunch of changes to C++. Unfortunately, it was never adopted into the Standard itself.
    - "It is faster to use iterators to access a vector than operator[]" - or vice versa. All tests I've carried out indicate the two are nearly identical in performance.
    - "The C++ Standard contains something called the STL." It doesn't - neither "STL" nor "Standard Template Library" appears in the Standard.

    I'm wondering if the SO C++ community can come up with any better ones? Ideally, they should be expressible in a single sentence, and not involve any code.

    Edit: I guess I didn't make it clear enough that I was interested in myths believed by C++ developers, not misconceptions held by non-C++ users. Oh well...

    Read the article

  • Should we use a CSS framework? Are they worth it?

    - by Gaurav M
    CSS frameworks have nice built-in styles and let you focus on the grid, but they also introduce a dependency and take away some freedom. If I need to build a web page from a PSD-based mockup, I can use the classes the framework provides, but if the exact measurements don't exist in the framework I have to write my own rules anyway, which adds to my CSS file size. And if performance is a constraint, as it always is, you don't want a big file; it may only be kilobytes, but every drop counts. Any comments or suggestions on using a framework in the best possible way?

    Read the article

  • Why don't scripting languages output Unicode to the Windows console?

    - by hippietrail
    The Windows console has been Unicode aware for at least a decade and perhaps as far back as Windows NT. However, for some reason, the major cross-platform scripting languages including Perl and Python only ever output various 8-bit encodings, requiring much trouble to work around. Perl gives a "wide character in print" warning, Python gives a charmap error and quits. Why on earth after all these years do they not just simply call the Win32 -W APIs that output UTF-16 Unicode instead of forcing everything through the ANSI/codepage bottleneck? Is it just that cross-platform performance is low priority? Is it that the languages use UTF-8 internally and find it too much bother to output UTF-16? Or are the -W APIs inherently broken to such a degree that they can't be used as-is? (A sketch of calling the wide API directly from script code follows this entry.)

    Read the article
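    For what it's worth, a minimal sketch of the workaround from Python via ctypes. GetStdHandle and WriteConsoleW are real Win32 functions; the wrapper around them is illustrative only and is not what Perl or Python actually do internally.

        import ctypes
        import sys

        def write_console_unicode(text):
            # GetStdHandle / WriteConsoleW are Win32 calls; -11 is STD_OUTPUT_HANDLE
            kernel32 = ctypes.windll.kernel32
            handle = kernel32.GetStdHandle(-11)
            written = ctypes.c_ulong(0)
            # The -W entry point takes UTF-16 text, so no codepage conversion occurs.
            kernel32.WriteConsoleW(handle, text, len(text),
                                   ctypes.byref(written), None)

        if sys.platform == "win32":
            write_console_unicode(u"Unicode test: \u0119 \u4e2d\u6587\n")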

  • What are block expressions actually good for?

    - by Helper Method
    I just solved the first problem from Project Euler in JavaFX for the fun of it and wondered what block expressions are actually good for. Why are they superior to functions? Is it because of the narrowed scope? Less to write? Performance? Here's the Euler example. I used a block here but I don't know if it actually makes sense:

        // sums up all numbers from low to high exclusive which are divisible by a or b
        function sumDivisibleBy(a: Integer, b: Integer, high: Integer) {
            def low = if (a <= b) a else b;
            def sum = {
                var result = 0;
                for (i in [low ..< high] where i mod a == 0 or i mod b == 0) {
                    result += i
                }
                result
            }
        }

    Does a block make sense here?

    Read the article

  • Page control in Flex (like PHP)

    - by Mahedi
    Hi, I'm new to Flex. I have a design where one page shows two options, Hard and Soft. When I click Hard, three further options appear (in PHP this would go to another page): Standard, Square and Pocket, with a BACK option below that returns to the previous state (page). When the mouse is over any option, its properties are shown at the side of the page, and selecting one moves on to the next step (page). The Soft option should work the same way as Hard. Please help me with code examples or tutorials. Best regards, Mahedi

    Read the article

  • A LINQ join combined with a regex

    - by Geert Beckx
    Is it possible to combine these two queries, or would that make my code too complex? I also think there should be a performance gain from combining them, since in the near future my source table could be over 11000 records. This is what I came up with so far (a single-pass alternative is sketched after this entry):

        Dim lit As LiteralControl

        ' check for characters not in the alphabet
        Dim r As New Regex("^[^a-zA-Z]+")
        Dim query = From o In source.ToTable _
                    Where r.IsMatch(o.Field(Of String)("nam"))
        lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", "0-9", query.Count))
        plhAlpabetLinks.Controls.Add(lit)

        Dim q = From l In "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToLower.ToCharArray _
                Group Join o In source.ToTable _
                On l Equals o.Field(Of String)("nam").ToLowerInvariant()(0) _
                Into g = Group _
                Select l, g.Count

        ' iterate the alphabet to generate all the links
        For Each letter In q.AsEnumerable
            lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", letter.l, letter.Count))
            plhAlpabetLinks.Controls.Add(lit)
        Next

    Kind regards, G.

    Read the article
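    One way to fold the two queries into a single pass, sketched in Python because the bucketing idea is language-agnostic (the function name and sample data are invented): walk the source once, keying each name by its first letter and sending anything that does not start with a letter to a shared "0-9" bucket.

        import re
        from collections import Counter

        non_alpha = re.compile(r"^[^a-zA-Z]")

        def letter_counts(names):
            # one pass over the source, one counter per bucket
            counts = Counter()
            for name in names:
                key = "0-9" if non_alpha.match(name) else name[0].lower()
                counts[key] += 1
            return counts

        print(letter_counts(["Alpha", "beta", "42nd Street", "Bravo"]))
        # Counter({'b': 2, 'a': 1, '0-9': 1})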

  • Multimedia content in REST response (XML/JSON)

    - by Koushik
    In my thesis I need to test different architectures. A request goes to a REST web service developed with Apache CXF and Spring MVC, with MySQL as the back end serving references (a field in the database) to images, audio and video files stored in the file system. URI: http://www.filmservices.com/film/{id}. In the response message, what is the best method to send the content to the client (another application using the service I developed; the client here is not the end user)?

    - Send encoded hyperlinks (to where the content is stored in the file system) to the client, so that the client renders the response and displays it in the browser.
    - Use Base64 to encode the message (image, audio, video) and send it to the client.

    The main concern is performance. (A sketch comparing the two payload shapes follows this entry.)

    Read the article
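    A minimal sketch of the two response shapes under comparison, with invented field names, URL and stand-in bytes: by reference, the body carries only a URL; inline, it carries the Base64-encoded bytes, which inflates the payload by roughly a third on top of the media size itself.

        import base64
        import json

        media_bytes = b"\x89PNG..." * 1000  # stand-in for real image bytes

        by_reference = {"id": 42,
                        "poster": "http://www.filmservices.com/media/42/poster.png"}
        inline = {"id": 42,
                  "poster": base64.b64encode(media_bytes).decode("ascii")}

        print(len(json.dumps(by_reference)))  # small and constant-sized
        print(len(json.dumps(inline)))        # roughly 4/3 of the raw media size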

  • Is the web hosting location important these days?

    - by kristof
    I was recently looking at some web hosting solutions, and some of the providers offered various hosting locations, e.g. US or UK based servers. My question is: does it really make a difference from the performance point of view? Let's say that I am expecting most of the traffic to come from continental Europe. Would servers based in the UK make a bigger difference if the traffic were coming from the UK? What are the pros and cons of having a website hosted in the same country as most of the expected traffic?

    Read the article

  • Filtering with joined tables

    - by viraptor
    I'm trying to get some query performance improved, but the generated query does not look the way I expect it to. The results are retrieved using:

        query = session.query(SomeModel) \
            .options(joinedload_all('foo.bar')) \
            .options(joinedload_all('foo.baz')) \
            .options(joinedload('quux.other'))

    What I want to do is filter on the table joined via 'foo', but this way doesn't work:

        query = query.filter(FooModel.address == '1.2.3.4')

    It results in a clause like this attached to the query:

        WHERE foos.address = '1.2.3.4'

    That doesn't do the filtering in a proper way, since the generated joins attach the tables as aliases foos_1 and foos_2. If I try the query manually but change the filtering clause to:

        WHERE foos_1.address = '1.2.3.4' AND foos_2.address = '1.2.3.4'

    it works fine. The question is of course - how can I achieve this with SQLAlchemy itself? (A sketch of one approach follows this entry.)

    Read the article
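    A self-contained sketch of the usual approach: make the join explicit so the filter hits the real table, then reuse that same join for eager loading with contains_eager() instead of joinedload(), which joins against an anonymous alias. Model names follow the question; the columns and mapping are invented assumptions.

        from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
        from sqlalchemy.orm import (declarative_base, relationship,
                                    sessionmaker, contains_eager)

        Base = declarative_base()

        class FooModel(Base):
            __tablename__ = "foos"
            id = Column(Integer, primary_key=True)
            address = Column(String)

        class SomeModel(Base):
            __tablename__ = "somemodels"
            id = Column(Integer, primary_key=True)
            foo_id = Column(Integer, ForeignKey("foos.id"))
            foo = relationship(FooModel)

        engine = create_engine("sqlite://")
        Base.metadata.create_all(engine)
        session = sessionmaker(engine)()

        query = (
            session.query(SomeModel)
            .join(SomeModel.foo)                     # explicit, filterable join
            .filter(FooModel.address == "1.2.3.4")   # targets the joined table itself
            .options(contains_eager(SomeModel.foo))  # populate .foo from that join
        )
        print(query)  # the WHERE now hits foos.address, not an alias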

  • mod_perl memory leak

    - by marghi
    Hello, I recently discovered that one of our sites has a memory leak in it; it's very strange because it happened all of a sudden. I've used GTop to measure the memory size per process and it tells me that the real value is somewhere around 65 MB per request (on the server) plus an additional 5 MB shared. I tried preloading the modules in the startup.pl file as indicated in the performance tuning article for mod_perl. Nothing happened; in fact the shared memory decreased to 3.7 MB. In this situation I suspected that my application was leaking memory, so before any line of code got executed I measured the memory, just to find out that the total value is in fact 64 MB. My questions are:

    - Is there a default preallocation of memory for each process?
    - Is there a configuration issue?
    - Is mod_perl leaking memory?

    Thank you very much.

    Read the article

  • ContextSwitchDeadlock Was Detected error in C#

    - by assassin
    Hi, I am running a C# application, and during run-time I get the following error: The CLR has been unable to transition from COM context 0x20e480 to COM context 0x20e5f0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. Can anyone please help me out with the problem here? Thanks a lot.

    Read the article

  • Fastest tr:hover method

    - by Alex
    What is the single fastest method for a table row hover CSS change? I've tried jQuery (onmouseover/out) and CSS with tr:hover, but once I make my page fullscreen (1920x1200) the grid becomes just sluggish enough to give the entire page a sub-par feel. That's on a grid with 25 rows, and some spans and divs per row. I've tried IE and Google Chrome. Is there another, faster method? What is generally considered the fastest cross-browser method for hover CSS changes?

    Read the article

  • entity framework and dirty reads

    - by bryanjonker
    I have Entity Framework (.NET 4.0) going against SQL Server 2008. The database is (theoretically) getting updated during business hours -- delete, then insert, all through a transaction. Practically, it's not going to happen that often. But I need to make sure I can always read data from the database. The application I'm writing will never do any kind of write to the data -- it's read-only. If I do a dirty read, I can always access the data; the worst that happens is I get old data (which is acceptable). However, can I tell Entity Framework to always use dirty reads? Are there performance or data integrity issues I need to worry about if I set up EF this way? Or should I take a step back and see about rewriting the process that does the delete/insert?

    Read the article

  • Are you using a virtual machine as your primary development environment?

    - by Click Ok
    Recently I purchased a notebook that came with Windows Home Basic, which doesn't ship with ASP.NET/IIS. I considered upgrading the Windows version to one with ASP.NET/IIS, but then I thought of another possibility: I have a hard disk case with a 360 GB HD. I could create a virtual machine with Windows Ultimate (also installing ASP.NET, IIS and Visual Studio 2008) on this HD case; then I could access my "development environment" on any computer I work on (my desktop machine and my notebook). But I was worried about the performance. I don't have experience working in virtual machines (I use them just for quick compatibility tests). Are you using a virtual machine as your primary development environment? What are your findings?

    Edit: Thanks for your answers! They really did help me! I would also like to know about portability, i.e. will the virtual machine that I created on my laptop work on the desktop? Will I need to re-activate Windows?

    Read the article

  • Is it possible to serve an ASPX page without it setting a cookie on your browser?

    - by Django Reinhardt
    Hi, we're in the process of trying to speed up the performance of our website by serving static content from a cookieless domain. That seems to be going well, but I have a new question: I know that it's "static content" that we're talking about when serving it from a cookieless domain, but we also have static content being served by ASPX pages, specifically images. For example: domain.com/resizeImages.aspx?src=images/image123.jpg&width=400&height=400 Pretty standard stuff, and although it's being served by managed code, it's still a static image. So my question is: Is it ok to serve the resizeImages.aspx image from our cookieless/static domain? And if so, how do I go about stopping ASP.NET from setting a ANONYMOUSASPX cookie every time I try? Thanks for any help!

    Read the article

  • How do I view how many concurrent long polling requests there are on my server?

    - by Pascal
    My host is Joyent. My host says I have a 15-process limit, and prstat -J shows those processes, but that doesn't tell me how many long polling requests are currently being served. I could record it myself, but that would add a lot of performance overhead. I need to know when the server is at its long polling limits. I know this limit is reached far before the memory or CPU is used up. From experimentation, I've already verified that the number of long polls open is NOT equivalent to the number of processes running, probably because each process has multiple threads, each serving a request. Thanks.

    Read the article

  • Lots of dropped packets when tcpdumping on a busy interface

    - by Frands Hansen
    My challenge: I need to tcpdump a lot of data - actually from two interfaces left in promiscuous mode that are able to see a lot of traffic. To sum it up:

    - Log all traffic in promiscuous mode from two interfaces
    - Those interfaces are not assigned an IP address
    - pcap files must be rotated per ~1G
    - When 10 TB of files are stored, start truncating the oldest

    What I currently do: right now I use tcpdump like this:

        tcpdump -n -C 1000 -z /data/compress.sh -i any -w /data/livedump/capture.pcap $FILTER

    The $FILTER contains src/dst filters so that I can use -i any. The reason for this is that I have two interfaces and I would like to run the dump in a single thread rather than two. compress.sh takes care of assigning tar to another CPU core, compressing the data, giving it a reasonable filename and moving it to an archive location. I cannot specify two interfaces, thus I have chosen to use filters and dump from any interface. Right now, I do not do any housekeeping, but I plan on monitoring disk and when I have 100G left I will start wiping the oldest files - this should be fine (a sketch of that step follows this entry).

    And now, my problem: I see dropped packets. This is from a dump that has been running for a few hours and collected roughly 250 gigs of pcap files:

        430083369 packets captured
        430115470 packets received by filter
        32057 packets dropped by kernel <-- This is my concern

    How can I avoid so many packets being dropped? Things I have already tried or looked at: I changed the value of /proc/sys/net/core/rmem_max and /proc/sys/net/core/rmem_default, which did indeed help - actually it took care of around half of the dropped packets. I have also looked at gulp - the problem with gulp is that it does not support multiple interfaces in one process, and it gets angry if the interface does not have an IP address. Unfortunately, that is a deal breaker in my case. The next problem is that when the traffic flows through a pipe, I cannot get the automatic rotation going. Getting one huge 10 TB file is not very efficient, and I don't have a machine with 10 TB+ RAM that I can run Wireshark on, so that's out. Do you have any suggestions? Maybe even a better way of doing my traffic dump altogether.

    Read the article
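    A minimal sketch of the housekeeping step described above: delete the oldest rotated capture files until free space rises back over a floor. The archive path and the 100G floor come from the question; the file extension and everything else are assumptions (compress.sh may produce a different suffix).

        import os
        import shutil

        ARCHIVE = "/data/livedump"
        FLOOR = 100 * 1024**3  # start wiping when less than ~100G is left

        def wipe_oldest_until_free(path=ARCHIVE, floor=FLOOR):
            # repeat until enough space is free or nothing is left to delete
            while shutil.disk_usage(path).free < floor:
                pcaps = sorted(
                    (os.path.join(path, f) for f in os.listdir(path)
                     if f.endswith(".pcap")),
                    key=os.path.getmtime,
                )
                if not pcaps:
                    break
                os.remove(pcaps[0])  # oldest capture goes first

        wipe_oldest_until_free()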

  • What are some "mental steps" a developer must take to begin moving from SQL to NO-SQL (CouchDB, Fath

    - by Byron Sommardahl
    I have my mind firmly wrapped around relational databases and how to code efficiently against them. Most of my experience is with MySQL and SQL. I like many of the things I'm hearing about document-based databases, especially when someone in a recent podcast mentioned huge performance benefits. So, if I'm going to go down that road, what are some of the mental steps I must take to shift from SQL to NO-SQL? If it makes any difference in your answer, I'm a C# developer primarily (today, anyhow). I'm used to ORMs like EF and LINQ to SQL. Before ORMs, I rolled my own objects with generics and datareaders. Maybe that matters, maybe it doesn't. Here are some more specific questions (an illustration of the first follows this entry):

    - How do I need to think about joins?
    - How will I query without a SELECT statement?
    - What happens to my existing stored objects when I add a property in my code?

    (Feel free to add questions of your own here.)

    Read the article
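    A small illustration of the main mental shift, with an invented schema: instead of normalizing a post and its comments into joined tables, a document store keeps the whole aggregate together, so the join disappears into the document itself.

        # relational shape: two tables, rows tied together by a foreign key
        relational = {
            "posts":    [{"id": 1, "title": "Hello"}],
            "comments": [{"id": 7, "post_id": 1, "text": "First!"},
                         {"id": 8, "post_id": 1, "text": "Nice."}],
        }

        # document shape: one self-contained document, roughly as CouchDB stores it
        document = {
            "_id": "post-1",
            "title": "Hello",
            "comments": [{"text": "First!"}, {"text": "Nice."}],
        }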

  • Why should I use an N-Tier approach when using a SqlDataSource is A LOT EASIER?

    - by The_AlienCoder
    When it comes to web development I have always tried to work SMART, not HARD. So for a long time my approach to interacting with databases in my ASP.NET projects has been this:

    1) Create my stored procedures
    2) Drag a SqlDataSource control onto my aspx page
    3) Bind a DataList control to my SqlDataSource
    4) Insert, update & delete by using my DataList or programmatically using built-in SqlDataSource methods, e.g.

        MySqlDataSource.InsertParameters["author"].DefaultValue = TextBox1.Text;
        MySqlDataSource.Insert();

    Recently, however, I got a relatively easy web project. So I decided to employ a 3-tier model... but I got exhausted halfway through and it just didn't seem worth it! It seemed like I was working too HARD for a project that could have been easily accomplished by a couple of SqlDataSource controls. So why is the N-Tier model better than my approach? Does it have anything to do with performance? What are the advantages of the ObjectDataSource control over the SqlDataSource control?

    Read the article

  • Why and when should one call _fpreset( )?

    - by STingRaySC
    The only documentation I can find (on MSDN or otherwise) is that a call to _fpreset() "resets the floating-point package." What is the "floating-point package"? Does this also clear the FPU status word? I see documentation that says to call _fpreset() when recovering from a SIGFPE, but doesn't _clearfp() do this as well? Do I need to call both? I am working on an application that unmasks some FP exceptions (using _controlfp()). When I want to reset the FPU to the default state (say, when calling into .NET code), should I just call _clearfp(), _fpreset(), or both? This is performance-critical code, so I don't want to call both if I don't have to...

    Read the article

  • Has Object in VB 2010 received the same optimization as dynamic in C# 4.0?

    - by Abel
    Some people have argued that the C# 4.0 feature introduced with the dynamic keyword is the same as the "everything is an Object" feature of VB. However, any call on a dynamic variable will be translated into a delegate once, and from then on the delegate will be called. In VB, when using Object, no caching is applied, and each call on an untyped method involves a whole lot of under-the-hood reflection, sometimes totaling a whopping 400-fold performance penalty. Have the dynamic type's delegate optimization and caching also been added to VB's untyped method calls, or is VB's untyped Object still so slow?

    Read the article

  • How to update only certain items in a list when using MVC?

    - by Eugen
    I'm building a GUI that includes a list with quite a lot of items. I allow the user to add/delete/edit those items. Up until now my update method called in the controller implied an entire JList reset (with its obvious performance issues). Now that there are hundreds of items available, updating the entire list is not feasible any longer. Does anyone know of a tutorial, or can anyone share an example (I haven't found any to suit my needs so far), in which the JList is updated with something like JList.update(startIndex, endIndex)? Thanks for taking the time to answer.

    Read the article

  • Implementing Tagging using Core Data on the iPhone

    - by Jonathan Penn
    I have an application that uses Core Data and I'm trying to figure out the best way to implement tagging and filtering by tag. For my purposes, if I were doing this in raw SQLite I would only need three tables: tags, item_tags and of course my items table. Then filtering would be as simple as joining between the three tables so that only items related to the given tags are returned (the equivalent SQL is sketched after this entry). Quite straightforward. But is there a way to do this in Core Data while utilizing NSFetchedResultsController? It doesn't seem that NSPredicate gives you the ability to filter through joins. NSPredicates aren't full SQL anyway, so I'm probably barking up the wrong tree there. I'm trying to avoid reimplementing my app using SQLite without Core Data, since I'm enjoying the performance Core Data gives me in other areas. Yes, I did consider (and built a test implementation) diving into the raw SQLite that Core Data generates, but that's not future-proof and I want to avoid that too. Has anyone else tried to tackle tagging/filtering with Core Data in a UITableView with NSFetchedResultsController?

    Read the article
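    The three-table scheme the question describes, sketched with Python's built-in sqlite3 so the join is concrete; the table layout follows the question, the sample rows are invented. (Whether NSPredicate can express this is the question itself; the SQL side is the straightforward part.)

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE items     (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE tags      (id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE item_tags (item_id INTEGER, tag_id INTEGER);
        """)
        db.executemany("INSERT INTO items VALUES (?, ?)", [(1, "photo"), (2, "note")])
        db.executemany("INSERT INTO tags VALUES (?, ?)", [(1, "travel")])
        db.executemany("INSERT INTO item_tags VALUES (?, ?)", [(1, 1)])

        # all items carrying a given tag
        rows = db.execute("""
            SELECT items.name FROM items
            JOIN item_tags ON item_tags.item_id = items.id
            JOIN tags      ON tags.id = item_tags.tag_id
            WHERE tags.name = ?
        """, ("travel",)).fetchall()
        print(rows)  # [('photo',)]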

  • Multiple Producers Single Consumer Queue

    - by Talguy
    I am new to multithreading and have designed a program that receives data from two microcontrollers measuring various temperatures (ambient and water) and draws the data to the screen. Right now the program is singly threaded and its performance SUCKS A BIG ONE. I get the basic multithreading design approaches well enough to create a thread to do a task, but what I don't get is how to have threads perform separate tasks and place their data into a shared data pool. I figured that I need to make a queue with one consumer and multiple producers (I would like to use std::queue). I have seen some code in the gtkmm threading docs that shows a single consumer/producer queue: the producer would lock the queue object, produce data, signal the sleeping consumer when finished, and then sleep itself. For what I need: would I have to sleep a thread? Would there be data conflicts if I didn't sleep any of the threads? And would sleeping a thread cause a significant data delay (I need real-time data drawn at 30 frames a second)? How would I go about coding such a queue using the gtkmm/glibmm library? (A sketch of the pattern follows this entry.)

    Read the article
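    Since the pattern itself is the sticking point, here is a minimal sketch of it in Python rather than C++: queue.Queue plays the role of the internally locked std::queue, and a blocking get() is the "sleep until signalled" that the gtkmm docs describe (in gtkmm, a Glib::Dispatcher would wake the GUI-thread consumer instead). Sensor names and readings are invented.

        import queue
        import threading
        import time

        data = queue.Queue()  # locked internally; safe for many producers

        def producer(sensor):
            for i in range(3):
                data.put((sensor, 20.0 + i))  # never blocks unless maxsize is set
                time.sleep(0.01)

        def consumer():
            for _ in range(6):                # 2 producers x 3 readings each
                sensor, reading = data.get()  # sleeps until an item arrives
                print(f"{sensor}: {reading}")

        threads = [threading.Thread(target=producer, args=(s,))
                   for s in ("ambient", "water")]
        threads.append(threading.Thread(target=consumer))
        for t in threads:
            t.start()
        for t in threads:
            t.join()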

  • NSArray vs. SQLite for Complex Queries on iPhone

    - by GingerBreadMane
    Developing for iPhone, I have a collection of points that I need to make complex queries on. For example: "How many points have a y-coordinate of 10?" and "Return all points with an x-coordinate between 3 and 5 and a y-coordinate of 7". Currently, I am just cycling through each element of an NSArray and checking to see if each element matches my query. It's a pain to write the queries, though. SQLite would be much nicer. I'm not sure which would be more efficient, though, since (to my understanding) a SQLite database resides on disk and not in memory. Would SQLite be as efficient or more efficient here? Or is there a better way to do it than these methods that I haven't thought of? I would need to perform the multiple queries with multiple sets of points thousands of times, so the best performance is important. (A sketch of the SQLite approach follows this entry.)

    Read the article
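    A sketch of the queries from the question, using Python's sqlite3 for brevity. One correction to the premise: SQLite need not reside on disk; opening ":memory:" gives a purely in-memory database, and the same option exists in the C API used on iPhone. The point data is invented.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE points (x REAL, y REAL)")
        db.execute("CREATE INDEX idx_y ON points (y)")  # speeds up the y lookups
        db.executemany("INSERT INTO points VALUES (?, ?)",
                       [(3.5, 7), (4.2, 7), (9.0, 10), (1.0, 10)])

        # "How many points have a y-coordinate of 10?"
        (count,) = db.execute("SELECT COUNT(*) FROM points WHERE y = 10").fetchone()

        # "Return all points with an x between 3 and 5 and a y of 7."
        rows = db.execute(
            "SELECT x, y FROM points WHERE x BETWEEN 3 AND 5 AND y = 7").fetchall()
        print(count, rows)  # 2 [(3.5, 7.0), (4.2, 7.0)]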
