Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 477/594

  • Speed up the loop operation in R

    - by Kay
    Hi, I have a big performance problem in R. I wrote a function that iterates over a data.frame object. It simply adds a new column to the data.frame and accumulates something (a simple operation). The data.frame has roughly 850,000 rows. My PC has now been working on it for about 10 hours and I have no idea about the runtime.

        dayloop2 <- function(temp) {
          for (i in 1:nrow(temp)) {
            temp[i, 10] <- i
            if (i > 1) {
              if ((temp[i, 6] == temp[i - 1, 6]) & (temp[i, 3] == temp[i - 1, 3])) {
                temp[i, 10] <- temp[i, 9] + temp[i - 1, 10]
              } else {
                temp[i, 10] <- temp[i, 9]
              }
            } else {
              temp[i, 10] <- temp[i, 9]
            }
          }
          names(temp)[names(temp) == "V10"] <- "Kumm."
          return(temp)
        }

    Any ideas how to speed up this operation?
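
    For comparison, a vectorised rewrite of the same accumulation (a sketch, untested against the real data; it assumes, as in the loop above, that columns 3 and 6 are the grouping keys and column 9 holds the value to accumulate):

        dayloop2_vec <- function(temp) {
          # TRUE where a row continues the run started by the previous row
          same_run <- c(FALSE, temp[-1, 6] == temp[-nrow(temp), 6] &
                               temp[-1, 3] == temp[-nrow(temp), 3])
          run_id <- cumsum(!same_run)               # new id whenever a run starts
          temp$Kumm. <- ave(temp[, 9], run_id, FUN = cumsum)
          temp
        }

    On 850,000 rows this avoids the per-iteration data.frame copying that makes row-by-row loops so slow in R.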

    Read the article

  • How would MVVM be for games?

    - by Benny Jobigan
    Particularly for 2D games, and particularly Silverlight/WPF games. If you think about it, you can divide a game object into its view (the graphic on the screen) and a view-model/model (the state, AI, and other data for the object). In Silverlight it seems common to make each object a user control, putting the model and view into a single object. I suppose the advantage of this is simplicity, but perhaps it's less clean or has some disadvantages in terms of the underlying "game engine". What are your thoughts on this matter? What are some advantages and disadvantages of using the MVVM pattern for game development? How about performance? All thoughts are welcome.
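
    For reference, a minimal sketch of the split the question describes, with the view-model exposing bindable state for a game object so the XAML view stays purely visual (the class and property names here are made up):

        using System.ComponentModel;

        public class EnemyViewModel : INotifyPropertyChanged
        {
            private double _x;
            public double X
            {
                get { return _x; }
                set { _x = value; OnPropertyChanged("X"); }   // view updates via binding
            }

            public event PropertyChangedEventHandler PropertyChanged;
            protected void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }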

    Read the article

  • Why does Java not have any destructor like C++?

    - by Abhishek Jain
    Java has its own garbage collection implementation, so it does not require a destructor like C++. This makes Java developers lazy about implementing memory management, and garbage collection is very expensive. We could still have a destructor alongside the garbage collector, in which the developer frees resources and so saves the garbage collector some work; this might improve application performance. Why does Java not provide any destructor-like mechanism? The developer has no control over the GC, but they can control and create objects, so why not also give them the ability to destroy objects?
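
    For context, Java's mechanism for deterministic cleanup is explicit resource release rather than destruction: a Closeable is released at a known point, while memory reclamation stays with the GC. A minimal sketch using try-with-resources (Java 7 and later):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class ReadFirstLine {
            public static void main(String[] args) throws IOException {
                // The reader is closed deterministically when the block exits,
                // whether normally or via an exception -- no destructor needed.
                try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
                    System.out.println(in.readLine());
                }
            }
        }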

    Read the article

  • Lock HTML select element, allow value to be sent on submit

    - by ILMV
    I have a select box (for a customer field) on a complex order form. Once the user starts to add lines to the order, they should not be allowed to change the customer select box (unless all lines are deleted). My immediate thought was to use the disabled attribute, but when the box is disabled the selected value is no longer passed to the target. When the problem arose a while ago, one of the other developers worked around it by looping through all the options and disabling all but the selected one; sure enough the value was passed to the target, and we've been using that ever since. Now I'm looking for a proper solution: I don't want to loop through all the options because our data is expanding and it's starting to introduce performance issues, and I'd prefer not to re-enable this (or all) of the elements when the submit button is hit. How can I lock the input whilst maintaining the selected option and passing its value to the target script? I would prefer a non-JavaScript solution if possible, but if needed we are running jQuery 1.4.2, so that could be used.
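
    One common workaround (a sketch, not taken from this thread; the field names are made up): keep the select disabled but mirror its value into a hidden input, since hidden inputs are always submitted.

        <select id="customer" disabled="disabled">
          <option value="42" selected="selected">Acme Ltd</option>
        </select>
        <input type="hidden" name="customer" id="customer_hidden" value="42" />

    With jQuery 1.4.2 the hidden field can be kept in sync at the moment the select is locked:

        $('#customer_hidden').val($('#customer').val());
        $('#customer').attr('disabled', 'disabled');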

    Read the article

  • Google Web Optimizer -- How long until winning combination?

    - by Django Reinhardt
    I've had an A/B test running in Google Web Optimizer for six weeks now, and there's still no end in sight. Google is still saying: "We have not gathered enough data yet to show any significant results. When we collect more data we should be able to show you a winning combination." Is there any way of telling how close Google is to making up its mind? (Does anyone know what algorithm it uses to decide whether there's been a "high confidence winner"?) According to the Google help documentation: "Sometimes we simply need more data to be able to reach a level of high confidence. A tested combination typically needs around 200 conversions for us to judge its performance with certainty." But all of our combinations have over 200 conversions at the moment:

        230 / 4061 (Original)
        223 / 3937 (Variation 1)
        205 / 3984 (Variation 2)
        205 / 4007 (Variation 3)

    How much longer is it going to have to run? Thanks for any help.
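
    To see why the test is taking so long, here is a rough two-proportion z-test between the original and Variation 2 (a sketch only; Google's actual algorithm is not public):

        from math import sqrt

        def z_score(c1, n1, c2, n2):
            p1, p2 = c1 / n1, c2 / n2
            p = (c1 + c2) / (n1 + n2)            # pooled conversion rate
            se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
            return (p1 - p2) / se

        print(z_score(230, 4061, 205, 3984))     # roughly 1.0

    A z-score of roughly 1.0 is well short of the ~1.96 needed for 95% confidence, so with conversion-rate differences this small the test can keep running for a long time.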

    Read the article

  • How do I temporarily monkey with a global module constant?

    - by Daniel
    Greetings, I want to tinker with the global memcache object, and I ran into the following problems: Cache is a constant, and Cache is a module. I only want to modify the behavior of Cache globally for a small section of code, for a possible major performance gain. Since Cache is a module, I can't re-assign it or encapsulate it. I would like to do this, deep in a controller method:

        code code code...
        old_cache = Cache
        Cache = MyCache.new
        code code code...
        Cache = old_cache
        code code code...

    However, since Cache is a constant, I'm forbidden to change it. Threading is not an issue at the moment. :) Would it be "good manners" for me to just alias_method the special code I need for a small section of code and then un-alias it again afterwards? That doesn't pass the smell test, IMHO. Does anyone have any ideas? TIA, -daniel
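
    One possible approach (a sketch, assuming Cache is defined at the top level): Ruby will let you swap a constant via remove_const/const_set, which also avoids the "already initialized constant" warning that plain reassignment triggers.

        def with_fake_cache(fake)
          real = Cache
          Object.send(:remove_const, :Cache)
          Object.const_set(:Cache, fake)
          yield
        ensure
          # always restore the real module, even if the block raises
          Object.send(:remove_const, :Cache)
          Object.const_set(:Cache, real)
        end

        with_fake_cache(MyCache.new) do
          # code that should see the substitute Cache
        end

    This is not thread-safe, but the question says threading isn't an issue at the moment.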

    Read the article

  • Interpreters: How much simplification?

    - by Ray
    In my interpreter, code like the following:

        x=(y+4)*z
        echo x

    parses and "optimizes" down to four single operations performed by the interpreter, pretty much assembly-like:

        add 4 to y
        multiply <last operation result> with z
        set x to <last operation result>
        echo x

    In modern interpreters (for example CPython, Ruby, PHP), how simplified are the "opcodes" that are, in the end, run by the interpreter? Could I achieve better performance by keeping the structures and commands for the interpreter more complex and high-level? That would surely be a lot harder, wouldn't it?
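
    CPython's answer is visible directly with the dis module: the expression compiles to a handful of stack-based opcodes, roughly one per operation, much like the assembly-like form above (exact opcode names vary between CPython versions):

        import dis

        def f(y, z):
            x = (y + 4) * z
            return x

        dis.dis(f)
        # Typical output (abbreviated):
        #   LOAD_FAST y, LOAD_CONST 4, BINARY_ADD, LOAD_FAST z,
        #   BINARY_MULTIPLY, STORE_FAST x, LOAD_FAST x, RETURN_VALUE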

    Read the article

  • How to scale an image (in data URI format) in JavaScript (real scaling, not using styling)

    - by 103067513055141045393
    We are capturing the visible tab in the Chrome browser (using the extensions API chrome.tabs.captureVisibleTab) and receiving the snapshot in the data URI scheme (a Base64-encoded string). Is there a JavaScript library that can be used to scale an image down to a certain size? Currently we are scaling it via CSS, but we pay a performance penalty because the pictures are mostly 100 times bigger than required. An additional concern is the load on the localStorage we use to save our snapshots. Does anyone know of a way to process these data-URI-formatted pictures and reduce their size by scaling them down? References: the data URI scheme on http://en.wikipedia.org/wiki/Data_URI_scheme ; the Chrome Extensions API on http://code.google.com/chrome/extensions/tabs.html ; the "Recently Closed Tabs" Chrome extension on http://code.google.com/p/recently-closed-tabs
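
    One way to do real scaling without a library (a sketch): draw the data-URI image onto a smaller canvas and read the scaled result back out as a new data URI.

        function scaleDataUri(dataUri, width, height, callback) {
          var img = new Image();
          img.onload = function () {
            var canvas = document.createElement('canvas');
            canvas.width = width;
            canvas.height = height;
            // drawImage resamples the full-size capture down to width x height
            canvas.getContext('2d').drawImage(img, 0, 0, width, height);
            callback(canvas.toDataURL('image/png'));
          };
          img.src = dataUri;
        }

    The smaller data URI can then be stored in localStorage instead of the full-size capture.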

    Read the article

  • Is loading a video in a browser multithreaded?

    - by mwilcox
    It's hard to know what is multithreaded in a browser and what isn't. It seems that while a video streams or progressively downloads it does not affect page performance, so my guess is that it is. Note that I'm using Flash video, but this is really about video in general. Any other tips on what else is multithreaded (image loads?) would also be helpful. I know JavaScript is not, and I thought Flash wasn't, but I heard somewhere that it may be (or that it could be done), though I think they were not well informed.

    Read the article

  • What goes between SQL Server and Client?

    - by worlds-apart89
    This question is an updated version of a previous question I asked on here. I am new to the client-server model with SQL Server as the relational database. I have read that public access to SQL Server is not secure. If direct access to the database is not good practice, then what kind of layer should be placed between the server and the client? Note that I have a desktop application that will serve as the client and a remote SQL Server database that will provide data to it. The client will input their username and password in order to see their data. I have heard of terms like VPN, ISA, TMG, Terminal Services, proxy servers, and so on. I need a fast and secure n-tier architecture. P.S. I have heard of putting web services in front of the database. Can I use WCF to retrieve, update and insert data? Would that be a good approach in terms of security and performance?
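
    A WCF service layer is one common answer to this. A minimal sketch of the kind of service contract that would sit between the desktop client and SQL Server (the operation and type names are made up):

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            IList<Order> GetOrdersForCustomer(int customerId);

            [OperationContract]
            void SaveOrder(Order order);
        }

        [DataContract]
        public class Order
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public decimal Total { get; set; }
        }

    The client then talks only to the service endpoint, and SQL Server is never exposed to the outside directly.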

    Read the article

  • Reading files from an embedded ZIP archive

    - by aix
    I have a ZIP archive that's embedded inside a larger file. I know the archive's starting offset within the larger file and its length. Are there any Java libraries that would enable me to directly read the files contained within the archive? I am thinking along the lines of ZipFile.getInputStream(). Unfortunately, ZipFile doesn't work for this use case since its constructors require a standalone ZIP file. For performance reasons, I cannot copy the ZIP archive into a separate file before opening it. Edit: just to be clear, I do have random access to the file.
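
    One option worth noting (a sketch, not from the original post): ZipInputStream, unlike ZipFile, works on any InputStream, so the stream can simply be positioned at the embedded archive's offset first. The caveat is that it walks the local entry headers sequentially rather than using the central directory.

        import java.io.*;
        import java.util.zip.*;

        public class EmbeddedZipReader {
            public static void listEntries(File container, long offset) throws IOException {
                try (FileInputStream fis = new FileInputStream(container)) {
                    long skipped = 0;                      // position at the embedded archive
                    while (skipped < offset) {
                        skipped += fis.skip(offset - skipped);
                    }
                    try (ZipInputStream zis = new ZipInputStream(new BufferedInputStream(fis))) {
                        for (ZipEntry e; (e = zis.getNextEntry()) != null; ) {
                            System.out.println(e.getName() + " (" + e.getSize() + " bytes)");
                        }
                    }
                }
            }
        }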

    Read the article

  • std::for_each on a member function with 1 argument

    - by Person
    I'm wondering how to implement what is stated in the title. I've tried something like:

        std::for_each( a.begin(), a.end(), std::mem_fun_ref( &myClass::someFunc ) )

    but I get an error saying that the "term" (I'm assuming it means the third argument) doesn't evaluate to a function taking one argument, even though someFunc does take one argument: the type of the objects stored in a. I'm wondering if what I'm trying to do is possible using the standard library (I know I can do it easily using Boost). P.S. Does using for_each and mem_fun_ref have any performance implications compared to just iterating through a manually and passing each object to someFunc?
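
    For reference, a sketch of the two usual fixes (the element type and names here are made up): mem_fun_ref(&myClass::someFunc) produces a functor expecting (object, argument), so the object has to be bound in before for_each can call it with a single element.

        #include <algorithm>
        #include <functional>
        #include <vector>

        struct myClass {
            void someFunc(int value) { /* ... */ }
        };

        int main() {
            std::vector<int> a;
            myClass obj;

            // C++98/03: bind the object as the first argument (removed in C++17)
            std::for_each(a.begin(), a.end(),
                          std::bind1st(std::mem_fun(&myClass::someFunc), &obj));

            // C++11 and later: a lambda does the same thing more readably
            std::for_each(a.begin(), a.end(),
                          [&obj](int v) { obj.someFunc(v); });
        }

    Performance-wise, both forms typically inline to the same code as a hand-written loop.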

    Read the article

  • Possible to InvalidateVisual() on a given region instead of entire WPF control?

    - by Scott Bilas
    I have a complex WPF control that draws a lot of primitives in its OnRender (it's sort of like a map). When a small portion of it changes, I'd like to re-issue render commands only for the affected elements, instead of running the entire OnRender over again. While I'm fine with my OnRender function's performance on a resize or the like, it's not fast enough for mouse-hover-based highlighting of primitives. Currently the only way I know of to force a screen update is to call InvalidateVisual(), and there is no way to pass in a dirty rect region to invalidate. Is the lowest granularity of WPF screen composition the UI element? Will I need to render my primitives into an intermediate target and then have that call InvalidateVisual() to update the screen?
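
    One way around this granularity limit (a sketch, not a drop-in answer): host each primitive in its own DrawingVisual and re-open only the one that changed, so a hover highlight redraws a single visual rather than the whole control.

        using System.Windows;
        using System.Windows.Media;

        public class MapCanvas : FrameworkElement
        {
            private readonly VisualCollection _visuals;

            public MapCanvas()
            {
                _visuals = new VisualCollection(this);   // one DrawingVisual per primitive
            }

            protected override int VisualChildrenCount
            {
                get { return _visuals.Count; }
            }

            protected override Visual GetVisualChild(int index)
            {
                return _visuals[index];
            }

            // Redraw just one primitive, e.g. when the mouse hovers over it.
            public void RedrawPrimitive(int index, Pen pen, Rect bounds)
            {
                var visual = (DrawingVisual)_visuals[index];
                using (DrawingContext dc = visual.RenderOpen())
                {
                    dc.DrawRectangle(null, pen, bounds);
                }
            }
        }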

    Read the article

  • Database indexes - what should they be

    - by WebweaverD
    Most of my database tables have a clear unique index through which lookups are done 90% of the time, but I am a bit unsure about this one. I have a table which keeps track of user rating totals for items in my database. I now want to add another table to track individual ratings, with an IP address column to make sure no one can rate something twice. Since I can see this becoming a big, high-use table, it is important to optimize it correctly. (It's a MySQL table.) The table will have the following fields:

        rating_id    (always set, unique)
        item_id      (always set, not unique)
        user_id      (optional, not unique)
        ip_address   (always set, not unique)
        rating_value (always set, not unique)
        has_review   (bool)

    I envision 90% of the queries going something like this: when a user rates something, select where item_id = x and ip_address = y and, if no rows come back, insert the rating; on the user account pages, select where ip_address = x or username = y. None of the fields searched on are unique. Can I still use them as indexes (for example item_id and ip_address)? Can I have two indexes, and will this still improve performance over a non-indexed table?
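
    Non-unique columns can certainly be indexed; what matters is matching the indexes to the query shapes above. A sketch of how the table might be declared (the column types are assumptions):

        CREATE TABLE item_ratings (
            rating_id    INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            item_id      INT UNSIGNED NOT NULL,
            user_id      INT UNSIGNED NULL,
            ip_address   VARCHAR(45)  NOT NULL,
            rating_value TINYINT      NOT NULL,
            has_review   BOOLEAN      NOT NULL DEFAULT FALSE,
            KEY idx_item_ip (item_id, ip_address),   -- "has this IP already rated this item?"
            KEY idx_ip      (ip_address),            -- account-page lookups by IP
            KEY idx_user    (user_id)                -- account-page lookups by user
        );

    Multiple non-unique indexes on one table are fine, and either of the two lookup patterns above would use one of them rather than scanning the whole table.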

    Read the article

  • Is there such a thing as too many tables?

    - by Stacey
    I've been searching Stack Overflow for about an hour now and couldn't find any related topics, so I apologize if this is a duplicate question. My inquiry is this: is there a point at which there are too many tables in a database, even if the structure is well organized, thought out, and perfectly facilitates the design intent? I have a database that is quickly approaching 40 tables: about 10 main ones and over 30 ancillary tables (junction tables, 'enumeration' tables, etc). Am I just a bad developer, or should I be trying something different? It seems like a lot to me, and I'm really worried about how it will impact the performance of the project. I have done a lot of condensing where possible and grouped similar things where possible. The database is built in MS SQL 2008.

    Read the article

  • Hibernate criteria with projection not performing query for @OneToMany mapping

    - by Josh
    I have a domain object, Expense, that has a field called initialFields. It's annotated as follows:

        @OneToMany(fetch = FetchType.EAGER, cascade = { CascadeType.ALL }, orphanRemoval = true)
        @JoinTable(blah blah)
        private final List<Field> initialFields;

    Now I'm trying to use Projections in order to pull only certain fields for performance reasons, but when I do so the initialFields field is always null. It's the only @OneToMany field, and the only field I am trying to retrieve with the projection, that behaves this way. If I use a regular HQL query, initialFields is populated appropriately, but of course then I can't limit the fields. Has anyone ever seen anything like this?

    Read the article

  • Surface Detection in 2d Game?

    - by GamiShini
    I'm working on a 2D platform game, and I was wondering what is the best (performance-wise) way to implement surface (collision) detection. So far I'm thinking of constructing a list of level objects, each built from a list of lines, and drawing tiles along the lines ( http://img375.imageshack.us/img375/1704/lines.png ). I'm thinking every object holds the ID of the surface it walks on, in order to easily manipulate its y position while walking up/downhill. Something like this:

        // Player/MovableObject class
        MoveLeft()
        {
            this.Position.Y = Helper.GetSurfaceById(this.SurfaceId).GetYWhenXIs(this.Position.X)
        }

    So the logic I use to detect "dropping onto / walking on a surface" is a simple point (player's lower legs)-touches-line (surface) check, with some safety approximation, let's say 1-2 pixels over the line. Is this approach OK? I've been having difficulty trying to find reading material for this problem, so feel free to drop links/advice.

    Read the article

  • P/Invoke or C++/CLI for wrapping a C library

    - by Ian G
    I have a moderately sized (40-odd function) C API that needs to be called from a C# project. The functions logically break up to form a few classes that will be the API presented to the rest of the project. Are there any objective reasons to prefer P/Invoke or C++/CLI for the interoperability layer underneath that API, in terms of robustness, maintainability, deployment, and so on? Issues I could think of that might be problematic, but aren't: C++/CLI will require a separate assembly, whereas the P/Invoke classes can live in the main assembly (we've already got multiple assemblies and there'll be the C DLLs anyway, so this is not a major issue); performance doesn't seem to differ noticeably between the two methods. Issues that I'm not sure about: my feeling is that C++/CLI will be easier to debug if there's an interop problem; is this true? Language familiarity: enough people here know C# and C++, but knowledge of the details of C++/CLI is rarer. Anything else?
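
    For a sense of the P/Invoke side, each of the 40-odd functions would need a declaration along these lines (a sketch; the DLL and function names are made up):

        using System.Runtime.InteropServices;

        internal static class NativeMethods
        {
            // marshals the managed int/double across to the native C function
            [DllImport("mylib.dll", CallingConvention = CallingConvention.Cdecl)]
            internal static extern int my_compute(int input, out double result);
        }

    With C++/CLI the equivalent wrapper is written in C++ and calls the C functions directly, which is part of why it tends to be easier to step through in a mixed-mode debugger.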

    Read the article

  • LdapErr: DSID-0C0903AA, data 52e: authenticating against AD '08 with pam_ldap

    - by Stefan M
    I have full admin access to the AD '08 server I'm trying to authenticate against. The error code means invalid credentials, but I wish this was as simple as me typing in the wrong password. First of all, I have a working Apache mod_ldap configuration against the same domain:

        AuthType basic
        AuthName "MYDOMAIN"
        AuthBasicProvider ldap
        AuthLDAPUrl "ldap://10.220.100.10/OU=Companies,MYCOMPANY,DC=southit,DC=inet?sAMAccountName?sub?(objectClass=user)"
        AuthLDAPBindDN svc_webaccess_auth
        AuthLDAPBindPassword mySvcWebAccessPassword
        Require ldap-group CN=Service_WebAccess,OU=Groups,OU=MYCOMPANY,DC=southit,DC=inet

    I'm showing this because it works without any Kerberos, which so many other guides out there recommend for system authentication to AD. Now I want to translate this into pam_ldap.conf for use with OpenSSH. The /etc/pam.d/common-auth part is simple; this line is processed before any other:

        auth sufficient pam_ldap.so debug

    I believe the real issue is configuring pam_ldap.conf:

        host 10.220.100.10
        base OU=Companies,MYCOMPANY,DC=southit,DC=inet
        ldap_version 3
        binddn svc_webaccess_auth
        bindpw mySvcWebAccessPassword
        scope sub
        timelimit 30
        pam_filter objectclass=User
        nss_map_attribute uid sAMAccountName
        pam_login_attribute sAMAccountName
        pam_password ad

    Now, I've been monitoring LDAP traffic on the AD host using Wireshark. I've captured a successful session from Apache's mod_ldap and compared it to a failed session from pam_ldap. The first bindRequest is a success using the svc_webaccess_auth account, and the searchRequest is a success and returns a result of 1. The last bindRequest using my user is a failure and returns the above error code. Everything looks identical except for this one line in the filter of the searchRequest; here is mod_ldap:

        Filter: (&(objectClass=user)(sAMAccountName=ivasta))

    and here is pam_ldap:

        Filter: (&(&(objectclass=User)(objectclass=User))(sAMAccountName=ivasta))

    My user is named ivasta. However, the searchRequest does not return failure; it does return 1 result. I've also tried this with ldapsearch on the CLI. It's the bindRequest that follows the searchRequest that fails with the above error code 52e. Here is the failure message of the final bindRequest:

        resultcode: invalidcredentials (49)
        80090308: LdapErr: DSID-0C0903AA, comment: AcceptSecurityContext error, data 52e, v1772

    This should mean invalid password, but I've tried with other users and with very simple passwords. Does anyone recognize this from their own struggles with pam_ldap and AD? Edit: Worth noting is that I've also tried pam_password crypt, and pam_filter sAMAccountName=User, because this worked when using ldapsearch:

        ldapsearch -LLL -h 10.220.100.10 -x -b "ou=Users,ou=mycompany,dc=southit,dc=inet" -v -s sub -D svc_webaccess_auth -W '(sAMAccountName=ivasta)'

    This works using the svc_webaccess_auth account password. That account has scan access to that OU for use with Apache's mod_ldap.

    Read the article

  • Database design suggestion

    - by Bharanikumar
    Hi, I am going to start a new travel site, and I want some advice from the gurus regarding database design. The core idea is booking taxis online, so I would like to implement a lot of jQuery/Ajax in the site; the main requirement is that the site must run very fast, safely and securely. In MySQL, which engine should I use, MyISAM or InnoDB? Which is the better choice for Ajax-heavy work from a speed, safety, security and performance point of view? This is my demo site; I implemented some Ajax there already (see my-url). On that site, choose a postcode in the taxi-from tab: it asks you for a value, enter nw7, and see how long it takes to respond. Sometimes there is no response and the system hangs or goes idle. Also try the diversion option: select "No diversion", and you will get a list of textboxes; enter nw3 and hit the search icon. Only after about 80 seconds do you get a response from the DB, which is far too slow. The database currently uses MyISAM, with no indexing, no full-text indexes and no constraints. So please advise me which engine I should choose, MyISAM or InnoDB. Thanks, Bharanikumar

    Read the article

  • Copying data from STDOUT to a remote machine using SFTP

    - by freddie
    In order to back up large database partitions to a remote machine using SFTP, I'd like to use the database's dump command and send the output directly to the remote location over SFTP. This is useful when dumping large data sets where there isn't enough local disk space to create the backup file first and then copy it to a remote location. I've tried using Python + Paramiko, which provides this functionality, but the performance is much worse than using the native OpenSSH/sftp binary to transfer files. Does anyone have any idea how to do this, either with the native sftp client on Linux or with some library like Paramiko (but one that performs close to the native sftp client)?
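
    A common workaround (a sketch; the host names and paths are made up) is to pipe the dump straight over ssh rather than sftp, so nothing ever touches the local disk:

        # stream the dump, compress it in flight, and write it on the remote side
        mysqldump --single-transaction mydb \
          | gzip \
          | ssh backup@remote.example.com 'cat > /backups/mydb-$(date +%F).sql.gz'

    This uses the native OpenSSH binary for the transfer, so it avoids the Paramiko performance problem, although strictly speaking it is ssh rather than SFTP.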

    Read the article

  • How to organise a php based website

    - by bsandrabr
    I am putting my PHP/MySQL website up, and this is my scenario: the users are grouped into sites, each site with its own unique database, with about 40 users per site. The two options I'm trying to decide between are: (1) have one central website running the PHP and directing users off to their own database, or (2) use subdomains for each user, each with their own PHP in htdocs. I don't even know if option 2 is possible or sensible, but if it were, would it make any difference to performance, given that they are all run by the same server? Any other ideas or advice would be much appreciated, as I want to organise this the best way from the start.

    Read the article

  • asp.net membership provider api. usability. best-practice

    - by Andrew Florko
    Hello everybody. The Membership/Role/Profile provider APIs appeared in the early days of ASP.NET. Nearly every time I use them, I find I can't live with the standard API and have to add extra functionality (for sorting, retrieving, etc.). I also often have to use a different database structure (with foreign keys to some tables, for example) or think about performance improvements. These considerations forced the teams I took part in to build their own providers, but I can't stand implementing the provider APIs (because we don't use at least 70% of the standard functionality). Moreover, providers built for particular projects were rarely reused. I wonder if someone has found a swiss-army-knife implementation of these early-days APIs that is useful for any kind of project without refactoring. Or do you write your own implementations of the early-days APIs? Or perhaps you abandon the standard architecture and use lightweight implementations? Thank you in advance.

    Read the article

  • "Out of Memory" error in Lotus Notes automation from VBA

    - by PowerUser
    This VBA function sporadically fails with the Notes automation error "Run-Time Error '7' Out of Memory". Naturally, when I try to reproduce it manually, everything runs fine.

        Function ToGMT(ByVal X As Date) As Date
            Static NtSession As NotesSession
            If NtSession Is Nothing Then
                Set NtSession = New NotesSession
                NtSession.Initialize
            End If
            '(do stuff)
        End Function

    To put this in context, the function is being called by an Access query, 3-4 times per record, over 20,000 records. For performance reasons, the NotesSession has been made static. Any ideas why it sporadically gives an out-of-memory error? (Also, I'm initiating the NotesSession just so I can convert a datetime to GMT using Lotus's rules. If you know a better way, I'm listening.)

    Read the article

  • Will GTK's Pango and Cairo work well in Cocoa and MFC applications?

    - by Lothar
    I'm writing a GUI program and have decided to go native on all platforms. But for all the drawing I need to do myself, I would like to use the same drawing routines everywhere, because font and Unicode handling is so difficult and complex. Do you see any downsides to using Pango/Cairo? On Mac OS X I haven't succeeded in installing Pango/Cairo yet, which looks like a bad omen. I would also like to hear about the performance penalty. (The first time I looked at Pango I thought: yes, that's the reason why software keeps getting slower despite better hardware.)

    Read the article
