Search Results

Search found 31421 results on 1257 pages for 'software performance'.

Page 671/1257

  • Best practices for implementing an Access (2007) application

    - by waanders
    Hello. Where can I find an overview (website) of best practices for implementing an Access (2007) application (with an FE/BE architecture) with regard to security, performance and maintainability? I know about designing tables, queries, forms and so on, and I'm a reasonable programmer, but I'm wondering what the "best" and most efficient way is to implement my "application". Thanks in advance for your help.

    Read the article

  • Choosing between WPF and Silverlight

    - by user43498
    Hi, we have an existing web application developed using ASP.NET/Ajax. We are planning to move it to either WPF or Silverlight. Can someone please compare these two technologies with respect to productivity, performance, maintainability, trade-offs, their pros and cons, etc.? Thanks for reading.

    Read the article

  • Is ArrayList.size() method cached?

    - by Peterdk
    I was wondering: is the size() method that you can call on an existing ArrayList<T> cached? Or is it preferable in performance-critical code to just store the result of size() in a local int? I would expect that it is indeed cached when you don't add/remove items between calls to size(). Am I right?
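    A small note for context: in the standard JDK implementation, ArrayList.size() simply returns a stored int field, so each call is O(1) whether or not you hoist it into a local. A minimal sketch of the two loop styles (illustration only, not a benchmark):

      // Sketch: java.util.ArrayList.size() returns a stored int field, so both
      // loops below do O(1) work per size() call. The hoisted variant is shown
      // only to illustrate the question; it is safe only when the list is not
      // modified inside the loop.
      import java.util.ArrayList;
      import java.util.List;

      public class SizeCallSketch {
          public static void main(String[] args) {
              List<Integer> items = new ArrayList<>();
              for (int i = 0; i < 1_000_000; i++) {
                  items.add(i);
              }

              long sumA = 0;
              for (int i = 0; i < items.size(); i++) {   // size() called every iteration
                  sumA += items.get(i);
              }

              final int n = items.size();                // size() hoisted into a local
              long sumB = 0;
              for (int i = 0; i < n; i++) {
                  sumB += items.get(i);
              }

              System.out.println(sumA == sumB);          // true
          }
      }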

    Read the article

  • Update thousands of records in a DataSet to SQL Server

    - by MSIL
    I have half a million records in a DataSet, of which 50,000 are updated. Now I need to commit the updated records back to the SQL Server 2005 database. What is the best and most efficient way to do this, considering that such updates could be frequent (concurrency is not an issue, but performance is)?

    Read the article

  • Finding contained bordered regions from Excel imports.

    - by dmaruca
    I am importing massive amounts of data from Excel that have various table layouts. I have good enough table detection routines and merged-cell handling, but I am running into a problem when it comes to dealing with borders. Namely, performance. The bordered regions in some of these files have meaning.

    Data Setup: I am importing directly from Office Open XML using VB6 and MSXML. The data is parsed from the XML into a dictionary of cell data. This works wonderfully and is just as fast as using docmd.transferspreadsheet in Access, but returns much better results. Each cell contains a pointer to a style element which contains a pointer to a border element that defines the visibility and weight of each border (this is how the data is structured inside OpenXML, also).

    Challenge: What I'm trying to do is find every region that is enclosed inside borders, and create a list of cells that are inside that region.

    What I have done: I initially created a BFS (breadth-first search) fill routine to find these areas. This works wonderfully and fast for "normal" sized spreadsheets, but gets way too slow for imports into the thousands of rows. One problem is that a border in Excel could be stored in the cell you are checking or as the opposing border in the adjacent cell. That's OK, I can consolidate that data on import to reduce the number of checks needed. One thing I thought about doing is to create a separate graph that outlines the cells, using the borders as my edges, and then use a graph algorithm to find regions that way, but I'm having trouble figuring out how to implement the algorithm. I've used Dijkstra in the past and thought I could do something similar with this. So I can span out using no endpoint to search the entire graph, and if I encounter a closed node I know that I just found an enclosed region, but how can I know if the route I've found is the optimal one? I guess I could flag that node and run a separate check from the found closed node to the previous node, ignoring that one edge. This could work, but wouldn't be much better performance-wise on dense graphs. Can anyone else suggest a better method? Thanks for taking the time to read this.
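    One alternative worth sketching (my own illustration in Java, since the algorithm itself is language-agnostic; the original code is VB6): instead of repeated flood fills, treat each cell as a node in a disjoint-set (union-find) structure and union neighbouring cells whenever no border separates them. A single pass over the grid groups the cells in near-linear time; components that touch the edge of the used range are the "outside", and every remaining component is an enclosed bordered region. The two boolean border arrays below are a hypothetical stand-in for the consolidated border data described above.

      // Union-find over grid cells: cells end up in the same set when no border
      // separates them. Components touching the sheet edge are the "outside" and
      // can be discarded afterwards; the rest are enclosed regions.
      public class RegionFinder {
          private final int[] parent;
          private final int rows, cols;

          public RegionFinder(int rows, int cols) {
              this.rows = rows;
              this.cols = cols;
              parent = new int[rows * cols];
              for (int i = 0; i < parent.length; i++) parent[i] = i;
          }

          private int find(int x) {
              while (parent[x] != x) {
                  parent[x] = parent[parent[x]]; // path halving keeps trees flat
                  x = parent[x];
              }
              return x;
          }

          private void union(int a, int b) {
              parent[find(a)] = find(b);
          }

          // borderRight[r][c] is true when a border separates (r,c) from (r,c+1);
          // borderBelow[r][c] is true when a border separates (r,c) from (r+1,c).
          public void merge(boolean[][] borderRight, boolean[][] borderBelow) {
              for (int r = 0; r < rows; r++) {
                  for (int c = 0; c < cols; c++) {
                      int id = r * cols + c;
                      if (c + 1 < cols && !borderRight[r][c]) union(id, id + 1);
                      if (r + 1 < rows && !borderBelow[r][c]) union(id, id + cols);
                  }
              }
          }

          public int regionOf(int r, int c) {
              return find(r * cols + c);
          }
      }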

    Read the article

  • find and replace tokens in javascript

    - by Sourabh
    Hello, I have to do something like this:

      string = "this is a good example to show"
      search = array {this, good, show}

    Find the search words and replace them with numbered tokens:

      string = "{1} is a {2} example to {3}" (order is intact)

    The string will undergo some processing, and then:

      string = "{1} is a {2} numbers to {3}" (order is intact)

    The tokens are then replaced back into the string, so that the string becomes:

      string = "this is a good number to show"

    How should this be implemented so that the process is done with high performance? Thanks in advance.
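    A compact sketch of the two passes (written in Java purely to illustrate the idea; in JavaScript the same approach maps onto String.replace with a regex and a callback). A single alternation regex finds all the search words in one scan, remembers them in order, and substitutes numbered tokens; a second pass puts the remembered words back:

      import java.util.ArrayList;
      import java.util.List;
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class TokenSwap {
          public static void main(String[] args) {
              String input = "this is a good example to show";
              String[] search = {"this", "good", "show"};

              // Pass 1: one alternation regex finds every search word in a single
              // scan; each hit is remembered and replaced by its numbered token.
              Pattern p = Pattern.compile("\\b(" + String.join("|", search) + ")\\b");
              Matcher m = p.matcher(input);
              List<String> found = new ArrayList<>();
              StringBuffer tokenized = new StringBuffer();
              while (m.find()) {
                  found.add(m.group(1));
                  m.appendReplacement(tokenized, "{" + found.size() + "}");
              }
              m.appendTail(tokenized);
              // tokenized is now "{1} is a {2} example to {3}"

              // ... intermediate processing of the tokenized string happens here ...

              // Pass 2: swap the numbered tokens back for the remembered words.
              String restored = tokenized.toString();
              for (int i = 0; i < found.size(); i++) {
                  restored = restored.replace("{" + (i + 1) + "}", found.get(i));
              }
              System.out.println(restored);   // "this is a good example to show"
          }
      }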

    Read the article

  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all, my company develops a CDN / web-hosting solution. We have a middleware that serves as the business logic layer and exposes a web service for the front-end. I am looking for a clean solution to feature management - there are uncertainties and ugly workarounds/solutions in the software about which the devs would say "when it happens or is broken, we will fix it". For example, here are some features that a web publisher can have: a sites limit, a bandwidth limit, and an SSL feature plus SSL configuration per site.

    If we downgrade a web publisher who has 10 sites down to 5 sites, we can choose not to suspend the remaining 5 sites, or we can prompt for suspension before the downgrade. For the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded it, we suspend his account. For the SSL feature, every SSL configuration is tied to a site, so what should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled?

    So as you can see, there are many different situations and different ways of handling them. I can build a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade; or a system that ignores the impacts and just upgrades/downgrades (bad); or a system designed so that the client code needs to be aware of the complex feature matrix (or I can expose a helper to the client code to check if a feature is not DEFUNCT). There may be other approaches I am still thinking through, but I am puzzled. How would you tackle this issue, and are there any recommended patterns, books or software that you think I can refer to? Appreciate your help.

    Read the article

  • Explain "Leader/Follower" Pattern

    - by Alex B
    I can't seem to find a good explanation of the "Leader/Follower" pattern. All the explanations either simply refer to it in the context of some problem, or are completely meaningless. Can anyone explain to me the mechanics of how this pattern works, and why and how it improves performance over more traditional asynchronous IO models? Examples and links to diagrams are appreciated too.
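    A rough illustration of the mechanics (my own minimal sketch, not taken from the pattern's original description): all worker threads share one event source, but only one of them, the leader, waits on it at a time. When an event arrives, the leader first promotes a follower to become the new leader and then processes the event itself on the same thread. Because the thread that detects the event is the thread that handles it, there is no hand-off of the event between threads, which is where the pattern claims its advantage over a dispatcher-plus-worker-queue design. A BlockingQueue stands in here for the real handle set (e.g. a socket select/poll):

      import java.util.concurrent.BlockingQueue;
      import java.util.concurrent.LinkedBlockingQueue;
      import java.util.concurrent.locks.Condition;
      import java.util.concurrent.locks.ReentrantLock;

      public class LeaderFollowersPool {
          private final BlockingQueue<Runnable> events = new LinkedBlockingQueue<>();
          private final ReentrantLock lock = new ReentrantLock();
          private final Condition noLeader = lock.newCondition();
          private boolean leaderPresent = false;

          public void submit(Runnable event) {
              events.add(event); // stands in for "an I/O event became ready"
          }

          public void workerLoop() throws InterruptedException {
              while (true) {
                  lock.lock();                       // become the leader, waiting if
                  try {                              // another thread already is
                      while (leaderPresent) {
                          noLeader.await();
                      }
                      leaderPresent = true;
                  } finally {
                      lock.unlock();
                  }

                  Runnable event = events.take();    // as leader, wait for an event

                  lock.lock();                       // step down and promote a follower
                  try {                              // before doing any work
                      leaderPresent = false;
                      noLeader.signal();
                  } finally {
                      lock.unlock();
                  }

                  event.run();                       // process the event on this thread
              }
          }

          public static void main(String[] args) throws InterruptedException {
              LeaderFollowersPool pool = new LeaderFollowersPool();
              for (int i = 0; i < 4; i++) {
                  Thread t = new Thread(() -> {
                      try { pool.workerLoop(); } catch (InterruptedException ignored) { }
                  });
                  t.setDaemon(true);   // let the demo exit once main returns
                  t.start();
              }
              for (int i = 0; i < 10; i++) {
                  final int n = i;
                  pool.submit(() -> System.out.println("event " + n + " handled on "
                          + Thread.currentThread().getName()));
              }
              Thread.sleep(500);       // give the workers time to drain the queue
          }
      }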

    Read the article

  • Where to go from PHP?

    - by dabito
    I'm a seasoned PHP programmer and I really like the way it works and find it very fun to work with (performance could be improved and some functions renamed, but nothing too serious). However, I took a Java seminar and now I'm very interested in using GWT for upcoming projects, although I think the learning curve can be steep. Should I really go through with this change (PHP to Java)? Where should I begin?

    Read the article

  • Good TFS Hosting Provider

    - by JonnyD
    I'm looking for a good 3rd party host for Team Foundation Server. Have any of you had good or bad experiences in the past? I will be working on a small .NET project with several other guys in different locations. Are there any performance problems or any other "gotchas" with 3rd party hosting?

    Read the article

  • OpenCL or CUDA: Which way to go?

    - by holydiver
    I'm investigating ways of using the GPU to process streaming data. I have two choices but can't decide which way to go. My criteria are as below:

    - Ease of use (good API)
    - Community and documentation
    - Performance
    - Future

    I'll code in C and C++.

    Read the article

  • Rails callback for the equivalent of "after_new"

    - by Joe Cairns
    Right now I can't find a way to generate a callback between lines 1 and 2 here:

      f = Foo.new
      f.some_call
      f.save!

    Is there any way to simulate what would effectively be an after_new callback? Right now I'm using after_initialize, but there are potential performance problems with using that since it fires for a lot of different events.

    Read the article

  • Correct way to read a configuration file and use configuration values

    - by Harza
    I'm reading an application's .config file using the .NET ConfigurationManager, like it should be done, but... which is the preferred option:

    1. Read the config and store instances of (built-in or custom) ConfigurationElement for later use
    2. Read the config and store only the needed values (but not instances of ConfigurationElement classes) for later use
    3. Read the ConfigurationElement from the config whenever configuration values are needed

    These two things are on my mind: the performance impact in case 3 from reading the config all the time, and problems occurring in case 1 when using cached instances of ConfigurationElement.

    Read the article

  • SQL Server Update Group by

    - by Gerardo Abdo
    I'm trying to execute this on MS SQL but it returns an error right at the GROUP BY line:

      update #temp
      Set Dos = Count(1)
      From Temp_Table2010 s
      where Id = s.Total and s.total in (Select Id from #temp)
      group by s.Total

    Does anyone know how I can solve this problem while keeping good performance?

    Read the article

  • Planning and coping with deadlines in SCRUM

    - by John
    From Wikipedia: During each "sprint", typically a two to four week period (with the length being decided by the team), the team creates a potentially shippable product increment (for example, working and tested software). The set of features that go into a sprint come from the product "backlog," which is a prioritized set of high level requirements of work to be done. Which backlog items go into the sprint is determined during the sprint planning meeting. During this meeting, the Product Owner informs the team of the items in the product backlog that he or she wants completed. The team then determines how much of this they can commit to complete during the next sprint. During a sprint, no one is allowed to change the sprint backlog, which means that the requirements are frozen for that sprint. After a sprint is completed, the team demonstrates the use of the software.

    I was reading this and two questions immediately popped into my head:

    1) If a sprint is only a couple of weeks, decided in a single meeting, how can you accurately plan what can be achieved? High-level tasks can't be estimated accurately in my experience, and can easily take double what seems reasonable. As a developer, I hate being pushed into committing to what I can deliver in the next month based on a set of customer requirements; this goes against everything I know about generating reliable estimates, rather than having to roughly estimate and then double it!

    2) Since the requirements are supposed to be locked and a deliverable product available at the end, what happens when something does take twice as long? What if a feature is only half done at the end of the sprint?

    The wiki article goes on to talk about sprint planning, where things are broken down into much smaller tasks for estimation (<1 day), but this is after the sprint features are already planned and the release agreed, isn't it? Kind of like a salesman promising something without consulting the developers.

    Read the article

  • Data structure for an ordered set with many defined subsets; retrieve subsets in same order

    - by Aaron
    I'm looking for an efficient way of storing an ordered list/set of items where:

    - The order of items in the master set changes rapidly (subsets maintain the master set's order)
    - Many subsets can be defined and retrieved
    - The number of members in the master set grows rapidly
    - Members are added to and removed from subsets frequently
    - It must allow for somewhat efficient merging of any number of subsets

    Performance would ideally be biased toward retrieval of the first N items of any subset (or merged subset), and storage would be in-memory (and maybe eventually persistent on disk).
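    One simple shape for this (a sketch under my own assumptions, not a tuned design): keep the master order in a single list and store each subset purely as a set of member references. A subset, or any union of subsets, is then read back in master order by scanning the master list and filtering on membership, which makes "first N items of a merged subset" an early-exiting scan and makes reordering the master cheap. The obvious trade-off is that a full subset retrieval touches the whole master list:

      import java.util.ArrayList;
      import java.util.HashSet;
      import java.util.List;
      import java.util.Set;

      public class OrderedSubsets<T> {
          private final List<T> masterOrder = new ArrayList<>();   // reordered freely
          private final List<Set<T>> subsets = new ArrayList<>();  // membership only

          public int defineSubset() {
              subsets.add(new HashSet<>());
              return subsets.size() - 1;
          }

          public void addToMaster(T item)               { masterOrder.add(item); }
          public void addToSubset(int id, T item)       { subsets.get(id).add(item); }
          public void removeFromSubset(int id, T item)  { subsets.get(id).remove(item); }

          // First n members of the union of the given subsets, in master order.
          public List<T> firstN(int n, int... subsetIds) {
              Set<T> merged = new HashSet<>();
              for (int id : subsetIds) merged.addAll(subsets.get(id));
              List<T> out = new ArrayList<>(n);
              for (T item : masterOrder) {
                  if (merged.contains(item)) {
                      out.add(item);
                      if (out.size() == n) break;   // early exit favours "first N" reads
                  }
              }
              return out;
          }
      }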

    Read the article

  • I Cannot retrieve ARPINSTALLLOCATION so we know where to install a new version [WIX]

    - by Birkoff
    I am trying to retrieve the ARPINSTALLLOCATION during the installation of a Major Upgrade version of the software. Following this info I managed to set the ARPINSTALLLOCATION to the custom path. However, retrieving it again isn't working. I've tried many things over the past days but it keeps defaulting back to the default installation path instead of the custom one.

      <InstallUISequence>
        <AppSearch After="FindRelatedProducts"/>
        ...
      </InstallUISequence>

      <Property Id="WIXUI_INSTALLDIR" Value="APPROOTDIRECTORY">
        <RegistrySearch Id="FindInstallLocation"
                        Root="HKLM"
                        Key="Software\Microsoft\Windows\CurrentVersion\Uninstall\[OLDERVERSIONBEINGUPGRADED]"
                        Name="InstallLocation"
                        Type="raw" />
      </Property>

    In the custom WixUI_InstallDir UI I have this in the CustomInstallDirDlg:

      <Control Id="Folder" Type="PathEdit" X="20" Y="90" Width="260" Height="18" Property="WIXUI_INSTALLDIR" Indirect="yes" />

    The alternative install path is in the registry, but it isn't retrieved and shown in the control. What am I doing wrong here? -Birkoff

    Read the article

  • What is the best way to create a continuously looping background in the iPhone SDK?

    - by catpad
    What is the best way to create a continuously looping background using the iPhone SDK, so that it seems the foreground object is in perpetual motion? I have a background image which I want to move continuously at a given speed from right to left, and seamlessly start displaying the beginning of the image when its end is reached. What is the best, most efficient way to do this to avoid any jumps and get optimal performance?

    Read the article

  • Stored procedures vs. parameter binding

    - by Gagan
    I am using SQL Server and ODBC in Visual C++ for writing to the database. Currently I am using parameter binding in SQL queries (as I fill the database with only 5-6 queries, and the same is true for retrieving data). I don't know much about stored procedures, and I am wondering how much performance increase, if any, stored procedures have over parameter binding, given that with parameter binding we prepare the query only once and just execute it later in the program for different sets of variable values.

    Read the article

  • What IPC method should I use between Firefox extension and C# code running on the same machine?

    - by Rory
    I have a question about how to structure communication between a (new) Firefox extension and existing C# code. The Firefox extension will use configuration data and will produce other data, so it needs to get the config data from somewhere and save its output somewhere. The data is produced/consumed by existing C# code, so I need to decide how the extension should interact with the C# code.

    Some pertinent factors:

    - It's only running on Windows, in a relatively controlled corporate environment.
    - I have a Windows service running on the machine, built in C#.
    - Storing the data in a local datastore (like SQLite) would be useful for other reasons.
    - The volume of data is low, e.g. 10kb of uncompressed XML every few minutes, and isn't very 'chatty'.
    - The data exchange can be asynchronous for the most part, if not completely.
    - As with all projects, I have limited resources, so I want an option that's relatively easy. It doesn't have to be ultra-high performance, but shouldn't add significant overhead.
    - I'm planning on building the extension in JavaScript (although I could be convinced otherwise if really necessary).

    Some options I'm considering:

    1. Use an XPCOM to .NET/COM bridge.
    2. Use a SQLite db: the extension would read from and save to it. The C# code would run in the service, populating the db and then processing data created by the extension.
    3. Use TCP sockets to communicate between the extension and the service, and let the service manage a local data store.

    My problem with (1) is that I think this will be tricky and not so easy - but I could be completely wrong. The main problem I see with (2) is the locking of SQLite: only a single process can write data at a time, so there'd be some blocking. However, it would generally be nice to have a local datastore, so this is an attractive option if the performance impact isn't too great. I don't know whether (3) would be particularly easy or hard... or what approach to take on the protocol: something custom, or HTTP.

    Any comments on these ideas or other suggestions?

    UPDATE: I was planning on building the extension in JavaScript rather than C++.

    Read the article

  • Windows Azure: Parallelization of the code

    - by veda
    I have some matrix multiplication operations. I want to parallelize the execution of those operations across multiple processors. This can be done on a high-performance computing cluster using MPI (Message Passing Interface). Likewise, can I do some parallelization in the cloud using multiple worker roles? Is there any means of doing that?

    Read the article
