Search Results

Search found 4580 results on 184 pages for 'faster'.


  • array_merge vs array_values for resetting array index

    - by Jamex
    I have an array that I want to re-index. I have found that both the array_values() and array_merge() functions can do the job (and I don't need two arrays for array_merge() to work). Which is faster for a very large array? I would benchmark this myself, but I don't know how, and I don't have the large array yet. Before re-indexing: Array ( [0] => AB [4] => EA [6] => FA [9] => DA [10] => AF ). After re-indexing: Array ( [0] => AB [1] => EA [2] => FA [3] => DA [4] => AF ).
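
    A quick answer to the "I don't know how to benchmark" part: a minimal PHP micro-benchmark sketch along these lines would settle it (the array size and key pattern below are arbitrary assumptions, not from the question).

        <?php
        // Build a large array with non-contiguous keys (size chosen arbitrarily).
        $big = array();
        for ($i = 0; $i < 1000000; $i++) {
            $big[$i * 3] = 'val' . $i;
        }

        // Time array_values().
        $start = microtime(true);
        $a = array_values($big);
        $tValues = microtime(true) - $start;

        // Time array_merge() called on a single array, which also renumbers integer keys.
        $start = microtime(true);
        $b = array_merge($big);
        $tMerge = microtime(true) - $start;

        printf("array_values: %.4fs\narray_merge:  %.4fs\n", $tValues, $tMerge);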

    Read the article

  • Single Large v/s Multiple Small MySQL tables for storing Options

    - by Prasad
    Hi there, I'm aware of several questions on this forum relating to this, but I'm not talking about splitting tables for the same entity (like user, for example). Suppose I have a huge options table that stores list options like Gender, Marital Status, and many more domain-specific groups with the same structure. I plan to capture these in an OPTIONS table. Another simple option is to set the field as an ENUM, but there are disadvantages to that as well: http://www.brandonsavage.net/why-you-should-replace-enum-with-something-else/ OPTIONS table columns: option_id (referred to instead of the name), name, value, group. Query: select .. from options where group = '15'. Since this table is expected to be multi-tenant, the number of rows could grow drastically. I believe splitting the tables instead of filtering by group would be easier to write and faster to execute, or perhaps I should partition by group or tenant? Please suggest. Thanks
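
    For concreteness, a minimal sketch of the single-table approach (the tenant_id column and all names here are assumptions): with a composite index on (tenant_id, option_group), the group lookup stays a cheap range scan even as the row count grows, which is usually competitive with splitting into many small tables.

        CREATE TABLE options (
            option_id     INT UNSIGNED NOT NULL AUTO_INCREMENT,
            tenant_id     INT UNSIGNED NOT NULL,
            option_group  VARCHAR(64)  NOT NULL,   -- e.g. 'gender', 'marital_status'
            name          VARCHAR(128) NOT NULL,
            value         VARCHAR(128) NOT NULL,
            PRIMARY KEY (option_id),
            KEY ix_tenant_group (tenant_id, option_group)
        ) ENGINE=InnoDB;

        -- Typical lookup:
        SELECT option_id, name, value
        FROM options
        WHERE tenant_id = 42 AND option_group = 'marital_status';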

    Read the article

  • Changing <img src="XXX" />, js event when new image has finished loading?

    - by carillonator
    I have a photo gallery web page where a single <img src="XXX" /> element's src is changed (on a click) with javascript to show the next image—a poor man's ajax I guess. Works great on faster connections when the new image appears almost immediately. Even if it takes a few seconds to load, every browser I've tested it on keeps the old image in place until the new one is completely loaded. It's a little confusing waiting those few seconds on a slow connection, though, and I'm wondering if there's some javascript event that fires when the new image is done loading, allowing me to put a little working... animated gif or something up in the meantime. I know I could use AJAX for real (I'm using jQuery already), but this is such a nice and simple solution. Besides this lag, is there any other reason I should stay away from this approach to changing images? thanks.
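
    Since jQuery is already on the page, a minimal sketch of the usual pattern (the element IDs and spinner markup are assumptions): bind to the image's load event before swapping src, and show a placeholder in the meantime.

        // Swap the gallery image and show a spinner until the new file has arrived.
        function swapImage(newSrc) {
            var $img = $('#gallery-img');
            $('#spinner').show();
            $img.one('load', function () { $('#spinner').hide(); });
            $img.one('error', function () { $('#spinner').hide(); });  // don't leave it spinning on a broken URL
            $img.attr('src', newSrc);   // set src only after the handlers are bound
        }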

    Read the article

  • SQL Server indexed view matching of views with joins not working

    - by usr
    Does anyone have experience of when SQL Server 2008 R2 is able to automatically match indexed views (also known as materialized views) that contain joins to a query? For example, the view

        select dbo.Orders.Date, dbo.OrderDetails.ProductID
        from dbo.OrderDetails
        join dbo.Orders on dbo.OrderDetails.OrderID = dbo.Orders.ID

    cannot be automatically matched to the same exact query. When I select directly from this view with (NOEXPAND) I actually get a much faster query plan that does a scan on the clustered index of the indexed view. Can I get SQL Server to do this matching automatically? I have quite a few queries and views... I am on Enterprise Edition of SQL Server 2008 R2.
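
    For reference, a hedged sketch of the pattern being described (view and index names are assumptions). Automatic matching is an Enterprise Edition feature and the optimizer still has to prove an exact match, which it often declines to do for join views, so the WITH (NOEXPAND) hint is the reliable way to force the materialized data to be used.

        CREATE VIEW dbo.vOrderProducts
        WITH SCHEMABINDING
        AS
        SELECT od.ID AS OrderDetailID, o.Date, od.ProductID
        FROM dbo.OrderDetails AS od
        JOIN dbo.Orders AS o ON od.OrderID = o.ID;
        GO
        -- The unique clustered index materializes the view; OrderDetailID is assumed unique.
        CREATE UNIQUE CLUSTERED INDEX IX_vOrderProducts ON dbo.vOrderProducts (OrderDetailID);
        GO
        -- Force use of the indexed view instead of hoping the optimizer matches it:
        SELECT Date, ProductID
        FROM dbo.vOrderProducts WITH (NOEXPAND);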

    Read the article

  • How to optimize this Linq query

    - by Luke101
    I am trying to optimize this query. It is not slow for now, but I am expecting a spike to hit this query in the near future. Is there anything else I can do to make this query faster?

        var posts = from p in context.post
                    where p.post_isdeleted == false && p.post_parentid == null
                    select new
                    {
                        p.post_date, p.post_id, p.post_titleslug, p.post_title,
                        p.post_descriptionrender, p.userinfo.user_username,
                        p.userinfo.user_userid, p.userinfo.user_GravatarHash,
                        p.userinfo.user_points, p.category.catid, p.category.name,
                        p.post_answercount, p.post_hasbestanswer, p.post_hits,
                        p.post_isanonymous, p.post_votecount,
                        FavoriteCount = context.favorites.Where(x => x.post.post_id == p.post_id).Count(),
                        tags = from tg in context.posttag
                               where tg.posttag_postid == p.post_id
                               select new { tg.tag.tag_id, tg.tag.tag_title }
                    };
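
    One hedged tweak worth profiling (not guaranteed to help; it depends on the SQL your provider generates): turn the correlated FavoriteCount subquery into a group join so the favourites are aggregated once per post, and project only the columns the page actually displays.

        var posts = from p in context.post
                    where p.post_isdeleted == false && p.post_parentid == null
                    join f in context.favorites on p.post_id equals f.post.post_id into favs
                    select new
                    {
                        p.post_id,
                        p.post_title,
                        FavoriteCount = favs.Count(),
                        tags = from tg in context.posttag
                               where tg.posttag_postid == p.post_id
                               select new { tg.tag.tag_id, tg.tag.tag_title }
                    };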

    Read the article

  • How do I decrease first load time in ASP.net MVC 2?

    - by Bill
    I have an ASP.NET MVC 2 application that runs well locally. However, when I move the files to my production server, I get a first-time lag of about 30 seconds; I assume this is the first compile. After that the application works fine. Then, after about 20-30 minutes of non-use, the application takes another 30 seconds or so to load. I did try to precompile the code, but there is still a lag during the first load. Are there any tricks to getting the application to load faster the first time? I am using ASP.NET 3.5, IIS 6, Visual Studio 2010, MVC 2. Thanks
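
    The 20-30 minute pattern usually points at the IIS 6 application pool idle timeout (20 minutes by default) unloading the worker process, rather than at compilation. Besides raising or disabling that timeout in IIS Manager, a common workaround is a scheduled task that keeps the site warm; a minimal sketch (the URL is an assumption):

        using System;
        using System.Net;

        class WarmUp
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Hitting any page keeps the app domain loaded and JIT-compiled.
                    string html = client.DownloadString("http://www.example.com/");
                    Console.WriteLine("Warm-up request returned {0} characters.", html.Length);
                }
            }
        }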

    Read the article

  • Find and replace text in a string using C#

    - by Joey Morani
    Anyone know how I would find and replace text in a string? Basically I have two strings:

        string firstS = "/9j/4AAQSkZJRgABAQEAYABgAAD/2wBDABQODxIPDRQSERIXFhQYHzMhHxwcHz8tLyUzSkFOTUlBSEZSXHZkUldvWEZIZoxob3p9hIWET2ORm4+AmnaBhH//2wBDARYXFx8bHzwhITx/VEhUf39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f3//";
        string secondS = "abcdefg2wBDABQODxIPDRQSERIXFh/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/abcdefg";

    I want to search firstS to see if it contains any sequence of characters that's also in secondS and then replace it. Each match needs to be replaced with the number of replaced characters in square brackets: [NUMBER-OF-CHARACTERS-REPLACED]. For example, because firstS and secondS both contain "2wBDABQODxIPDRQSERIXFh" and "/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/f39/", those sequences would need to be replaced. So then firstS becomes:

        string firstS = "/9j/4AAQSkZJRgABAQEAYABgAAD/[22]QYHzMhHxwcHz8tLyUzSkFOTUlBSEZSXHZkUldvWEZIZoxob3p9hIWET2ORm4+AmnaBhH//2wBDARYXFx8bHzwhITx/VEhUf39[61]f3//";

    Hope that makes sense. I think I could do this with Regex, but I don't like the inefficiency of it. Does anyone know of another, faster way?
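
    One non-regex approach, as a hedged sketch only (the 10-character minimum is an assumption, and this greedy scan is simple rather than fast): walk firstS, grow the longest run that still occurs somewhere in secondS, and collapse runs above the threshold to "[length]".

        using System;
        using System.Text;

        static class CommonRunReplacer
        {
            public static string ReplaceCommonRuns(string firstS, string secondS, int minLength)
            {
                var result = new StringBuilder();
                int i = 0;
                while (i < firstS.Length)
                {
                    // Extend the run while the prefix starting at i still occurs in secondS.
                    int len = 0;
                    while (i + len < firstS.Length &&
                           secondS.IndexOf(firstS.Substring(i, len + 1), StringComparison.Ordinal) >= 0)
                    {
                        len++;
                    }

                    if (len >= minLength)
                    {
                        result.Append('[').Append(len).Append(']');   // e.g. [22]
                        i += len;
                    }
                    else
                    {
                        result.Append(firstS[i]);
                        i++;
                    }
                }
                return result.ToString();
            }
        }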

    Read the article

  • Endianness manipulation - is there a C library for this?

    - by Malvineous
    Hi all, With the sort of programs I write (working with raw file data) I often need functions to convert between big and little endian. Usually I write these myself (which is covered by many other posts here) but I'm not that keen on doing this for a number of reasons - the main one being lack of testing. I don't really want to spend ages testing my code in a big endian emulator, and often just omit the code for big endian machines altogether. I also would rather make use of faster functions provided by various compilers, while still keeping my programs cross-platform. The only things I can find are socket calls like htons() but they require different #include files on each platform, and some GPL code like this, however that particular file, while comprehensive, seems to miss out on some of the high performance functions provided by some compilers. So, does anyone know of a library (ideally just a .h file) that is well tested and provides a standard set of functions for dealing with endianness across many compilers and platforms?
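
    For illustration, a minimal header-only sketch of the usual pattern, not a specific library (glibc/BSD's endian.h functions and Boost.Endian are real alternatives worth checking): use the compiler's byte-swap intrinsic where one is known to exist and fall back to portable shifts.

        #include <stdint.h>
        #if defined(_MSC_VER)
        #include <stdlib.h>   /* _byteswap_ulong */
        #endif

        static inline uint32_t swap32(uint32_t v)
        {
        #if defined(__GNUC__) || defined(__clang__)
            return __builtin_bswap32(v);              /* usually a single BSWAP instruction */
        #elif defined(_MSC_VER)
            return _byteswap_ulong(v);
        #else
            return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
                   ((v << 8) & 0x00FF0000u) | (v << 24);
        #endif
        }

        /* Convert a little-endian value read from a file to host order. */
        static inline uint32_t le32_to_host(uint32_t v)
        {
        #if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
            return swap32(v);
        #else
            return v;                                 /* little-endian (or unknown): no swap */
        #endif
        }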

    Read the article

  • Python IPC, popen too slow

    - by UnableToLoad
    I need to run a subprocess (./myProgram) from a Python script and get its output. Currently I do this:

        import subprocess

        proc = subprocess.Popen('./generate_out',
                                shell=False,
                                stdout=subprocess.PIPE)
        while proc.poll() is None:
            out = proc.stdout.readline()
            data = doStuff(out)
            print(data)

    But it is slow: sometimes a lot of time passes between the output produced by ./generate_out and the print(data). Knowing that my doStuff() function is very fast, I think there is some buffer slowing down my pipe... Notes: ./generate_out generates a potentially unlimited number of lines, each of finite length. It seems that when too few characters have been put in the pipe between the two processes nothing happens; then, when enough is produced, I get a huge print (not the expected behaviour!). Sometimes I wait many seconds (10-20 and more) between the generate_out print and the Python print. What can I do? Maybe communicate() is faster? Anything else? Thank you a lot!
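
    The delay described is typically the child process block-buffering its stdout once it is a pipe instead of a terminal, so Python only sees data when a multi-kB buffer fills. A hedged sketch of the usual fixes (stdbuf exists only with GNU coreutils, so treat that part as an assumption about the platform):

        import subprocess

        def doStuff(line):
            return line.strip()          # placeholder for the poster's fast processing

        proc = subprocess.Popen(
            ['stdbuf', '-oL', './generate_out'],   # ask the child to line-buffer its stdout
            stdout=subprocess.PIPE,
            bufsize=1,                             # line-buffered reading on the Python side
            universal_newlines=True,
        )

        for line in iter(proc.stdout.readline, ''):
            print(doStuff(line))

        proc.wait()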

    Read the article

  • Inspecting a Word mail merge data source programmatically

    - by Guy Marom
    I want to iterate over all rows of a MS-Word mail merge data source and extract the relevant data into an XML. I'm currently using this code:

        Imports Microsoft.Office.Interop

        Do
            objXW.WriteStartElement("Recipient")
            Dim objDataFields As Word.MailMergeDataFields = DataSource.DataFields
            For Each FieldIndex As Integer In mdictMergeFields.Keys
                strValue = objDataFields.Item(FieldIndex).Value
                If Not String.IsNullOrEmpty(strValue) Then
                    strName = mdictMergeFields(FieldIndex)
                    objXW.WriteElementString(strName, strValue)
                End If
            Next
            objXW.WriteEndElement()
            If DataSource.ActiveRecord = LastRecord Then
                Exit Do
            Else
                DataSource.ActiveRecord = Word.WdMailMergeActiveRecord.wdNextDataSourceRecord
            End If
        Loop

    And it turns out to be a little sluggish (about 1 second for each row). Is there any way to do it faster? My fantasy is finding a function like MailMergeDataSource.ToDatatable and then inspecting the datatable.
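
    The per-row cost is mostly the COM round trip for every field access. A hedged sketch of one common workaround, assuming the merge data source is a file that OLE DB can open (Excel here; the provider string and sheet name are assumptions): read it directly into a DataTable in one call and build the XML from that.

        Imports System.Data
        Imports System.Data.OleDb

        Dim path As String = DataSource.Name   ' path of the attached merge data source
        Dim connStr As String = "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & path & _
                                ";Extended Properties=""Excel 12.0;HDR=YES"""

        Dim table As New DataTable()
        Using conn As New OleDbConnection(connStr)
            Dim adapter As New OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn)
            adapter.Fill(table)                 ' one bulk read instead of one COM call per field
        End Using

        ' table.Rows now holds every recipient; write the <Recipient> elements from here.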

    Read the article

  • Redirecting user from an old web to another in Sharepoint

    - by BeraCim
    Hi all: I want to add the ability to redirect users when they visit certain old sites. The URLs of the old sites are unknown, but the server name is the same, e.g. old site URL: http://sharepoint/mySite/default.aspx, new site URL: http://sharepoint/myNewSite/... There are a lot of other pages within mySite, most of which must be redirected to the new site, but there are exceptions (e.g. users can still view the documents within the site). I thought I would have to do this programmatically by somehow capturing the HTTP request and reading the URL myself. Being no SharePoint guru, I had a quick google and found that writing a web part is perhaps the best alternative for my situation. But I'm just wondering, given my situation and needs, is writing a web part, or something programmatic, really the ideal solution? Or is there a faster and quicker way to achieve what I want using SharePoint? Thanks.
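
    For comparison with the web part idea, a hedged sketch of an HttpModule that intercepts requests at the web-application level (the path rules below are assumptions, and deploying a custom module into a SharePoint web application needs its own testing):

        using System;
        using System.Web;

        public class LegacySiteRedirectModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += OnBeginRequest;
            }

            private static void OnBeginRequest(object sender, EventArgs e)
            {
                var ctx = ((HttpApplication)sender).Context;
                string path = ctx.Request.Url.AbsolutePath;

                // Redirect old site pages, but leave the document library alone (assumed path).
                if (path.StartsWith("/mySite/", StringComparison.OrdinalIgnoreCase) &&
                    !path.StartsWith("/mySite/Shared Documents/", StringComparison.OrdinalIgnoreCase))
                {
                    ctx.Response.Redirect("http://sharepoint/myNewSite/", true);
                }
            }

            public void Dispose() { }
        }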

    Read the article

  • Parameter passing Vs Table Valued Parameters Vs XML to SQL 2008 from .Net Application

    - by Harryboy
    We are working on an ASP.NET project, and there are three ways to update data in the database when multiple rows need to be inserted or updated. Let's assume we need to update employee education details (which could be 1, 3, 5 or 10 records). Methods to update the data:

        1. Pass each value as a parameter (the traditional approach); if there are 10 records, 10 round trips are required.
        2. Pass the data as XML and write logic inside your stored procedure to read that XML and update the table (only a single round trip required).
        3. Use table-valued parameters (only a single round trip required).

    Note: the data is available as a List, so I need to convert it to XML or some other format before passing it. There are a number of places in the entire application where we need to update data in bulk (or multiple records). I just need your suggestions on: which method will be faster (please mention any other overheads); manageability or testability concerns with any approach; any other bottleneck or issue with any of the approaches (serialization/deserialization concerns, or limits on the size of the data passed); and any other method you would suggest for the same operations. Thanks
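
    To make option 3 concrete, a hedged sketch of the table-valued-parameter route (the SQL type dbo.EducationDetailType, the procedure name, and the columns are all assumptions): the whole list travels in one round trip regardless of the row count.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        class EducationDetail
        {
            public int EmployeeId;
            public string Degree;
        }

        static class EducationRepository
        {
            // Assumes: CREATE TYPE dbo.EducationDetailType AS TABLE (EmployeeId INT, Degree NVARCHAR(100));
            //          CREATE PROCEDURE dbo.UpdateEducationDetails @details dbo.EducationDetailType READONLY ...
            public static void Save(string connStr, IEnumerable<EducationDetail> details)
            {
                var table = new DataTable();
                table.Columns.Add("EmployeeId", typeof(int));
                table.Columns.Add("Degree", typeof(string));
                foreach (var d in details)
                    table.Rows.Add(d.EmployeeId, d.Degree);

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand("dbo.UpdateEducationDetails", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    var p = cmd.Parameters.AddWithValue("@details", table);
                    p.SqlDbType = SqlDbType.Structured;
                    p.TypeName = "dbo.EducationDetailType";
                    conn.Open();
                    cmd.ExecuteNonQuery();   // one round trip for 1 or 100 rows
                }
            }
        }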

    Read the article

  • Can I use Duff's Device on an array in C?

    - by Ben Fossen
    I have a loop here and I want to make it run faster. I am passing in a large array. I recently heard of Duff's Device; can it be applied to this for loop? Any ideas?

        for (i = 0; i < dim; i++) {
            for (j = 0; j < dim; j++) {
                dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)];
            }
        }
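
    Duff's device can be applied to the inner loop; a hedged sketch is below (RIDX is assumed to be the usual row-major macro and the element type is assumed to be int; both come from outside the question's snippet). It is worth measuring, though: modern compilers often unroll simple loops themselves, and for this access pattern cache behaviour usually matters more than loop overhead.

        #define RIDX(row, col, n) ((row) * (n) + (col))   /* assumption: typical definition */

        static void copy_row_unrolled(int i, int dim, int *dst, const int *src)
        {
            int j = 0;
            int n = (dim + 7) / 8;                 /* number of 8-iteration chunks, rounded up */
            if (dim <= 0)
                return;
            switch (dim % 8) {                     /* jump into the middle of the unrolled body */
            case 0: do { dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; j++;
            case 7:      dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; j++;
            case 6:      dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; j++;
            case 5:      dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; j++;
            case 4:      dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; j++;
            case 3:      dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; j++;
            case 2:      dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; j++;
            case 1:      dst[RIDX(dim-1-j, i, dim)] = src[RIDX(i, j, dim)]; j++;
                    } while (--n > 0);
            }
        }

        /* The outer loop stays as before: for (i = 0; i < dim; i++) copy_row_unrolled(i, dim, dst, src); */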

    Read the article

  • Replacing Text Nodes With DOM Nodes

    - by Greg
    Hey, say I have a text node obtained via XPath. How would I replace the text node with a new DOM node? For example, this little patch of code will go through text nodes, and if the text matches something, it will replace it with a corresponding image via an img element. I wanted something faster than a global page regex or even an element innerHTML regex. Any help would be appreciated. EDIT: Never mind. I figured it out.
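
    For anyone landing here with the same question, a hedged sketch of one way to do it (not necessarily the poster's eventual solution; the match string and image URL are placeholders): split the text node around the match and insert the img between the two halves.

        function replaceTextWithImage(textNode, match, imgSrc) {
            var idx = textNode.data.indexOf(match);
            if (idx === -1) return;

            var tail = textNode.splitText(idx);      // textNode keeps the text before the match
            tail.deleteData(0, match.length);        // strip the matched text from the tail

            var img = document.createElement('img');
            img.src = imgSrc;
            textNode.parentNode.insertBefore(img, tail);   // the image sits where the text was
        }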

    Read the article

  • Microsoft SQL Server 2008 - 99% fragmentation on non-clustered, non-unique index

    - by user550441
    I have a table with several indexes (defined below). One of the indexes (IX_external_guid_3) shows 99% fragmentation regardless of rebuilding/reorganizing the index. Anyone have any idea as to what might cause this, or the best way to fix it? We are using Entity Framework 4.0 to query this; the EF queries on the other indexed fields run about 10x faster than those on the external_guid_3 field, however an ADO.NET query is roughly the same speed on both (though 2x slower than the EF query on the indexed fields).

        Table
            id              (PK, int, not null)
            guid            (uniqueidentifier, null, rowguid)
            external_guid_1 (uniqueidentifier, not null)
            external_guid_2 (uniqueidentifier, null)
            state           (varchar(32), null)
            value           (varchar(max), null)
            infoset         (XML(.), null)           -- usually 2-4K
            created_time    (datetime, null)
            updated_time    (datetime, null)
            external_guid_3 (uniqueidentifier, not null)
            FK_id           (FK, int, null)
            locking_guid    (uniqueidentifier, null)
            locked_time     (datetime, null)
            external_guid_4 (uniqueidentifier, null)
            corrected_time  (datetime, null)
            is_add          (bit, not null)
            score           (int, null)
            row_version     (timestamp, null)

        Indexes
            PK_table            (Clustered)
            IX_created_time     (Non-Unique, Non-Clustered)
            IX_external_guid_1  (Non-Unique, Non-Clustered)
            IX_guid             (Non-Unique, Non-Clustered)
            IX_external_guid_3  (Non-Unique, Non-Clustered)
            IX_state            (Non-Unique, Non-Clustered)
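
    Two usual suspects, offered as hedged suggestions rather than a diagnosis: random (non-sequential) GUID values fragment an index quickly as new rows land on random pages, and very small indexes can report high fragmentation no matter how often they are rebuilt. A sketch of the checks (the table name and fill factor are placeholders):

        -- How big is the index really? Fragmentation on a handful of pages is meaningless.
        SELECT index_id, avg_fragmentation_in_percent, page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED');

        -- Rebuild with some free space per page so random GUID inserts cause fewer page splits.
        ALTER INDEX IX_external_guid_3 ON dbo.YourTable
            REBUILD WITH (FILLFACTOR = 80);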

    Read the article

  • ASP.NET MVC: run code after view has rendered (close db transaction)

    - by thermal7
    Hi, I am using ASP.NET MVC 2 with NHibernate, but am facing an issue. All calls to the database via NHibernate should be inside a transaction; however, code inside the view kicks off database calls in some instances. Thus there is a need to be able to commit the transaction after the view has rendered. For example, when displaying a list of users and their user roles, you might show the user role using this code: <%: Model.UserRole.Name %> This will cause a hit on the database as the UserRole is loaded via an NHibernate proxy. You can fetch the UserRole eagerly, which circumvents the issue in this case, but there are cases where it is much faster to use lazy loading. Anyway, is there a way to run code after a view has rendered?
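
    A hedged sketch of the pattern usually used for this (how the current ISession is obtained is an assumption about the app's wiring): an action filter whose OnResultExecuted runs after the view result has been rendered, so lazy loads triggered during rendering still happen inside the transaction.

        using System.Web.Mvc;
        using NHibernate;

        public class TransactionPerRequestAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                SessionProvider.CurrentSession.BeginTransaction();      // assumed session accessor
            }

            public override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                ITransaction tx = SessionProvider.CurrentSession.Transaction;
                if (tx != null && tx.IsActive)
                {
                    if (filterContext.Exception == null)
                        tx.Commit();        // the view has rendered; safe to commit now
                    else
                        tx.Rollback();
                }
            }
        }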

    Read the article

  • Eager fetching of children with JDO (Datanucleus)

    - by Jan
    Hi, can JDO fetch all children of a database model at once? Like:

        class Parent {
            @Persistent(mappedBy="parent")
            private Set<Children> children;
        }

        class Children {
            @Persistent
            private Parent parent;

            @Persistent
            private String name;
        }

    In my case, I have a large number of Parents which I fetch at once. Accessing their children then takes a lot of time because they are fetched lazily. Does JDO (DataNucleus) support fetching them at once, together with the Parents? I also tried to fetch all Children independently with another query and put them into the Level 2 cache afterwards, but they are still fetched (maybe JDO doesn't know about their relationship, because the foreign key (parent-id) hasn't been fetched at first?). Any ideas how to read the data structure faster? Cheers, Jan
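
    A hedged sketch of the usual JDO answer (the fetch-group name is an assumption, and Children is the class from the question): declare the children field in a named fetch group and add that group to the PersistenceManager's fetch plan, so the children are retrieved together with the Parents rather than one lazy load at a time.

        import java.util.List;
        import java.util.Set;
        import javax.jdo.PersistenceManager;
        import javax.jdo.Query;
        import javax.jdo.annotations.FetchGroup;
        import javax.jdo.annotations.PersistenceCapable;
        import javax.jdo.annotations.Persistent;

        @PersistenceCapable
        @FetchGroup(name = "withChildren", members = { @Persistent(name = "children") })
        class Parent {
            @Persistent(mappedBy = "parent")
            private Set<Children> children;
        }

        class ParentLoader {
            static List<Parent> loadAll(PersistenceManager pm) {
                pm.getFetchPlan().addGroup("withChildren");         // activate the eager group
                Query q = pm.newQuery(Parent.class);
                @SuppressWarnings("unchecked")
                List<Parent> parents = (List<Parent>) q.execute();  // children fetched along with the parents
                return parents;
            }
        }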

    Read the article

  • How can I easily run different configurations in Eclipse?

    - by Roman
    I have a Java application which I would like to run with different values of input parameters (specified on the command line). In "Run - Run Configurations" I have created different configurations corresponding to different values of the input arguments. I can run these configurations the same way (through "Run - Run Configurations"), but in that case I have to perform too many actions (clicks) to run a particular configuration. Is there an easier (faster) way to do that? For example, I expected to be able to do it through "Run - Run As", but in the drop-down menu of "Run As" I see "(Not Applicable)".

    Read the article

  • Building Cocoa UIs for OS X with C# and Mono

    - by Antony Perkov
    Has anyone spent any time comparing the various Objective C bridges and associated Cocoa wrappers for Mono? I want to port an existing C# application to run on OS X. Ideally I'd run the application on Mono, and build a native Cocoa UI for it. I'm wondering which bridge would be the best choice. In case it's useful to anyone, here are some links to bridges I've found so far: CocoSharp - distributed with Mono on OS X - www.cocoa-sharp.com Monobjc - better documentation than the others (in my opinion) - www.mono-project.com/CocoaSharp and www.monobjc.net NObjective - (apparently) faster than the others - code.google.com/p/nobjective MObjc / MCocoa - code.google.com/p/mobjc and code.google.com/p/mcocoa ObjC# - www.mono-project.com/ObjCSharp

    Read the article

  • Creating a new project from a project skeleton using git

    - by asciitaxi
    In order to get a new Django project up and running faster, I'd like to maintain a separate "project skeleton" on which I base all my new projects. It would be great if, as I improved the skeleton, I could bring those improvements into my active projects. How can I accomplish this with git? So maybe on my remote git repository machine I would have one repo for each project and one for the skeleton: proj-A-repo, proj-B-repo, skeleton-repo. If I want to create a new proj-C locally based on the skeleton, then push my local changes up to the remote server in a new repo called proj-C-repo, how might I do this? I've read through quite a bit of git documentation, but I'm confused about how to go about this. Do I need to clone the skeleton, or create an empty repo and then track a remote branch, or something else?
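
    One workable flow, sketched with assumed repository URLs: clone the skeleton, keep it around as a second remote, and point origin at the new (empty) proj-C-repo.

        git clone git@server:skeleton-repo.git proj-C
        cd proj-C
        git remote rename origin skeleton          # keep the skeleton reachable for later merges
        git remote add origin git@server:proj-C-repo.git
        git push -u origin master

        # Later, pull skeleton improvements into the project:
        git fetch skeleton
        git merge skeleton/master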

    Read the article

  • HTML form requirements specification

    - by Peder
    I am building a framework that will validate forms both client-side (javascript) and server-side, based on a form requirements specification written in JSON. The purpose is to get rid of logically equivalent code on the server and client, to make the code more maintainable, faster to write, and less buggy. The specification format may look something like:

        {
          '<field_name>' : ['<validation_function>', 'req', ['<requirement>', <param>], ...],
          ...
        }

    (The requirement list is ordered so that the user gets the most basic error messages first; the 'req' requirement must come first if it exists and means that the field is required.) For example:

        {
          'name' : ['string', 'req', ['min',6], ['max',150], ['match', /^[\sa-z0-9ÅÄÖåäö&]$/i], ['not_match', /^tmp_/]],
          'email' : ['email', 'req'],
          'email_confirm' : ['same_as', 'email'],
          'password' : ['string', 'req', ['min', 6], ['max', 64], ['match', /^[a-z0-9\!@#\$%^&*_+.]$/i] ],
        }

    Does anyone know of a similar technology? I think the Rails validation framework solves the problem on the wrong level, because I have found that forms often operate on more than one model.
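
    To illustrate how the client side of such a spec could be interpreted, a hedged sketch (the rule names and spec shape follow the question; everything else, including the error format, is an assumption):

        var rules = {
            min:       function (value, n)  { return value.length >= n; },
            max:       function (value, n)  { return value.length <= n; },
            match:     function (value, re) { return re.test(value); },
            not_match: function (value, re) { return !re.test(value); }
        };

        // spec looks like ['string', 'req', ['min', 6], ...]; returns the names of failed requirements.
        function validateField(value, spec) {
            var errors = [];
            for (var i = 1; i < spec.length; i++) {
                var req = spec[i];
                if (req === 'req') {
                    if (!value) { errors.push('required'); break; }
                } else if (rules[req[0]] && !rules[req[0]](value, req[1])) {
                    errors.push(req[0]);
                }
            }
            return errors;
        }

        // validateField('abc', ['string', 'req', ['min', 6]])  ->  ['min']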

    Read the article

  • Is it possible to use re2 from Python?

    - by flow
    I just discovered http://code.google.com/p/re2, a promising library that uses a long-neglected approach (Thompson NFA) to implement a regular expression engine that can be orders of magnitude faster than the available engines of awk, Perl, or Python. So I downloaded the code and did the usual sudo make install thing. However, that action seemingly did little more than add /usr/local/include/re2/re2.h to my system. There seemed to be some *.a file in addition, but what is it with this *.a extension? I would like to use re2 from Python (preferably Python 3.1) and was excited to see files like make_unicode_groups.py in the distro (maybe just used during the build process?). Those, however, were not deployed on my machine. How can I use re2 from Python?
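
    For context: the *.a file is the compiled static C++ library, so it is not importable from Python on its own; a binding layer is needed. A hedged sketch assuming one of the existing Python wrappers for RE2 is installed (for example the pyre2-style bindings, which expose a re-compatible API; the exact package and module names depend on which wrapper you pick, and Python 3 support varies):

        import re2   # assumption: the installed RE2 wrapper exposes this module name

        pattern = re2.compile(r'(foo|bar)+baz')
        if pattern.search('foofoobarbaz'):
            print('matched')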

    Read the article

  • How to run only the latest/a given test using Rspec?

    - by marcgg
    Let's say I have a big spec file with 20 tests because I'm testing a large model and I had no other way of doing it:

        describe Blah do
          it "should do X" do
            ...
          end
          it "should do Y" do
            ...
          end
          ...
          it "should do Z" do
            ...
          end
        end

    Running a single file is faster than running the whole test suite, but it's still pretty long. Is there a way to run the last one (i.e. the one at the end of the file, here "should do Z")? If this is not possible, is there a way to specify which test I want to run in my file?
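
    The usual options, sketched with an assumed path and line number (the exact flags depend on your RSpec version):

        spec spec/models/blah_spec.rb -e "should do Z"    # RSpec 1.x: run the example matching a description
        spec spec/models/blah_spec.rb -l 42               # RSpec 1.x: run the example defined at that line
        rspec spec/models/blah_spec.rb:42                 # RSpec 2.x: the file:line form does the same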

    Read the article

  • How to use List<T>.Find() on a simple collection that does not implement Find()?

    - by Bilal
    Hi, I want to use List<T>.Find() on a simple collection that does not implement Find(). The naive way I thought of is to just wrap it in a list and call .Find(), like this:

        ICollection<Cow> myCows = GetAllCowsFromFarm();   // whatever the collection impl. is...
        var steak = new List<Cow>(myCows).Find(moo => moo.Name == "La Vache qui Rit");

    Now, first of all I'd like to know, C#-wise, what is the cost of this wrapping? Is it still faster to 'for' over this collection the traditional way? Second, is there a better, straightforward way to elegantly use that .Find()? Cheers!
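
    A hedged note on the usual alternative: since .NET 3.5, LINQ's FirstOrDefault works on any IEnumerable<T> without copying anything, whereas new List<Cow>(myCows) walks the whole collection once (O(n)) before the search even starts.

        using System.Collections.Generic;
        using System.Linq;

        ICollection<Cow> myCows = GetAllCowsFromFarm();

        // Stops at the first match; no intermediate List<Cow> is built.
        Cow steak = myCows.FirstOrDefault(moo => moo.Name == "La Vache qui Rit");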

    Read the article

  • AES acceleration for Java

    - by chris_l
    I want to encrypt/decrypt lots of small (2-10 kB) pieces of data. The performance is OK for now: on a Core2Duo, I get about 90 MBytes/s of AES-256 (when using 2 threads). But I may need to improve that in the future - or at least reduce the impact on the CPU. Is it possible to use dedicated AES encryption hardware with Java (using JCE, or maybe a different API)? Would Java take advantage of special CPU features (SSE5?!) if I get a better CPU? Or are there faster JCE providers? (I tried SunJCE and BouncyCastle - no big difference.) Other possibilities?
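
    Whatever provider or hardware route is chosen, measuring with the real workload is the deciding factor. A hedged micro-benchmark sketch (the provider list, buffer size and iteration count are arbitrary, and the "BC" entry is skipped unless Bouncy Castle is registered):

        import java.security.Security;
        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.IvParameterSpec;

        public class AesBench {
            public static void main(String[] args) throws Exception {
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(256);                              // may require the unlimited-strength policy files
                SecretKey key = kg.generateKey();
                byte[] iv = new byte[16];
                byte[] chunk = new byte[8 * 1024];         // roughly the 2-10 kB workload described

                for (String provider : new String[] { "SunJCE", "BC" }) {
                    if (Security.getProvider(provider) == null) continue;   // skip providers that aren't installed
                    Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding", provider);
                    c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
                    long start = System.nanoTime();
                    int iterations = 10000;
                    for (int i = 0; i < iterations; i++) {
                        c.doFinal(chunk);                  // doFinal resets the cipher for the next call
                    }
                    double seconds = (System.nanoTime() - start) / 1e9;
                    double mb = (double) iterations * chunk.length / (1024 * 1024);
                    System.out.printf("%s: %.1f MB/s%n", provider, mb / seconds);
                }
            }
        }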

    Read the article
