Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • Why is Lua considered a game language?

    - by Hoffmann
    I have been learning Lua for the past month and I'm absolutely in love with the language, but everything I see built with Lua is a game. The syntax is very simple, there is no fuss, no special-meaning characters that make code look like a regex, it has all the good things of a scripting language, and it integrates painlessly with other languages like C, Java, etc. The only downside I have seen so far is the prototype-based object orientation (or lack of built-in OO), which some people do not like. I do not see how Ruby or Python are better, certainly not in performance ( http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=lua&lang2=python ). I was planning on writing a web app using Lua with the Kepler framework and JavaScript, but the lack of other projects that use Lua as a web language makes me a bit uneasy, since this is my first try at web development. Lua is considered a kids' language; most of you on Stack Overflow probably only know it because of the WoW addons. I can't really see why that is. http://lua-users.org/wiki/LuaVersusPython provides some insights on Lua versus Python, but it is clearly biased.

    Read the article

  • Automatically grow document view of NSScrollView using auto layout?

    - by Monolo
    Is there a simple way to get an NSScrollView to adapt when its document view changes size under auto layout (the Lion feature)? I have tried calling both setNeedsUpdateConstraints: and setNeedsLayout: on the document view, the clip view and the scroll view, without any results. fittingSize of the document view reports the correct size. An NSPopover in conjunction with an NSViewController handles this nicely, with the popover growing and shrinking as needed, and I was hoping for similarly simple and robust behaviour with the scroll view. I have checked the documentation for scroll views, but it does not seem to have been updated for auto layout. Edited to clarify: the problem I experience is that the document view, which holds subviews, is not resized when the subviews change their size, even if they call invalidateIntrinsicContentSize. The contents of the document view are hence clipped to the original size of the document view as they grow. The document view is created in a nib and set as the scroll view's document view in an awakeFromNib method. What I hoped for was that the document view's frame would automatically be adjusted when its fittingSize changes, and the scroll bars updated accordingly. NSPopover does something similar, provided that the subviews of the content controller's view have their constraints set right and the various content-hugging priorities are high enough (higher than the hidden popover window's height constraint priority, for one).

    Read the article

  • deepcopy and python - tips to avoid using it?

    - by blackkettle
    Hi, I have a very simple Python routine that involves cycling through a list of roughly 20,000 latitude/longitude coordinates and calculating the distance of each point to a reference point. def compute_nearest_points( lat, lon, nPoints=5 ): """Find the nearest N points, given the input coordinates.""" points = session.query(PointIndex).all() oldNearest = [] newNearest = [] for n in xrange(nPoints): oldNearest.append(PointDistance(None,None,None,99999.0,99999.0)) newNearest.append(obj2) #This is almost certainly an inappropriate use of deepcopy # but how SHOULD I be doing this?!?! for point in points: distance = compute_spherical_law_of_cosines( lat, lon, point.avg_lat, point.avg_lon ) k = 0 for p in oldNearest: if distance < p.distance: newNearest[k] = PointDistance( point.point, point.kana, point.english, point.avg_lat, point.avg_lon, distance=distance ) break else: newNearest[k] = deepcopy(oldNearest[k]) k += 1 for j in range(k,nPoints-1): newNearest[j+1] = deepcopy(oldNearest[j]) oldNearest = deepcopy(newNearest) #We're done, now print the result for point in oldNearest: print point.station, point.english, point.distance return I initially wrote this in C, using the exact same approach, and it works fine there, and is basically instantaneous for nPoints<=100. So I decided to port it to Python because I wanted to use SQLAlchemy to do some other stuff. I first ported it without the deepcopy statements that now pepper the method, and this caused the results to be 'odd', or partially incorrect, because some of the points were just getting copied as references (I guess? I think?) -- but it was still pretty nearly as fast as the C version. Now with the deepcopy calls added, the routine does its job correctly, but it has incurred an extreme performance penalty, and now takes several seconds to do the same job. This seems like a pretty common job, but I'm clearly not doing it the pythonic way. How should I be doing this so that I still get the correct results but don't have to include deepcopy everywhere?

    Read the article

  • File transfer eating a lot of CPU

    - by Dan C.
    I'm trying to transfer a file over an IHttpHandler; the code is pretty simple. However, when I start a single transfer it uses about 20% of the CPU. If I were to scale this to 20 simultaneous transfers, CPU usage would be very high. Is there a better way to do this that keeps the CPU usage lower? The client code just sends over chunks of the file 64KB at a time. public void ProcessRequest(HttpContext context) { if (context.Request.Params["secretKey"] == null) { } else { accessCode = context.Request.Params["secretKey"].ToString(); } if (accessCode == "test") { string fileName = context.Request.Params["fileName"].ToString(); byte[] buffer = Convert.FromBase64String(context.Request.Form["data"]); string fileGuid = context.Request.Params["smGuid"].ToString(); string user = context.Request.Params["user"].ToString(); SaveFile(fileName, buffer, user); } } public void SaveFile(string fileName, byte[] buffer, string user) { string DirPath = @"E:\Filestorage\" + user + @"\"; if (!Directory.Exists(DirPath)) { Directory.CreateDirectory(DirPath); } string FilePath = @"E:\Filestorage\" + user + @"\" + fileName; FileStream writer = new FileStream(FilePath, File.Exists(FilePath) ? FileMode.Append : FileMode.Create, FileAccess.Write, FileShare.ReadWrite); writer.Write(buffer, 0, buffer.Length); writer.Close(); }
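
    A likely contributor is the Base64 round trip: every 64KB chunk is inflated by roughly a third on the wire, decoded on the server, and the target file is reopened for each chunk. As a rough, untested sketch (it assumes the client is changed to post the raw bytes rather than a Base64 "data" form field, and that System.IO/System.Web are imported), the handler could stream the request body straight to disk instead:

        public void ProcessRequest(HttpContext context)
        {
            // Hypothetical variant: the client posts raw bytes, not a Base64-encoded form field.
            string user = context.Request.Params["user"];
            string fileName = Path.GetFileName(context.Request.Params["fileName"]); // strip any path component
            string dirPath = Path.Combine(@"E:\Filestorage", user);
            Directory.CreateDirectory(dirPath); // no-op if it already exists

            string filePath = Path.Combine(dirPath, fileName);
            byte[] buffer = new byte[64 * 1024];
            Stream input = context.Request.InputStream;
            using (FileStream output = new FileStream(filePath, FileMode.Append, FileAccess.Write, FileShare.Read))
            {
                int read;
                while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, read); // plain copy, no Base64 decoding per chunk
                }
            }
        }

    Whether this helps in your case is an assumption; profiling would confirm how much of the 20% is Base64 decoding versus reopening the file for every chunk.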

    Read the article

  • Very interesting problem in Compact Framework

    - by Alexander
    Hi, I have a performance problem while inserting data into SQL CE. I'm reading a string and making inserts into my tables. Into the LU_MAM table I insert 1000 records within 8 seconds. After the MAM tables I make some more inserts, but my largest table is CR_MUS. When I want to insert records into CR_MUS it takes too much time: CR_MUS has 2000 records and the insert takes 35 seconds. What can the reason be? I use the same logic in all my insert functions. Do you have any idea? I use VS 2008 SP1. Dim reader As StringReader reader = New StringReader(data) cn = New SqlCeConnection(General.ConnString) cn.Open() If myTransfer.ClearTables(cn, cmd) = True Then progress = 0 '------------------------------------------ cmd = New SqlServerCe.SqlCeCommand Dim rs As SqlCeResultSet cmd.Connection = cn cmd.CommandType = CommandType.TableDirect Dim rec As SqlCeUpdatableRecord ' name of table While reader.Peek > -1 If strerr_col = "" Then satir = reader.ReadLine() ayrac = Split(satir, "|") If ayrac(0).ToString() = "LC" Then prgsbar.Maximum = Convert.ToInt32(ayrac(1)) ElseIf ayrac(0).ToString = "PPAR" Then . If ayrac(2).ToString <> General.PMVer Then ShowWaitCursor(False) txtDurum.Text = "Wrong Version" Exit Sub End If If p_POCKET_PARAMETERS = True Then cmd.CommandText = "POCKET_PARAMETERS" txtDurum.Text = "POCKET_PARAMETERS" rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable) rec = rs.CreateRecord() p_POCKET_PARAMETERS = False End If strerr_col = myVERI_AL.POCKET_PARAMETERS_I(ayrac, cmd, rs, rec) prgsbar.Value += 1 ElseIf ayrac(0).ToString() = "MAM" Then If p_LU_MAM = True Then txtDurum.Text = "LU_MAM " cmd.CommandText = "LU_MAM" rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable) rec = rs.CreateRecord() p_LU_MAM = False End If strerr_col = myVERI_AL.LU_MAM_I(ayrac, cmd, rs, rec) prgsbar.Value += 1 ElseIf ayrac(0).ToString = "KMUS" Then If p_CR_MUS = True Then cmd.CommandText = "CR_MUS" txtDurum.Text = "CR_MUS" rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable) rec = rs.CreateRecord() p_TR_KAMPANYA_MALZEME = False End If strerr_col = myVERI_AL.CR_MUS_I(ayrac, cmd, rs, rec) prgsbar.Value += 1 end while Public Function CR_KAMPANYA_MUSTERI_I(ByVal f_Line() As String, ByRef myComm As SqlCeCommand, ByRef rs As SqlCeResultSet, ByRef rec As SqlCeUpdatableRecord) As String Try rec.SetValue(0, If(f_Line(1) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(1))) rec.SetValue(1, If(f_Line(2) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(2))) rec.SetValue(2, If(f_Line(3) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(3))) rec.SetValue(3, If(f_Line(5) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(5))) rec.SetValue(4, If(f_Line(6) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(6))) rs.Insert(rec) Catch ex As Exception strerr_col = ex.Message End Try Return strerr_col End Function

    Read the article

  • SQL 2008: Using separate tables for each datatype to return single row

    - by Thomas C
    Hi all, I thought I'd be flexible this time around and let the users decide what contact information they wish to store in their database. In theory it would look like a single row containing, for instance: name, adress, zipcode, Category X, Listitems A. Example FieldType table defining the datatypes available to a user:
        FieldTypeID, FieldTypeName, TableName
        1,"Integer","tblContactInt"
        2,"String50","tblContactStr50"
        ...
    A user then defines his fields in the FieldDefinition table:
        FieldDefinitionID, FieldTypeID, FieldDefinitionName
        11,2,"Name"
        12,2,"Adress"
        13,1,"Age"
    Finally we store the actual contact data in separate tables depending on its datatype. The master table only contains the ContactID. tblContact:
        ContactID
        21
        22
    tblContactStr50:
        ContactStr50ID,ContactID,FieldDefinitionID,ContactStr50Value
        31,21,11,"Person A"
        32,21,12,"Adress of person A"
        33,22,11,"Person B"
    tblContactInt:
        ContactIntID,ContactID,FieldDefinitionID,ContactIntValue
        41,22,13,27
    Question: is it possible to return the content of these tables in two rows like this?
        ContactID,Name,Adress,Age
        21,"Person A","Adress of person A",NULL
        22,"Person B",NULL,27
    I have looked into using COALESCE and temp tables, wondering if this is at all possible. Even if it is, maybe I'm only adding complexity whilst sacrificing performance for a benefit in data storage and user-definition options. What do you think? Best Regards /Thomas C

    Read the article

  • ANTLR Tree Grammar and StringTemplate Code Translation

    - by Behrooz Nobakht
    I am working on a code translation project with a sample ANTLR tree grammar as: start: ^(PROGRAM declaration+) -> program_decl_tmpl(); declaration: class_decl | interface_decl; class_decl: ^(CLASS ^(ID CLASS_IDENTIFIER)) -> class_decl_tmpl(cid={$CLASS_IDENTIFIER.text}); The group template file for it looks like: group My; program_decl_tmpl() ::= << *WHAT?* >> class_decl_tmpl(cid) ::= << public class <cid> {} >> Based on this, I have these questions: everything works fine, but what should I express in WHAT? to say that a program is simply a list of class declarations, so as to get the final generated output? Is this approach reasonably suitable for a not-so-high-level language? I have also studied ANTLR Code Translation with String Templates, but that approach seems to interleave a lot of code in the tree grammar. Is it possible to do as much as possible just in String Templates?

    Read the article

  • Simple implementation of an admin/staff panel?

    - by Michael Mao
    Hi all: A new project requires a simple panel (page) for the admin and staff members that: preferably will not use SSL or any digital certification stuff; a simple login form over plain HTTP will be just fine. It needs basic authentication which allows only the admin to log in as admin, and any staff member as part of the group "staff". Ideally, the "credentials" (username / hashed-password pairs) will be stored in MySQL. It should be simple to configure if there is a package, or simple to code if I roll my own. Somewhere (a PHP session?), somehow (include a script at the beginning of each page to check the user group before doing anything?), it should detect any invalid user attempting to access a protected page and redirect him/her to the login form, while still keeping security high, which is what I worry about the most. Frankly, I have little knowledge about Internet security, or how modern CMSes such as WordPress/Joomla implement this. The one thing I do have in mind is that I need to use a salt to hash the password (SHA1?) to make sure any hacker who grabs the username and password pair off the net cannot use it to log into the system. That is what the client wants to make sure of. But I am really not sure where to start; any ideas? Thanks a lot in advance.

    Read the article

  • Writing robust and "modern" Fortran code

    - by Blklight
    In some scientific environments you often cannot go without FORTRAN, as most of the developers only know that idiom and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high performance programming (C++ would do the task, but the syntax, zero-based arrays, and pointers are too much for most engineers ;-) ). I'm a C++ guy but I'm stuck with some F90 projects. So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it, while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM's own compilers). So I'm thinking of imposing: global variables forbidden, no gotos, no jump labels, "implicit none", etc.; "object-oriented programming" (modules with datatypes + related subroutines); modular/reusable functions, well documented, reusable libraries; assertions/preconditions/invariants (implemented using preprocessor statements); unit tests for all (most) subroutines and "objects"; an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks enabled (array bounds, subroutine interfaces, etc.); a uniform and enforced legible coding style, using code processing tools; C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.); C/C++ implementation of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.). (This may all seem like "evident" modern programming practice, but in a legacy Fortran world most of these are big changes to the typical programmer workflow.) The goal with all that is to have trustworthy, maintainable and modular code, whereas in typical Fortran, modularity is often not a primary goal, and code is trustworthy only if the original developer was very clever and the code has not been changed since then! (I'm joking a bit here, but not much.) I searched around for references on object-oriented Fortran and programming-by-contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes and papers by people with no large-scale project involvement, and dead projects. Any good URL, advice, reference paper/books on the subject?

    Read the article

  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory and the high number of (de)allocations are a performance bottleneck. I need a mostly STL-compatible map and set class which can use a contiguous block of memory for its internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes. Before I write my own I'd like to check what is already available. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere? Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know, STL memory pools are per-type, not per instance. Such global pools are not efficient with respect to memory locality in multi-cpu/core processing. Object pools don't work, as the types will be shared between instances but their pools should not be. A hash map may be an option in some cases.

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as below: The following query executes quickly (1 second): SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID but if we add a filter to the query, then it takes approximately 2 minutes to return: SELECT SA.* FROM cg.SEARCHSERVER_ACTYS AS SA JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key]=SA.UNIQUE_ID WHERE SA.CHG_DATE'19 Feb 2010' Looking at the execution plan for the two queries, I can see that in the second case there are two places where there are huge differences between the actual and estimated number of rows, these being: 1) For the FulltextMatch table valued function where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join) and 2) For the index seek on the full text index, where the estimate is 1 row and the actual is 13,000 rows As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows) hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding an OPTION (OPTIMIZE FOR UNKNOWN) to the query or (b) by forcing a HASH JOIN to be used. In both of these cases the query returns in sub 1 second and the estimates appear reasonable. My question really is 'why are the estimates being used in the poorly performing case so wildly inaccurate and what can be done to improve them'? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.

    Read the article

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules: a password, "foobar12" (we are not discussing the strength of the password); a language, Java 1.6 for this discussion; a database: PostgreSQL, MySQL, SQL Server or Oracle. Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple. MessageDigest in Java provides MD5 and SHA1 (through SHA512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide MD5 functions for use in inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity you might not be able to afford if you have a legacy system with one (1) column for the password, where the cost of updating the table and the code could be too high. But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password, then re-hashing to match using the hash algorithm...??? So... any takers on helping? :) Am I close?
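
    For what it's worth, the usual single-column trick is that the salt is never recovered from the hash at all: it is stored next to the hash in the same column (for example, prepended and separated by a delimiter), so verification just re-reads the salt and repeats the computation. A minimal sketch of the idea follows; C# is used here only for brevity (the Java version with SecureRandom and MessageDigest is analogous), and the 16-byte salt, SHA-256 and the ":" separator are arbitrary choices, not a prescription.

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class SaltedHash
        {
            // Produces "base64(salt):base64(sha256(salt + password))" for a single DB column.
            public static string Create(string password)
            {
                byte[] salt = new byte[16];
                new RNGCryptoServiceProvider().GetBytes(salt);   // random salt, stored in the clear
                return Convert.ToBase64String(salt) + ":" +
                       Convert.ToBase64String(Hash(salt, password));
            }

            public static bool Verify(string password, string stored)
            {
                string[] parts = stored.Split(':');              // recover the salt, not the password
                byte[] salt = Convert.FromBase64String(parts[0]);
                // Plain string comparison; fine for illustration, not constant-time.
                return Convert.ToBase64String(Hash(salt, password)) == parts[1];
            }

            private static byte[] Hash(byte[] salt, string password)
            {
                byte[] pwd = Encoding.UTF8.GetBytes(password);
                byte[] input = new byte[salt.Length + pwd.Length];
                Buffer.BlockCopy(salt, 0, input, 0, salt.Length);
                Buffer.BlockCopy(pwd, 0, input, salt.Length, pwd.Length);
                return new SHA256Managed().ComputeHash(input);
            }
        }

    The point is only the shape of the scheme: the salt travels with the hash in the one column, so nothing ever has to be "removed" from the hash to check a login.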

    Read the article

  • Remove undesired indexed keywords from Sql Server FTS Index

    - by Scott
    Could anyone tell me if SQL Server 2008 has a way to prevent keywords from being indexed that aren't really relevant to the types of searches that will be performed? For example, we have the IFilters for PDF and Word hooked in and our documents are being indexed properly as far as I can tell. These documents, however, have lots of numeric values in them that people won't really be searching for and that don't bring back meaningful results. These are still being indexed and create lots of entries in the full-text catalog. Basically we are trying to optimize our search engine in any way we can, and assumed all these unnecessary entries couldn't be helping performance. I want my catalog to consist of alphabetic keywords only. The current IFilters work better than anything I could write in the time I have, but they just index more than I need. This is an example of some of the terms from sys.dm_fts_index_keywords_by_document that I want out: $1,000, $100, $250, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 129, 13.1, 14, 14.12, 145, 15, 16.2, 16.4, 18, 18.1, 18.2, 18.3, 18.4, 18.5. These are some examples from the same management view that I think are desirable to keep and search on: above, accordingly, accounts, add, addition, additional, additive. Any help would be greatly appreciated!

    Read the article

  • C# web extraction programming: which libraries? Examples/samples please

    - by user287745
    I have just started programming and have made a few small applications in C and C#. My understanding is that programming for the web, and things related to the web, is nowadays a fairly easy task. Please note this is for personal learning, not for rent-a-coder or any money making. I want an application which can run on any Windows platform, even Windows 98. The application should start automatically at a scheduled time and do the following: connect to a site which displays a stock price summary (high, low, current, open); capture the data (excluding the other things on the site); and save it to disk (an SQL database). Please note: an Internet connection is assumed to always be available, and I do not want to know how to design the database schema or create the database. The stock exchange has no rule prohibiting the use of the data provided on its site, but I do not want to mention its name in case I am wrong; in any case this is for personal, private use only. The summary pricing data is arranged in a table such that, when copied and pasted into MS Excel, it automatically forms a table. I need guidance on the steps, please, with examples and libraries.
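
    As a very rough starting point only: the fetch-and-store part can be done with classes that exist even in .NET 2.0 (WebClient for the download, plain ADO.NET for the insert). In the sketch below the URL, the regular expression and the Summary table/connection string are placeholders made up for illustration; the HTML parsing is the part you would adapt to the actual page layout.

        using System;
        using System.Data.SqlClient;
        using System.Net;
        using System.Text.RegularExpressions;

        class StockScrape
        {
            static void Main()
            {
                WebClient client = new WebClient();
                string html = client.DownloadString("http://example.com/stock-summary"); // placeholder URL

                // Placeholder pattern: pull the <td>...</td> cells out of the summary table.
                MatchCollection cells = Regex.Matches(html, @"<td[^>]*>(.*?)</td>", RegexOptions.Singleline);

                using (SqlConnection conn = new SqlConnection("Server=.;Database=Stocks;Integrated Security=true"))
                {
                    conn.Open();
                    foreach (Match cell in cells)
                    {
                        SqlCommand cmd = new SqlCommand(
                            "INSERT INTO Summary (CapturedAt, Value) VALUES (@t, @v)", conn); // assumed table
                        cmd.Parameters.AddWithValue("@t", DateTime.Now);
                        cmd.Parameters.AddWithValue("@v", cell.Groups[1].Value);
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }

    The "start automatically at a scheduled time" part is probably easiest to leave to the OS (Windows Task Scheduler) rather than building it into the application.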

    Read the article

  • Creating content for rails-based applications

    - by Matthias Hryniszak
    Hi, I'm facing a problem of cleaning up my application in Ruby on Rails. What I have is a pretty standard 3-panel, header and footer layout where different parts of the screen contain different functionality. By that I mean for example that the header contains (among others) a select that allows one to select parts of the application and a context-dependent menu. The main content area contains obviously the most interactive stuff whereas side panels contain quick-links with stuff like shopping-cart preview, list of potentially attractive products for the customer, a selector to narrow down the list of options... I was wondering how do I go about simplifying the design. Right now I have the stuff that provides data for the "common" stuff (as opposed to direct content that's placed in the center) called from all the actions (with a filter) but that doesn't feel right for me. I've read that "components" are also not the way to go for obvious performance reasons. Is there something that's more like component-oriented (other frameworks do have that kind of stuff - Grails: <ui:include ../>, ASP.NET MVC: <% Html.RenderAction() %>)? Best regards, Matthias.

    Read the article

  • SQL to get list of dates as well as days before and after without duplicates

    - by Nathan Koop
    I need to display a list of dates, which I have in a table: SELECT mydate AS MyDate, 1 AS DateType FROM myTable WHERE myTable.fkId = @MyFkId;
        Jan 1, 2010 - 1
        Jan 2, 2010 - 1
        Jan 10, 2010 - 1
    No problem. However, I now need to display the date before and the date after as well, with a different DateType:
        Dec 31, 2009 - 2
        Jan 1, 2010 - 1
        Jan 2, 2010 - 1
        Jan 3, 2010 - 2
        Jan 9, 2010 - 2
        Jan 10, 2010 - 1
        Jan 11, 2010 - 2
    I thought I could use a union: SELECT MyDate, DateType FROM ( SELECT mydate - 1 AS MyDate, 2 AS DateType FROM myTable WHERE myTable.fkId = @MyFkId UNION SELECT mydate + 1 AS MyDate, 2 AS DateType FROM myTable WHERE myTable.fkId = @MyFkId UNION SELECT mydate AS MyDate, 1 AS DateType FROM myTable WHERE myTable.fkId = @MyFkId ) AS myCombinedDateTable
    This however includes duplicates of the original dates:
        Dec 31, 2009 - 2
        Jan 1, 2009 - 2
        Jan 1, 2010 - 1
        Jan 2, 2010 - 2
        Jan 2, 2010 - 1
        Jan 3, 2010 - 2
        Jan 9, 2010 - 2
        Jan 10, 2010 - 1
        Jan 11, 2010 - 2
    How can I best remove these duplicates? I am considering a temporary table, but am unsure if that is the best way to do it. It also appears that this may cause performance issues, as I am running the same query three separate times. What would be the best way to handle this request?

    Read the article

  • Creating/Maintaining a large project-agnostic code library

    - by bufferz
    In order to reduce repetition and streamline testing/debugging, I'm trying to find the best way to develop a group of libraries that many projects can utilize. I'd like to keep individual executables relatively small, and have shared libraries for math, database, collections, graphics, etc. that were previously scattered among several projects and in many cases duplicated (bad!). This library is to live in an SVN repo and several programmers will be working on it. It will be in constant development along with the executables that utilize it. For example, I want a code file in ProjectA to look something like the following: using MyCompany.Math.2D; //static 2D math methods using MyCompany.Math.3D; //static 3D math methods using MyCompany.Comms.SQL; //static methods for doing simple SQL DB I/O using MyCompany.Graphics.BitmapOperations; //static methods that play with bitmaps So in my ProjectA solution file in Visual Studio, in order to develop/debug the MyCompany library I have to add several projects (Math, Comms, Graphics). Things get pretty cluttered and solution files get out of date quickly between programmer SVN commits. I'm just looking for a high-level approach to maintaining a large, shared code base in an SVN repository. I am fully willing to radically redesign my approach. I'm looking for that warm fuzzy feeling you get when your design approach is spot on and development is fluid and natural. Any ideas? Thanks!

    Read the article

  • How do I process the configure file when cross-compiling with mingw?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure and then tried to compile with: make CC="/opt/local/bin/i386-mingw32-g++" That didn't work, because the configure script had found include files that were not available to the mingw system. So then I tried: ./configure CC="/opt/local/bin/i386-mingw32-g++" But that didn't work either; the configure script gives me this error: ./configure: line 5209: syntax error near unexpected token `newline' ./configure: line 5209: ` *_cv_*' because of this code: # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, we kill variables containing newlines. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. ( for ac_var in `(set) 2>&1 | sed -n 's/^\(a-zA-Z_a-zA-Z0-9_*\)=.*/\1/p'`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_* fi which is generated when AC_OUTPUT is called. Any thoughts? Is there a correct way to do this?

    Read the article

  • C# Execute Method (with Parameters) with ThreadPool

    - by washtik
    We have the following piece of code (the idea for it was found on this website) which spawns new threads for the method "Do_SomeWork()". This enables us to run the method multiple times asynchronously. The code is: var numThreads = 20; var toProcess = numThreads; var resetEvent = new ManualResetEvent(false); for (var i = 0; i < numThreads; i++) { new Thread(delegate() { Do_SomeWork(Parameter1, Parameter2, Parameter3); if (Interlocked.Decrement(ref toProcess) == 0) resetEvent.Set(); }).Start(); } resetEvent.WaitOne(); However, we would like to make use of the ThreadPool rather than creating new threads ourselves, which can be detrimental to performance. The question is how we can modify the above code to make use of the ThreadPool, keeping in mind that the method "Do_SomeWork" takes multiple parameters and also has a return type (i.e. the method is not void). Also, this is C# 2.0.
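
    One C# 2.0-friendly shape for this (a sketch, not the only way) keeps the same countdown/ManualResetEvent pattern but queues the work through the pool and stores each call's return value in an array slot; the results array is typed object here because Do_SomeWork's actual return type isn't shown, and note the copy of the loop variable, which matters once the delegate runs on a pool thread.

        int numThreads = 20;
        int toProcess = numThreads;
        ManualResetEvent resetEvent = new ManualResetEvent(false);
        object[] results = new object[numThreads];           // one slot per call's return value

        for (int i = 0; i < numThreads; i++)
        {
            int index = i;                                    // copy; don't capture the loop variable itself
            ThreadPool.QueueUserWorkItem(delegate
            {
                results[index] = Do_SomeWork(Parameter1, Parameter2, Parameter3);
                if (Interlocked.Decrement(ref toProcess) == 0)
                    resetEvent.Set();
            });
        }

        resetEvent.WaitOne();                                 // results[] is fully populated here

    One caveat: if the work items are long-running or blocking, the pool's throttling can make this slower to start than dedicated threads, so it is worth measuring both approaches.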

    Read the article

  • Using custom coordinates with QGraphicsScene

    - by Rob
    I am experimenting with a WYSIWYG editor that allows a user to draw shapes on a page, and the Qt graphics scene support seems perfect for this. However, instead of working in pixels I want all my QGraphicsItem objects to work in tenths of a millimetre, but I don't know how to achieve this. For example: // Create a scene that is the size of an A4 page (2100 = 21cm, 2970 = 29.7cm) QGraphicsScene* scene = new QGraphicsScene(0, 0, 2100, 2970); // Add a rectangle located 1cm across, 1cm down, 5cm wide and 2cm high QGraphicsItem* item = scene->addRect(100, 100, 500, 200); ... QGraphicsView* view = new QGraphicsView(scene); setCentralWidget(view); Now, when I display the scene above I want the shapes to appear at the correct size for the screen DPI. Is this simply a case of using QGraphicsView::scale, or do I have to do something more complicated? Note that if I were using a custom QWidget instead, I would use QPainter::setWindow and QPainter::setViewport to create a custom mapping mode, but I can't see how to do this using the graphics scene support.

    Read the article

  • Using variables inside macros in SQL

    - by Tim
    Hello, I want to use variables inside my macro SQL on Teradata. I thought I could do something like the following: REPLACE MACRO DbName.MyMacro ( MacroNm VARCHAR(50) ) AS ( /* Variable to store last time the macro was run */ DECLARE V_LAST_RUN_DATE TIMESTAMP; /* Get last run date and store in V_LAST_RUN_DATE */ SELECT LastDate INTO V_LAST_RUN_DATE FROM DbName.RunLog WHERE MacroNm = :MacroNm; /* Update the last run date to now and save the old date in history */ EXECUTE MACRO DbName.RunLogUpdater( :MacroNm ,V_LAST_RUN_DATE ,CURRENT_TIMESTAMP ); ); However, that didn't work, so I thought of this instead: REPLACE MACRO DbName.MyMacro ( MacroNm VARCHAR(50) ) AS ( /* Variable to store last time the macro was run */ CREATE VOLATILE TABLE MacroVars AS ( SELECT LastDate AS V_LAST_RUN_DATE FROM DbName.RunLog WHERE MacroNm = :MacroNm; ) WITH DATA ON COMMIT PRESERVE ROWS; /* Update the last run date to now and save the old date in history */ EXECUTE MACRO DbName.RunLogUpdater( :MacroNm ,SELECT V_LAST_RUN_DATE FROM MacroVars ,CURRENT_TIMESTAMP ); ); I can do what I'm looking for with a stored procedure, but I want to avoid that for performance reasons. Do you have any ideas about this? Is there anything else I can try? Cheers Tim

    Read the article

  • WPF abnormal CPU usage for animation

    - by 0xDEAD BEEF
    Hi! I am developing a WPF application and a client reports extremely high CPU usage (90%), which I have been unable to reproduce. I have traced the bottleneck down to these lines. It is a simple glowing animation for a small single-LED control (a blinking LED). What could be the reason for this simple animation taking up SO much CPU? <Trigger Property="State"> <Trigger.Value> <local:BlinkingLedStatus>Blinking</local:BlinkingLedStatus> </Trigger.Value> <Trigger.EnterActions> <BeginStoryboard Name="beginStoryBoard"> <Storyboard> <DoubleAnimation Storyboard.TargetName="glow" Storyboard.TargetProperty="Opacity" AutoReverse="True" From="0.0" To="1.0" Duration="0:0:0.5" RepeatBehavior="Forever"/> </Storyboard> </BeginStoryboard> </Trigger.EnterActions> <Trigger.ExitActions> <StopStoryboard BeginStoryboardName="beginStoryBoard"/> </Trigger.ExitActions> </Trigger>

    Read the article

  • Fastest way to generate delimited string from 1d numpy array

    - by Abiel
    I have a program which needs to turn many large one-dimensional numpy arrays of floats into delimited strings. I am finding this operation quite slow relative to the mathematical operations in my program and am wondering if there is a way to speed it up. For example, consider the following loop, which takes 100,000 random numbers in a numpy array and joins the array into a comma-delimited string. import numpy as np x = np.random.randn(100000) for i in range(100): ",".join(map(str, x)) This loop takes about 20 seconds to complete (total, not per cycle). In contrast, 100 cycles of something like elementwise multiplication (x*x) would take less than 1/10 of a second to complete. Clearly the string join operation creates a large performance bottleneck; in my actual application it will dominate total runtime. This makes me wonder: is there a faster way than ",".join(map(str, x))? Since map() is where almost all the processing time occurs, this comes down to the question of whether there is a faster way to convert a very large number of numbers to strings.

    Read the article

  • How can I factor out repeated expressions in an SQL query? Column aliases don't seem to be the ticket

    - by Weston C
    So, I've got a query that looks something like this: SELECT id, DATE_FORMAT(CONVERT_TZ(callTime,'+0:00','-7:00'),'%b %d %Y') as callDate, DATE_FORMAT(CONVERT_TZ(callTime,'+0:00','-7:00'),'%H:%i') as callTimeOfDay, SEC_TO_TIME(callLength) as callLength FROM cs_calldata WHERE customerCode='999999-abc-blahblahblah' AND CONVERT_TZ(callTime,'+0:00','-7:00') >= '2010-04-25' AND CONVERT_TZ(callTime,'+0:00','-7:00') <= '2010-05-25' If you're like me, you probably start thinking that maybe it would improve readability and possibly the performance of this query if I wasn't asking it to compute CONVERT_TZ(callTime,'+0:00','-7:00') four separate times. So I try to create a column alias for that expression and replace further occurances with that alias: SELECT id, CONVERT_TZ(callTime,'+0:00','-7:00') as callTimeZoned, DATE_FORMAT(callTimeZoned,'%b %d %Y') as callDate, DATE_FORMAT(callTimeZoned,'%H:%i') as callTimeOfDay, SEC_TO_TIME(callLength) as callLength FROM cs_calldata WHERE customerCode='5999999-abc-blahblahblah' AND callTimeZoned >= '2010-04-25' AND callTimeZoned <= '2010-05-25' This is when I learned, to quote the MySQL manual: Standard SQL disallows references to column aliases in a WHERE clause. This restriction is imposed because when the WHERE clause is evaluated, the column value may not yet have been determined. So, that approach would seem to be dead in the water. How is someone writing queries with recurring expressions like this supposed to deal with it?

    Read the article

  • How does PHP PDO work internally?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally. For my application, I want to do something like "SELECT * FROM myTable" and write the result to a CSV file; the table has around 90,000 rows of data. My question is about using PDOStatement->fetch as I am doing here: // First, prepare the statement, using placeholders $query = "SELECT * FROM tableName"; $stmt = $this->connection->prepare($query); // Execute the statement $stmt->execute(); var_dump($stmt->fetch(PDO::FETCH_ASSOC)); while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) { echo "Hi"; // Export every row to a file fputcsv($data, $row); } After every fetch from the database, will the result of that fetch be kept in memory? Meaning that when I do the second fetch, memory would hold the data of the first fetch as well as the second; so if I have 90,000 rows and fetch them one at a time, does memory keep accumulating each new fetch result without releasing the previous ones, so that by the last fetch memory already holds 89,999 rows? Is this how PDOStatement::fetch works? Performance-wise, how does it stack up against PDOStatement::fetchAll?

    Read the article
