Search Results

Search found 19555 results on 783 pages for 'job performance'.


  • F# currying efficiency?

    - by Eamon Nerbonne
    I have a function that looks as follows: let isInSet setElems normalize p = normalize p |> (Set.ofList setElems).Contains This function can be used to quickly check whether an element is semantically part of some set; for example, to check if a file path belongs to an html file: let getLowerExtension p = (Path.GetExtension p).ToLowerInvariant() let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension However, when I use a function such as the above, performance is poor, since evaluation of the function body as written in "isInSet" seems to be delayed until all parameters are known - in particular, invariant bits such as (Set.ofList setElems).Contains are re-evaluated on each execution of isHtmlPath. How can I best maintain F#'s succinct, readable nature while still getting the more efficient behavior in which the set construction is pre-evaluated? The above is just an example; I'm looking for a general pattern that avoids bogging me down in implementation details - where possible I'd like to avoid being distracted by details such as the implementation's execution order, since that's usually not important to me and kind of undermines a major selling point of functional programming.
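
    A rough illustration of the distinction being asked about is a predicate that rebuilds its lookup set on every call versus one that builds it once and closes over it. This is only a Python sketch of that precomputation pattern (made-up names, not F#), but the idea carries over:

        import os

        def is_in_set(set_elems, normalize):
            # Build the lookup set once, at partial-application time,
            # and return a closure that only does the cheap work per call.
            elems = set(set_elems)
            def check(p):
                return normalize(p) in elems
            return check

        def get_lower_extension(p):
            return os.path.splitext(p)[1].lower()

        # The set of extensions is constructed a single time here, not on every call.
        is_html_path = is_in_set([".htm", ".html", ".xhtml"], get_lower_extension)

        print(is_html_path("index.HTML"))  # True
        print(is_html_path("notes.txt"))   # False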

    Read the article

  • Using SqlBulkCopy in a multithread scenario with ThreadPool issue

    - by Ruben F.
    Hi. I'm facing a dilemma (!). In a first scenario, I implemented a solution that replicates data from one database to another using SqlBulkCopy synchronously, and I had no problem at all. Now, using ThreadPool, I implemented the same in an asynchronous scenario, one thread per table, and all works fine, but after some time (usually 1 hour, because the copy operations take about the same time), the operations sent to the ThreadPool stop being executed. There is a different SqlBulkCopy using a different SqlConnection per thread. I already checked the number of free threads, and they are all free at the beginning of the invocation... I have one AutoResetEvent to wait for the threads to finish their job before launching again, and a FIFO Semaphore that holds the counter of active threads. Is there some issue that I have forgotten or that I should evaluate when using SqlBulkCopy? I'd appreciate some help, because I'm out of ideas ;) Thanks
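
    For reference, the orchestration shape being described (one worker per table, each with its own connection, and the caller waiting on every job so nothing can vanish silently) can be sketched language-agnostically. This Python sketch with a stand-in copy function is not the .NET ThreadPool/SqlBulkCopy API, just the structure:

        import time
        from concurrent.futures import ThreadPoolExecutor, as_completed

        def copy_table(table_name):
            # Stand-in for the real work: a dedicated connection per worker,
            # the bulk copy for one table, and the connection released at the end.
            time.sleep(0.1)  # simulate the copy
            return table_name

        def replicate(tables, max_workers=4):
            results = {}
            with ThreadPoolExecutor(max_workers=max_workers) as pool:
                futures = {pool.submit(copy_table, t): t for t in tables}
                for fut in as_completed(futures):
                    table = futures[fut]
                    try:
                        fut.result()  # re-raises any exception from the worker thread
                        results[table] = "done"
                    except Exception as exc:
                        results[table] = "failed: {!r}".format(exc)
            return results

        print(replicate(["orders", "customers", "invoices"]))

    Waiting on every future (rather than a single counter or event) makes a swallowed exception or a never-finished task visible, which is often where "the pool stops executing work" problems show up.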

    Read the article

  • Java Audit table logging, MySQL equivalent of CONTEXT_INFO.

    - by Julia
    Hi, I am looking for the MySQL equivalent of CONTEXT_INFO that is present in SQL Server, or any other session-variable-like mechanism I can use to pass the username to the trigger. I am currently working on logging table data for audit. I need to pass the username of the logged-in user to the delete trigger. Any ideas? We are deleting the rows from the table in a few cases and marking them as deleted in others. Any alternate solutions are welcome. I thought of using AOP, but it could prove problematic with cascading deletes. I want to look into Hibernate Interceptors, though I'm not sure at this point whether that works. If I can find the MySQL equivalent of CONTEXT_INFO, my job is done and elegant as well. Thanks, Julia.
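
    A commonly used workaround is a MySQL user-defined session variable: it lives for the lifetime of the connection and is visible inside triggers fired on that connection, so the application can set it just before issuing the delete. A rough Python sketch of the idea (table and column names are made up; assumes a DB-API driver such as PyMySQL):

        import pymysql  # any MySQL DB-API driver would do

        def delete_contact(conn, contact_id, app_username):
            with conn.cursor() as cur:
                # Session-scoped variable; a delete trigger on `contacts` can read
                # @app_user and copy it into the audit row.
                cur.execute("SET @app_user = %s", (app_username,))
                cur.execute("DELETE FROM contacts WHERE contact_id = %s", (contact_id,))
            conn.commit()

    Inside the trigger, the audit insert can then reference @app_user. One caveat: with connection pooling the variable persists on the pooled connection, so set it on every request rather than assuming a clean session.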

    Read the article

  • SQL Server, temporary tables with truncate vs table variable with delete

    - by Richard
    I have a stored procedure inside which I create a temporary table that typically contains between 1 and 10 rows. This table is truncated and filled many times during the stored procedure. It is truncated because this is faster than delete. Do I get any performance increase by replacing this temporary table with a table variable, even though I suffer a penalty for using delete (truncate does not work on table variables)? While table variables are mainly in memory and are generally faster than temp tables, do I lose any benefit by having to delete rather than truncate?

    Read the article

  • Java Concurrency: CAS vs Locking

    - by Hugo Walker
    I'm currently reading the book Java Concurrency in Practice. Chapter 15 discusses nonblocking algorithms and the compare-and-swap (CAS) method. It is written that CAS performs much better than the locking methods. I want to ask people who have already worked with both of these concepts: when do you prefer which? Is it really so much faster? Personally, for me the usage of locks is much clearer and easier to understand, and maybe even better to maintain (please correct me if I am wrong). Should we really focus on building our concurrent code around CAS rather than locks to get a better performance boost, or is maintainability more important? I know there is probably no strict rule for when to use what, but I would just like to hear some opinions and experiences with the new concept of CAS.
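
    Purely as an illustration of the difference in shape: a lock-based counter holds the lock while it computes, while a CAS-based counter reads, computes a new value, and retries if another thread got there first. This Python sketch emulates compareAndSet with an internal lock (CPython has no true CAS primitive, and real CAS is a single CPU instruction), so it only shows the optimistic retry pattern, not its performance:

        import threading

        class EmulatedAtomic:
            """Emulates compareAndSet; not a real lock-free primitive."""
            def __init__(self, value=0):
                self._value = value
                self._lock = threading.Lock()

            def get(self):
                return self._value

            def compare_and_set(self, expected, new):
                with self._lock:
                    if self._value == expected:
                        self._value = new
                        return True
                    return False

        def cas_increment(atom):
            # Optimistic: no lock held while "computing"; retry on contention.
            while True:
                current = atom.get()
                if atom.compare_and_set(current, current + 1):
                    return current + 1

        counter = EmulatedAtomic()
        threads = [threading.Thread(target=lambda: [cas_increment(counter) for _ in range(1000)])
                   for _ in range(4)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(counter.get())  # 4000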

    Read the article

  • Is there a format or service for resume/CV data?

    - by Ben Dauphinee
    I have noticed through the process of signing up for various freelance and job seeking or professional network sites that they all want your resume/CV data. And I am really getting tired of copy/pasting this data, especially since I have a website. Is there a standard format or service somewhere that I do not know about for this data? If not, does anyone want to help me build something like this out? I'm thinking a service similar to OpenID that allows you to maintain a central resume to have your data pulled from. No more filling in the same data over and over, and having to maintain the copies on any of the plethora of websites that have that data. Takers?

    Read the article

  • NameValueCollection vs Dictionary<string,string>

    - by frankadelic
    Any reason I should use Dictionary<string,string> instead of NameValueCollection? (in C# / .NET Framework) Option 1, using NameValueCollection: //enter values: NameValueCollection nvc = new NameValueCollection() { {"key1", "value1"}, {"key2", "value2"}, {"key3", "value3"} }; // retrieve values: foreach(string key in nvc.AllKeys) { string value = nvc[key]; // do something } Option 2, using Dictionary<string,string>... //enter values: Dictionary<string, string> dict = new Dictionary<string, string>() { {"key1", "value1"}, {"key2", "value2"}, {"key3", "value3"} }; // retrieve values: foreach (KeyValuePair<string, string> kvp in dict) { string key = kvp.Key; string val = kvp.Value; // do something } For these use cases, is there any advantage to using one versus the other? Any difference in performance, memory use, sort order, etc.?

    Read the article

  • Nice name for `decorator' class?

    - by Lajos Nagy
    I would like to separate the API I'm working on into two sections: 'bare-bones' and 'cushy'. The idea is that all method calls in the 'cushy' section could be expressed in terms of the ones in the 'bare-bones' section, that is, they would only serve as convenience methods for the quick-and-dirty. The reason I would like to do this is that very often when people are beginning to use an API for the first time, they are not interested in details and performance: they just want to get it working. Anybody tried anything similar before? I'm particularly interested in naming conventions and organizing the code.

    Read the article

  • Storing hierarchical (parent/child) data in Python/Django: MPTT alternative?

    - by Parand
    I'm looking for a good way to store and use hierarchical (parent/child) data in Django. I've been using django-mptt, but it seems entirely incompatible with my brain - I end up with non-obvious bugs in non-obvious places, mostly when moving things around in the tree: I end up with inconsistent state, where a node and its parent will disagree on their relationship. My needs are simple. Given a node: find its root, find its ancestors, find its descendants. With a tree: easily move nodes (i.e. change parent). My trees will be smallish (at most 10k nodes over 20 levels, generally much, much smaller, say 10 nodes with 1 or 2 levels). I have to think there is an easier way to do trees in Python/Django. Are there other approaches that do a better job of maintaining consistency?
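
    For trees this small, a plain adjacency list (a nullable self-referencing foreign key) with short helper methods may already cover the listed needs, and moving a node is just reassigning its parent. A minimal sketch, assuming the model lives in one of your own Django apps (all names are made up):

        from django.db import models

        class Node(models.Model):
            name = models.CharField(max_length=100)
            parent = models.ForeignKey(
                "self", null=True, blank=True,
                related_name="children", on_delete=models.CASCADE,
            )

            def get_root(self):
                node = self
                while node.parent is not None:
                    node = node.parent
                return node

            def get_ancestors(self):
                ancestors = []
                node = self.parent
                while node is not None:
                    ancestors.append(node)
                    node = node.parent
                return ancestors

            def get_descendants(self):
                # One query per tree level; fine for ~10k nodes over ~20 levels.
                result, frontier = [], [self]
                while frontier:
                    children = list(Node.objects.filter(parent__in=frontier))
                    result.extend(children)
                    frontier = children
                return result

    Because no derived left/right or path columns are stored, there is no cached tree state to drift out of sync when nodes are reparented; the trade-off is that ancestor/descendant queries cost a few extra round trips.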

    Read the article

  • Pdssql.dll cannot be found

    - by Kolten
    I am attempting to open a Crystal Reports 8.5 document, and when I try to set the database to the production data server, I get the error "Pdssql.dll cannot be found". Googling, this is a common problem, but none of the fixes I tried seem to work. This is a new computer. I do have SQL Server 2008 client tools installed, but I believe previously I had SQL Server 2005 client tools. I attempted to install the SQL Server 2005 client tools, but that didn't go through because I have 2008 installed, and I require 2008 to do my job now. Everything I find says this is a 16-bit driver issue and that I need to install the 2005 client tools. Unfortunately this cannot be done because I have 2008. Is there some sort of workaround I can do? Thanks

    Read the article

  • Can I do video communication with silverlight 4.0?

    - by tom greene
    With Silverlight 4.0, it is possible to show a live video of the user on the screen. Here is the code: VideoBrush videoBrush = new VideoBrush(); CaptureSource captureSource = new CaptureSource { VideoCaptureDevice = CaptureDeviceConfiguration.GetAvailableVideoCaptureDevices().First() }; bool b = CaptureDeviceConfiguration.RequestDeviceAccess(); videoBrush.SetSource(captureSource); captureSource.Start(); myrect.Fill = videoBrush; However, I am looking for a way to show the video to someone else - seeing oneself on screen is not that interesting. Is it possible? Do I need my own server? Can I use cloud services to do the communication? Are there performance issues?

    Read the article

  • Is there any killer application for Ontology/semantics/OWL/RDF yet?

    - by narnirajesh
    Hi guys, I got interested in semantic technologies after reading a lot of books, blogs and articles on the net saying that they would make data machine-understandable, allow intelligent agents to do great reasoning, enable automated and dynamic service composition, etc. I have been reading the same stuff for 2 years. The number of articles/blogs/semantic conferences has increased considerably, but I am still unable to see any killer application. Why is that? Or is there some application/product (commercial/open-source) already existing which actually does all that is being boasted of? To put it more precisely: is there any product that leverages semantic technologies (esp. RDF/OWL/SPARQL) and delivers functionality/performance/maintainability which would not have been possible with existing (non-semantic) technologies? Some product that is completely dependent on semantic technologies, really adds value to the customers, and generates revenue?

    Read the article

  • Handler invocation speed: Objective-C vs virtual functions

    - by Kerido
    I heard that calling a handler (delegate, etc.) in Objective-C can be even faster than calling a virtual function in C++. Is that really correct? If so, how can that be? AFAIK, virtual functions are not that slow to call. At least, this is my understanding of what happens when a virtual function is called: Compute the index of the function pointer location in the vtbl. Obtain the pointer to the vtbl. Dereference the pointer and obtain the beginning of the array of function pointers. Offset (in pointer scale) the beginning of the array by the index value obtained in step 1. Issue a call instruction. Unfortunately, I don't know Objective-C, so it's hard for me to compare performance. But at least, the mechanism of a virtual function call doesn't look that slow, right? How can anything other than a static function call be faster?
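
    The enumerated steps boil down to "look a function pointer up in a per-class table, then make an indirect call"; Objective-C message sends do a selector lookup (usually cache-assisted) ending in the same kind of indirect call. A toy Python sketch of both table lookups, just to make the mechanism concrete (it says nothing about the real relative speeds):

        def area_circle(obj): return 3.14159 * obj["r"] ** 2
        def area_square(obj): return obj["side"] ** 2

        # vtable style: per-class list of function pointers, indexed by a fixed slot
        AREA_SLOT = 0
        circle_vtbl = [area_circle]
        square_vtbl = [area_square]

        def call_virtual(obj, slot):
            return obj["vtbl"][slot](obj)         # pointer chase + indexed indirect call

        # message-send style: per-class table keyed by selector name
        circle_methods = {"area": area_circle}

        def send_message(obj, selector):
            return obj["methods"][selector](obj)  # hash lookup + indirect call

        circle = {"vtbl": circle_vtbl, "methods": circle_methods, "r": 2.0}
        square = {"vtbl": square_vtbl, "side": 3.0}

        print(call_virtual(circle, AREA_SLOT), call_virtual(square, AREA_SLOT))
        print(send_message(circle, "area"))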

    Read the article

  • WinForms prints to default printer even if it's not available/connected

    - by Valentein
    How can I determine if a printer is connected? Typically this application prints to the default printer, but in some cases that printer may not be available. If so, I don't want the job sent to its queue but rather printed to another available printer. I understand the PrinterSettings.InstalledPrinters property. Does PrintDocument.PrinterSettings.IsValid return false if a printer is not available? Does WPF provide this kind of functionality? My problem is different than Printing problem in C# windows app - Always prints to default printer

    Read the article

  • PyPy -- How can it possibly beat CPython?

    - by Vulcan Eager
    From the Google Open Source Blog: PyPy is a reimplementation of Python in Python, using advanced techniques to try to attain better performance than CPython. Many years of hard work have finally paid off. Our speed results often beat CPython, ranging from being slightly slower, to speedups of up to 2x on real application code, to speedups of up to 10x on small benchmarks. How is this possible? Which Python implementation was used to implement PyPy? CPython? And what are the chances of a PyPyPy or PyPyPyPy beating their score? (On a related note... why would anyone try something like this?)

    Read the article

  • Tricks to avoid losing motivation?

    - by AareP
    Motivation is a tricky thing to keep up. Once I thought that ambitious projects would keep a programmer motivated, and overly simple tasks would hinder motivation. Now I have plenty of experience with small and large projects, desktop/web/database programming, c++/c#/java/php languages, oop/non-oop paradigms, day-job/free-time programming.. but I still can't answer the question of motivation. Which programming tasks do I like, and which not? It seems to depend on too many variables. One thing remains constant, though: starting everything from scratch is always more motivating than extending some existing system. Unfortunately it's hard to use this trick in productive programming. :) So my question is, what tricks can a programmer use to stay motivated? For example, should we use pen and paper as much as possible, in order not to get fed up with the monitor and keyboard?

    Read the article

  • OpenGL basics: calling glDrawElements once per object

    - by Bethor
    Hi all, continuing on from my explorations of the basics of OpenGL (see this question), I'm trying to figure out the basic principles of drawing a scene with OpenGL. I am trying to render a simple cube repeated n times in every direction. My method appears to yield terrible performance : 1000 cubes brings performance below 50fps (on a QuadroFX 1800, roughly a GeForce 9600GT). My method for drawing these cubes is as follows: done once: set up a vertex buffer and array buffer containing my cube vertices in model space set up an array buffer indexing the cube for drawing as 12 triangles done for each frame: update uniform values used by the vertex shader to move all cubes at once done for each cube, for each frame: update uniform values used by the vertex shader to move each cube to its position call glDrawElements to draw the positioned cube Is this a sane method ? If not, how does one go about something like this ? I'm guessing I need to minimize calls to glUniform, glDrawElements, or both, but I'm not sure how to do that. Full code for my little test : (depends on gletools and pyglet) I'm aware that my init code (at least) is really ugly; I'm concerned with the rendering code for each frame right now, I'll move to something a little less insane for the creation of the vertex buffers and such later on. import pyglet from pyglet.gl import * from pyglet.window import key from numpy import deg2rad, tan from gletools import ShaderProgram, FragmentShader, VertexShader, GeometryShader vertexData = [-0.5, -0.5, -0.5, 1.0, -0.5, 0.5, -0.5, 1.0, 0.5, -0.5, -0.5, 1.0, 0.5, 0.5, -0.5, 1.0, -0.5, -0.5, 0.5, 1.0, -0.5, 0.5, 0.5, 1.0, 0.5, -0.5, 0.5, 1.0, 0.5, 0.5, 0.5, 1.0] elementArray = [2, 1, 0, 1, 2, 3,## back face 4, 7, 6, 4, 5, 7,## front face 1, 3, 5, 3, 7, 5,## top face 2, 0, 4, 2, 4, 6,## bottom face 1, 5, 4, 0, 1, 4,## left face 6, 7, 3, 6, 3, 2]## right face def toGLArray(input): return (GLfloat*len(input))(*input) def toGLushortArray(input): return (GLushort*len(input))(*input) def initPerspectiveMatrix(aspectRatio = 1.0, fov = 45): frustumScale = 1.0 / tan(deg2rad(fov) / 2.0) fzNear = 0.5 fzFar = 300.0 perspectiveMatrix = [frustumScale*aspectRatio, 0.0 , 0.0 , 0.0 , 0.0 , frustumScale, 0.0 , 0.0 , 0.0 , 0.0 , (fzFar+fzNear)/(fzNear-fzFar) , -1.0, 0.0 , 0.0 , (2*fzFar*fzNear)/(fzNear-fzFar), 0.0 ] return perspectiveMatrix class ModelObject(object): vbo = GLuint() vao = GLuint() eao = GLuint() initDone = False verticesPool = [] indexPool = [] def __init__(self, vertices, indexing): super(ModelObject, self).__init__() if not ModelObject.initDone: glGenVertexArrays(1, ModelObject.vao) glGenBuffers(1, ModelObject.vbo) glGenBuffers(1, ModelObject.eao) glBindVertexArray(ModelObject.vao) initDone = True self.numIndices = len(indexing) self.offsetIntoVerticesPool = len(ModelObject.verticesPool) ModelObject.verticesPool.extend(vertices) self.offsetIntoElementArray = len(ModelObject.indexPool) ModelObject.indexPool.extend(indexing) glBindBuffer(GL_ARRAY_BUFFER, ModelObject.vbo) glEnableVertexAttribArray(0) #position glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ModelObject.eao) glBufferData(GL_ARRAY_BUFFER, len(ModelObject.verticesPool)*4, toGLArray(ModelObject.verticesPool), GL_STREAM_DRAW) glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(ModelObject.indexPool)*2, toGLushortArray(ModelObject.indexPool), GL_STREAM_DRAW) def draw(self): glDrawElements(GL_TRIANGLES, self.numIndices, GL_UNSIGNED_SHORT, self.offsetIntoElementArray) class PositionedObject(object): 
def __init__(self, mesh, pos, objOffsetUf): super(PositionedObject, self).__init__() self.mesh = mesh self.pos = pos self.objOffsetUf = objOffsetUf def draw(self): glUniform3f(self.objOffsetUf, self.pos[0], self.pos[1], self.pos[2]) self.mesh.draw() w = 800 h = 600 AR = float(h)/float(w) window = pyglet.window.Window(width=w, height=h, vsync=False) window.set_exclusive_mouse(True) pyglet.clock.set_fps_limit(None) ## input forward = [False] left = [False] back = [False] right = [False] up = [False] down = [False] inputs = {key.Z: forward, key.Q: left, key.S: back, key.D: right, key.UP: forward, key.LEFT: left, key.DOWN: back, key.RIGHT: right, key.PAGEUP: up, key.PAGEDOWN: down} ## camera camX = 0.0 camY = 0.0 camZ = -1.0 def simulate(delta): global camZ, camX, camY scale = 10.0 move = scale*delta if forward[0]: camZ += move if back[0]: camZ += -move if left[0]: camX += move if right[0]: camX += -move if up[0]: camY += move if down[0]: camY += -move pyglet.clock.schedule(simulate) @window.event def on_key_press(symbol, modifiers): global forward, back, left, right, up, down if symbol in inputs.keys(): inputs[symbol][0] = True @window.event def on_key_release(symbol, modifiers): global forward, back, left, right, up, down if symbol in inputs.keys(): inputs[symbol][0] = False ## uniforms for shaders camOffsetUf = GLuint() objOffsetUf = GLuint() perspectiveMatrixUf = GLuint() camRotationUf = GLuint() program = ShaderProgram( VertexShader(''' #version 330 layout(location = 0) in vec4 objCoord; uniform vec3 objOffset; uniform vec3 cameraOffset; uniform mat4 perspMx; void main() { mat4 translateCamera = mat4(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, cameraOffset.x, cameraOffset.y, cameraOffset.z, 1.0f); mat4 translateObject = mat4(1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, objOffset.x, objOffset.y, objOffset.z, 1.0f); vec4 modelCoord = objCoord; vec4 positionedModel = translateObject*modelCoord; vec4 cameraPos = translateCamera*positionedModel; gl_Position = perspMx * cameraPos; }'''), FragmentShader(''' #version 330 out vec4 outputColor; const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f); void main() { outputColor = fillColor; }''') ) shapes = [] def init(): global camOffsetUf, objOffsetUf with program: camOffsetUf = glGetUniformLocation(program.id, "cameraOffset") objOffsetUf = glGetUniformLocation(program.id, "objOffset") perspectiveMatrixUf = glGetUniformLocation(program.id, "perspMx") glUniformMatrix4fv(perspectiveMatrixUf, 1, GL_FALSE, toGLArray(initPerspectiveMatrix(AR))) obj = ModelObject(vertexData, elementArray) nb = 20 for i in range(nb): for j in range(nb): for k in range(nb): shapes.append(PositionedObject(obj, (float(i*2), float(j*2), float(k*2)), objOffsetUf)) glEnable(GL_CULL_FACE) glCullFace(GL_BACK) glFrontFace(GL_CW) glEnable(GL_DEPTH_TEST) glDepthMask(GL_TRUE) glDepthFunc(GL_LEQUAL) glDepthRange(0.0, 1.0) glClearDepth(1.0) def update(dt): print pyglet.clock.get_fps() pyglet.clock.schedule_interval(update, 1.0) @window.event def on_draw(): with program: pyglet.clock.tick() glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT) glUniform3f(camOffsetUf, camX, camY, camZ) for shape in shapes: shape.draw() init() pyglet.app.run()

    Read the article

  • grammar parser lexer antlr literal

    - by BB
    What's the difference between this grammar: ... if_statement : 'if' condition 'then' statement 'else' statement 'end_if'; ... and this: ... if_statement : IF condition THEN statement ELSE statement END_IF; ... IF : 'if'; THEN: 'then'; ELSE: 'else'; END_IF: 'end_if'; .... ? If there is any difference, does it have an impact on performance? Thanks

    Read the article

  • Concatenate boost::dynamic_bitset or std::bitset

    - by MOnsDaR
    Hey, what is the best way to concatenate 2 bitsets? For example, I've got boost::dynamic_bitset<> test1( std::string("1111") ); boost::dynamic_bitset<> test2( std::string("00") ); and they should be concatenated into a third bitset test3, which then holds 111100. Solutions should use boost::dynamic_bitset. If the solution works with std::bitset too, that would be nice as well. There should be a focus on performance when concatenating the bits.
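
    The usual recipe is: make room in the destination, shift the higher-order part left by the width of the lower-order part, then OR in the lower bits. The same shift-and-OR arithmetic is shown here with Python integers as the bit containers, purely to illustrate the idea (it is not Boost code):

        def concat_bits(high, high_len, low, low_len):
            """Append `low` (low_len bits) after `high` (high_len bits)."""
            return (high << low_len) | low, high_len + low_len

        test1, len1 = 0b1111, 4   # "1111"
        test2, len2 = 0b00, 2     # "00"

        test3, len3 = concat_bits(test1, len1, test2, len2)
        print(format(test3, "0{}b".format(len3)))  # 111100

    With boost::dynamic_bitset the analogous steps would be resizing a copy, shifting it left, and OR-ing in the other operand, but the exact member calls should be checked against the Boost documentation rather than taken from this sketch.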

    Read the article

  • Long-running transactions structured approach

    - by disown
    I'm looking for a structured approach to long-running (hours or more) transactions. As mentioned here, these types of interactions are usually handled by optimistic locking and manual merge strategies. It would be very handy to have a more structured approach to this type of problem using standard transactions. Various long-running interactions such as user registration, order confirmation etc. all have transaction-like semantics, and it is both error-prone and tedious to invent your own fragile manual roll-back and/or time-out/clean-up strategies. Taking an RDBMS as an example, I realize that there would be a major performance cost associated with keeping all the transactions open. As an alternative, I could imagine a database supporting two isolation levels/strategies simultaneously, one for short-running and one for long-running conversations. Long-running conversations could then, for instance, have stricter limitations on data access to facilitate their taking more time (read-only semantics on some data, optimistic locking semantics, etc.). Are there any solutions which could do something similar?
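
    To make the "optimistic locking and manual merge" baseline concrete: the usual building block is a version (or timestamp) column checked at write time. The long conversation reads the row and its version early, holds no database transaction while the user thinks, and the final short transaction applies the update only if the version is unchanged. A minimal sketch, using SQLite and a made-up orders table just for illustration:

        import sqlite3

        def setup():
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, version INTEGER)")
            conn.execute("INSERT INTO orders VALUES (1, 'draft', 1)")
            conn.commit()
            return conn

        def confirm_order(conn, order_id, seen_version):
            # Short transaction at the very end of a long conversation:
            # succeed only if nobody else touched the row since we read it.
            cur = conn.execute(
                "UPDATE orders SET status = 'confirmed', version = version + 1 "
                "WHERE id = ? AND version = ?",
                (order_id, seen_version),
            )
            conn.commit()
            return cur.rowcount == 1   # False means conflict: re-read, merge, retry

        conn = setup()
        print(confirm_order(conn, 1, seen_version=1))  # True, first writer wins
        print(confirm_order(conn, 1, seen_version=1))  # False, stale version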

    Read the article

  • How to aggregate over few types with linq?

    - by Shimmy
    Can someone help me translate the following into a one-liner: Dim items As New List(Of Object) For Each c In ph.Contacts items.Add(New With {.Type = "Contact", .Id = c.ContactId, .Title = c.Title}) Next For Each c In ph.Persons items.Add(New With {.Type = "Person", .Id = c.PersonId, .Title = c.Title}) Next For Each c In ph.Jobs items.Add(New With {.Type = "Job", .Id = c.JobId, .Title = c.Title}) Next Is it possible to merge them all into one query or method call? I don't really care if this is done with something other than LINQ; I am just looking for a more efficient way, as I have a long list coming ahead, and the aggregated list will be strongly typed using Dim list = blah blah
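
    The underlying shape is "project each collection to a common form, then flatten the three projections into one list". In LINQ terms that would be three Select projections joined with Concat (exact VB syntax not shown here); the same shape in a Python sketch with made-up sample data looks like this:

        from itertools import chain

        contacts = [{"ContactId": 1, "Title": "Acme"}]
        persons  = [{"PersonId": 7, "Title": "Ada"}]
        jobs     = [{"JobId": 3, "Title": "Plumbing"}]

        items = list(chain(
            ({"Type": "Contact", "Id": c["ContactId"], "Title": c["Title"]} for c in contacts),
            ({"Type": "Person",  "Id": p["PersonId"],  "Title": p["Title"]} for p in persons),
            ({"Type": "Job",     "Id": j["JobId"],     "Title": j["Title"]} for j in jobs),
        ))
        print(items)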

    Read the article

  • Cross-platform (microcontroller-PC) algorithm development

    - by Kyr
    Hello people! I was asked to develop an algorithm for a network application in C. This project will be developed on Linux for the PC and then it will be transferred to a more portable platform, something that will include a microcontroller. There are many microcontroller vendors out there that provide very nice and large libraries for TCP/IP. This software will hold statistics on the network performance. The whole idea of cross-platform (uC - PC) code seems like rubbish to me, because eventually the code should be written in a more platform-specific way for the microcontroller, but I am not expert enough to judge anyway. Is there any clever way of doing this, or has anyone done this before? My brainstorming has produced "Wrapper library" and "Matlab"... Any ideas? Thx!

    Read the article

  • spring.net application scope repository object on loadbalanced application

    - by Bert Vandamme
    Hi, We have an application running in a load-balanced environment, let's say webservers A and B. The load balancing is at the HTTP level, so the load balancer directs each user request to one of the two webservers. The scope of the repositories in the application is managed by the Spring.NET container, and the application relies on data that can be cached by the repository (for performance reasons). In this case we can never be sure that the cached data in the repositories on both webservers is the same. Is there a mechanism in Spring.NET that can manage this kind of problem? Or is there another common approach for this kind of thing? Any ideas? Thx, Bert

    Read the article

  • Quick question regarding CSS sprites and memory usage

    - by Andy E
    Well, it's more to do with images and memory in general. If I use the same image multiple times on a page, will each image be consolidated in memory? Or will each image use a separate amount of memory? I'm concerned about this because I'm building a skinning system for a Windows Desktop Gadget, and I'm looking at spriting the images in the default skin so that I can keep the file system looking clean. At the same time I want to try and keep the memory footprint to a minimum. If I end up with a single file containing 100 images and re-use that image 100 times across the gadget, I don't want to have performance issues. Cheers.

    Read the article

  • Vsync in Flex/Flash/AS3?

    - by oshyshko
    I work on a 2D shooter game with lots of moving objects on the screen (bullets etc). I use BitmapData.copyPixels(...) to render the entire screen to a buffer:BitmapData. Then I "copyPixels" from "buffer" to screen:BitmapData. The framerate is 60. private var bitmap:Bitmap = new Bitmap(); private var buffer:Bitmap = new Bitmap(); private function start():void { addChild(bitmap); } private function onEnterFrame():void { // render into "buffer" // copy "buffer" -> "bitmap" } The problem is that the sprites are tearing apart: part of a sprite gets shifted horizontally. It looks like a PC game with VSYNC turned off. Has anyone solved this problem? UPDATE: the question is not about performance, but about getting rid of screen tearing. [!] UPDATE: I've created another question, and there you may try both implementations: using the Flash way or BitmapData + copyPixels()

    Read the article
