Search Results

Search found 40310 results on 1613 pages for 'two factor'.

  • Multi-part question about multi-threading, locks and multi-core processors (multi ^ 3)

    - by MusiGenesis
    I have a program with two methods. The first method takes two arrays as parameters, and performs an operation in which values from one array are conditionally written into the other, like so: void Blend(int[] dest, int[] src, int offset) { for (int i = 0; i < src.Length; i++) { int rdr = dest[i + offset]; dest[i + offset] = src[i] > rdr ? src[i] : rdr; } } The second method creates two separate sets of int arrays and iterates through them such that each array of one set is Blended with each array from the other set, like so: void CrossBlend() { int[][] set1 = new int[150][75000]; // we'll pretend this actually compiles int[][] set2 = new int[25][10000]; // we'll pretend this actually compiles for (int i1 = 0; i1 < set1.Length; i1++) { for (int i2 = 0; i2 < set2.Length; i2++) { Blend(set1[i1], set2[i2], 0); // or any offset, doesn't matter } } } First question: Since this approach is an obvious candidate for parallelization, is it intrinsically thread-safe? It seems like no, since I can conceive a scenario (unlikely, I think) where one thread's changes are lost because of a different thread's near-simultaneous operation. If no, would this: void Blend(int[] dest, int[] src, int offset) { lock (dest) { for (int i = 0; i < src.Length; i++) { int rdr = dest[i + offset]; dest[i + offset] = src[i] > rdr ? src[i] : rdr; } } } be an effective fix? Second question: If so, what would be the likely performance cost of using locks like this? I assume that with something like this, if a thread attempts to lock a destination array that is currently locked by another thread, the first thread would block until the lock was released instead of continuing to process something. Also, how much time does it actually take to acquire a lock? Nanosecond scale, or worse than that? Would this be a major issue in something like this? Third question: How would I best approach this problem in a multi-threaded way that would take advantage of multi-core processors (and this is based on the potentially wrong assumption that a multi-threaded solution would not speed up this operation on a single-core processor)? I'm guessing that I would want to have one thread running per core, but I don't know if that's true.

    Read the article
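
    On the lock questions above: an uncontended lock acquisition is typically in the tens of nanoseconds, but a contended one blocks the calling thread, so the usual answer to the third question is to partition the work so that no lock is needed at all -- give each worker exclusive ownership of its destination arrays. A minimal sketch of that idea follows, in Python rather than the question's C#, with illustrative names and worker count:

        from concurrent.futures import ThreadPoolExecutor

        def blend(dest, src, offset):
            # element-wise "keep the larger value", mirroring the C# loop
            for i, s in enumerate(src):
                if s > dest[i + offset]:
                    dest[i + offset] = s

        def cross_blend(set1, set2, workers=4):
            # each dest array in set1 is handled by exactly one task, so no two
            # workers ever write to the same array and blend needs no lock
            def blend_all_into(dest):
                for src in set2:
                    blend(dest, src, 0)
            with ThreadPoolExecutor(max_workers=workers) as pool:
                list(pool.map(blend_all_into, set1))

        # caveat: CPython's GIL limits the speed-up for pure-Python loops; the
        # point here is the data-partitioning pattern, not the raw numbers.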

  • "Could not register destruction callback" warn message leads to memory leaks?

    - by Séb
    Hello all, I'm in the exact same situation as this old question: http://stackoverflow.com/questions/2077558/warn-could-not-register-destruction-callback In short: I see a warning saying that a destruction callback could not be registered for some beans. My question is: since the beans whose destruction callback cannot be registered are two persistence beans, could this be the source of a memory leak? I am experiencing a leak in my app. Although the session timeout is set (to 30 minutes), my profiler shows me more instances of the Hibernate SessionImpl each time I take a thread dump. The number of instances of SessionImpl is exactly the number of times I tried to log in between two thread dumps. Thanks for your help...

    Read the article

  • Entity Relationship Diagramming

    - by Ben Aston
    I'd like to improve my understanding of cardinality constraints in ER diagrams. I have two entities: User and Location. But I want the relationship between these two entities to be many-to-many (a user can be in many locations and a location can have many users). To do this I need to introduce an association class, UserLocation. Is it correct to say I now have 3 entities? If I were to draw an ER diagram of the above, would I draw in the UserLocation entity, and would the cardinality look like this? User 1 ------ * UserLocation * ------ 1 Location

    Read the article
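
    One common way to read the question above: the association class becomes a third table whose composite primary key is built from the two parents' keys, which is exactly what the two one-to-many legs of the diagram describe. A sketch in SQLAlchemy, chosen purely for concreteness (the question itself is tool-agnostic):

        from sqlalchemy import Column, ForeignKey, Integer, String
        from sqlalchemy.orm import declarative_base

        Base = declarative_base()

        class User(Base):
            __tablename__ = "user"
            id = Column(Integer, primary_key=True)
            name = Column(String)

        class Location(Base):
            __tablename__ = "location"
            id = Column(Integer, primary_key=True)
            name = Column(String)

        class UserLocation(Base):
            # the "third entity": one row per (user, location) pairing,
            # identified by both parents' keys
            __tablename__ = "user_location"
            user_id = Column(Integer, ForeignKey("user.id"), primary_key=True)
            location_id = Column(Integer, ForeignKey("location.id"), primary_key=True)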

  • CALayer flickers when drawing a path

    - by Alexey
    I am using a CALayer to display a path via the drawLayer:inContext: delegate method, which resides in the view controller of the view that the layer belongs to. Each time the user moves their finger on the screen the path is updated and the layer is redrawn. However, the drawing doesn't keep up with the touches: there is always a slight lag in displaying the last two points of the path. It also flickers, but only while displaying the last two or three points again. If I just do the drawing in the view's drawRect, it works fine and the drawing is definitely fast enough. Does anyone know why it behaves like this? I suspect it is something to do with the layer buffering, but I couldn't find any documentation about it.

    Read the article

  • Entity Relationship Diagram - Relationship Strength?

    - by 01010011
    Hi, I am trying to figure out under what circumstances I should use weak (non-identifying) relationships (where the primary key of the related entity does not contain a primary key component of the parent entity), versus when I should use strong (identifying) relationships (where the primary key of the related entity contains a primary key component of the parent entity). For example, when designing an Entity Relationship Diagram, if I have two entities (e.g. book and purchaser), how do I know when to choose the solid crow's foot or the dashed crow's foot to connect the two entities? Any assistance will be appreciated. Thanks in advance.

    Read the article
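
    One way to make the distinction concrete, sketched in SQLAlchemy (the question is tool-agnostic, and the Purchase/Review entities here are made up for illustration): in an identifying (solid-line) relationship the child's primary key contains the parent's key, so a child row cannot be identified without its parent, whereas in a non-identifying (dashed-line) relationship the child has its own key and merely references the parent.

        from sqlalchemy import Column, Date, ForeignKey, Integer
        from sqlalchemy.orm import declarative_base

        Base = declarative_base()

        class Book(Base):
            __tablename__ = "book"
            id = Column(Integer, primary_key=True)

        class Purchaser(Base):
            __tablename__ = "purchaser"
            id = Column(Integer, primary_key=True)

        # identifying (solid crow's foot): the purchase is keyed by its parents
        class Purchase(Base):
            __tablename__ = "purchase"
            book_id = Column(Integer, ForeignKey("book.id"), primary_key=True)
            purchaser_id = Column(Integer, ForeignKey("purchaser.id"), primary_key=True)
            purchased_on = Column(Date, primary_key=True)

        # non-identifying (dashed crow's foot): the review has its own key and
        # only carries book_id as an ordinary foreign key
        class Review(Base):
            __tablename__ = "review"
            id = Column(Integer, primary_key=True)
            book_id = Column(Integer, ForeignKey("book.id"))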

  • How to give wife emergency access to logins, passwords, etc.?

    - by Torben Gundtofte-Bruun
    I'm the digital guru in my household. My wife is good with email and forum websites, but she trusts me with all our important digital stuff -- such as online banking and other things that require passwords, but also family photos and the plethora of other digital things in a modern home. We discuss relevant actions but it's always me that executes them. If I should get "hit by a bus" then my wife would be thoroughly stranded -- she would have no idea what digital stuff is where on our computer, how to access it, what online accounts we have, or what their login credentials are. It would also leave my many public presences (personal websites, email accounts, social networks, etc.) unresolved. To complicate things, I'm one of those people who don't use "password" as their password everywhere; I use a mix of SuperGenPass and LastPass, and also two-factor authentication whenever possible. I don't have much hope that she would find her way through a written explanation of all that in a stressful situation. I could just tell her that she should ask my tech-savvy twin brother and then entrust him with my LastPass master passphrase. I feel that would have a high chance of success, but it's inelegant and leaves my wife without control of the information. How can I ensure that my wife has access to my digital remains?

    Read the article

  • ActionController::MethodNotAllowed

    - by Lowgain
    I have a Rails model called 'audioclip'. I originally created a scaffold with a 'new' action, which I replaced with 'new_record' and 'new_upload', because there are two ways to attach audio to this model. Going to /audioclips/new_record doesn't work because it takes 'new_record' as if it were an id. Instead of changing this, I was trying to just create '/record_clip' and '/upload_clip' paths. So in my routes.rb I have: map.record_clip '/record_clip', :controller => 'audioclips', :action => 'new_record' map.upload_clip '/upload_clip', :controller => 'audioclips', :action => 'new_upload' When I navigate to /record_clip, I get ActionController::MethodNotAllowed Only get, head, post, put, and delete requests are allowed. I'm not extremely familiar with the inner workings of routing yet. What is the problem here? (If it helps, I have these two statements above map.resources :audioclips.)

    Read the article

  • What really happens when I use varchar(10) in the sqlite command-line shell?

    - by romandas
    I'm messing around with SQLite for the first time by working through some of the SQLite documentation. In particular, I'm using Command Line Shell For SQLite and the SoupToNuts SQLite Tutorial on Sourceforge. According to the SQLite datatype documentation, there are only 5 datatypes in SQLite. However, in the two tutorial documents above, I see the authors use commands such as create table tbl1(one varchar(10), two smallint); or create table t1 (t1key INTEGER PRIMARY KEY, data TEXT, num double, timeEnter DATE); which contain datatypes that aren't listed by SQLite, yet these commands work just fine. Additionally, when I ran .dump to see the SQL statements, these datatype specifications were preserved. So, what gives? Does SQLite keep a reference for any datatype specified in the SQL yet convert it behind the scenes to one of its 5 datatypes? Or is there something else I'm missing?

    Read the article
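
    A quick way to see what is happening, sketched with Python's built-in sqlite3 module: SQLite keeps the declared column types verbatim in the schema (which is why .dump preserves them), but only uses them to assign each column one of its five type affinities, and it neither enforces the (10) length nor restricts what can actually be stored.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE tbl1 (one VARCHAR(10), two SMALLINT)")

        # the declared types survive verbatim in the schema, as .dump shows
        print(con.execute("SELECT sql FROM sqlite_master WHERE name='tbl1'").fetchone()[0])

        # but they only determine affinity (VARCHAR(10) -> TEXT, SMALLINT -> INTEGER):
        # the length limit is not enforced, and a numeric-looking string inserted
        # into the INTEGER-affinity column is stored as an integer
        con.execute("INSERT INTO tbl1 VALUES ('far more than ten characters', '42')")
        print(con.execute("SELECT typeof(one), typeof(two) FROM tbl1").fetchone())
        # -> ('text', 'integer')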

  • HTTP Upload Problems

    - by jfoster
    We are running a marketplace on ColdFusion 8 and IIS with a widely geographically distributed user base and have been receiving complaints of issues with some HTTP uploads. Most of the complaints are coming from locations geographically distant from our main datacenter on the US east coast. I've attempted to upload the same 70MB file from a US west coast test server to both our main site and a backup running the same code on a different network route, and I saw the same issues fairly consistently in both places, so I've ruled out the code, the route, and internal network errors. I've also tested uploads using both the native cf upload tag and a third-party tool called SaFileUp. I saw the same issues with both upload tools, so I also don't think this is necessarily a ColdFusion problem. I don't have any problems uploading the test file from the east coast to other east coast servers, so I'm beginning to think that the distance between our users and our equipment is a factor. I've also found that smaller files (< 10MB) are more likely to succeed than large ones. I tried the test upload with both IE and FF and did notice a difference in the way the browsers seemed to handle packet errors. IE seemed to have a tough time continuing an upload after dropped / bad packets, whereas FF seemed to be able to gracefully resume an upload after experiencing packet problems. Has anyone experienced similar issues? Is there anything we can do on our side to make uploads more forgiving of packet loss, or resumable after an error? A different upload tool, etc.? Do we need upload servers in more than one location to shorten the network routes between clients and servers? Does anyone think that switching uploads to SSL will help (no layer-7 packet sniffing may lead to a smoother upload)? Thanks.

    Read the article

  • Is it worth learning Perl 6?

    - by Andres
    I have the opportunity to take a two-day class on Perl 6 with the Rakudo compiler. I don't want to start a religious war, but is it worth my time? Is there any reason to believe that Perl 6 will be practical in the real world within the next two years? Does anyone currently use it effectively? Update: I took the class and learned a lot. However, after day 1, my mind was a bit overwhelmed. There are tons of cool ideas in Perl 6, and it will be neat to see what filters up to other languages. Overall the experience was a positive use of my time, though I wasn't able to absorb as much on the second day. If it had been a three-day class it would have been unproductive, just because there is a limit to how much you can process in a short amount of time.

    Read the article

  • Compute average distance from point to line segment and line segment to line segment

    - by Fred
    Hi everyone, I'm searching for an algorithm to calculate the average distance between a point and a line segment in 3D. So given two points A(x1, y1, z1) and B(x2, y2, z2) that represent line segment AB, and a third point C(x3, y3, z3), what is the average distance from each point on AB to point C? I'm also interested in the average distance between two line segments. So given segments AB and CD, what is the average distance from each point on AB to the closest point on CD? I haven't had any luck with the web searches I've tried, so any suggestions would be appreciated. Thanks.

    Read the article
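
    One way to formalize the first part: write the points of AB as P(t) = A + t(B - A) for t in [0, 1], so the average distance to C is the integral below (a standard parametrization, written in LaTeX):

        \bar{d}(AB, C) = \int_0^1 \lVert A + t(B - A) - C \rVert \, dt
                       = \int_0^1 \sqrt{a t^2 + b t + c} \, dt,
        \qquad a = \lVert B - A \rVert^2, \quad
        b = 2\,(B - A) \cdot (A - C), \quad
        c = \lVert A - C \rVert^2.

    The integrand is the square root of a quadratic in t, which has a standard closed-form antiderivative, or it can simply be integrated numerically. For the segment-to-segment version as stated ("closest point on CD"), with Q(s) = C + s(D - C) the quantity becomes \int_0^1 \min_{0 \le s \le 1} \lVert P(t) - Q(s) \rVert \, dt, which is usually evaluated numerically by sampling t.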

  • SQL Join on a one-to-many relationship

    - by Harley
    Ok, here was my original question. Table one contains:

    ID | Name
    1  | Mary
    2  | John

    Table two contains:

    ID | Color
    1  | Red
    2  | Blue
    2  | Green
    2  | Black

    I want to end up with:

    ID | Name | Red | Blue | Green | Black
    1  | Mary | Y   | Y    |       |
    2  | John |     | Y    | Y     | Y

    It seems that because there are 11 unique values for color and 1000's upon 1000's of records in table one that there is no 'good' way to do this. So, two other questions. Is there an efficient way to get this result? I can then create a crosstab in my application to get the desired result.

    ID | Name | Color
    1  | Mary | Red
    1  | Mary | Blue
    2  | John | Blue
    2  | John | Green
    2  | John | Black

    If I wanted to limit the number of records returned, how could I do something like this? Where ((color='blue') AND (color<>'red' OR color<>'green')) So using the above example I would then get back:

    ID | Name | Color
    1  | Mary | Blue
    2  | John | Blue
    2  | John | Black

    I connect to Visual FoxPro tables via ADODB. Thanks!

    Read the article
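
    A sketch of the flat-result approach above, run against SQLite through Python's sqlite3 module for illustration (the SQL itself should carry over to the Visual FoxPro/ADODB side with at most minor dialect changes). The filter reads the second requirement as "keep only a chosen subset of colours", which matches the sample output:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE one (id INTEGER, name TEXT)")
        con.execute("CREATE TABLE two (id INTEGER, color TEXT)")
        con.executemany("INSERT INTO one VALUES (?, ?)", [(1, "Mary"), (2, "John")])
        con.executemany("INSERT INTO two VALUES (?, ?)",
                        [(1, "Red"), (1, "Blue"), (2, "Blue"), (2, "Green"), (2, "Black")])

        # flat ID | Name | Color result, one row per (person, colour); the crosstab
        # can then be pivoted in the application
        flat = con.execute("""
            SELECT o.id, o.name, t.color
            FROM one AS o JOIN two AS t ON t.id = o.id
            ORDER BY o.id, t.color""").fetchall()

        # limiting the colours returned -- one reading of the second requirement
        wanted = con.execute("""
            SELECT o.id, o.name, t.color
            FROM one AS o JOIN two AS t ON t.id = o.id
            WHERE t.color IN ('Blue', 'Black')
            ORDER BY o.id, t.color""").fetchall()

        print(flat)
        print(wanted)  # [(1, 'Mary', 'Blue'), (2, 'John', 'Black'), (2, 'John', 'Blue')]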

  • Style Switcher & Text Resizer Combined?

    - by Stephen
    Hi there, I've come across various style switchers that allow you to change the stylesheet (i.e. Light, Dark, High Contrast), and various text resizers that allow you to resize the text (usually with three A's: small, medium and large). However, I can't seem to find a single switcher/resizer combination that works well by allowing permutations of the two, i.e. so the user can choose a dark background with small text, or a dark background with large text, etc. I can only seem to get this working where the user can choose one or the other (large text or High Contrast, not a combination of the two). Any ideas on anything that may be suitable for this at all? Thanks, Stephen

    Read the article

  • Linux, static lib referring to other static lib within an executable

    - by andras
    Hello, I am creating an application which consists of two static libs and an executable. Let's call the two static libs libusefulclass.a and libcore.a, and the application myapp. libcore instantiates and uses the class defined in libusefulclass (let's call it UsefulClass). Now, if I link the application in the following way: g++ -m64 -Wl,-rpath,/usr/local/Trolltech/Qt-4.5.4/lib -o myapp src1.o src2.o srcN.o -lusefulclass -lcore the linker complains about the methods in libusefulclass not being found: undefined reference to `UsefulClass::foo()' etc. I found a workaround for this: if UsefulClass is also instantiated within the source files of the executable itself, the application links without any problems. My question is: is there a cleaner way to make libcore refer to methods defined in libusefulclass, or can static libs simply not be linked against each other? TIA. P.S.: In case it matters: the application is being developed in C++ using Qt, but I feel this is not a Qt problem, but a library problem in general.

    Read the article

  • UISplitViewController + TabBarController + iPad

    - by tek3
    Hi all, I am developing a tab-based iPad application in which, corresponding to each tab, I have to show a UISplitViewController. I have done this by adding two navigation controllers to my tabBarController and assigning a subclass of UISplitViewController as the root view controller of both navigation controllers. I also have to show both view controllers (Master and Detail) in both orientations (Portrait and Landscape). For this I have constructed a subclass of UISplitViewController in which I am overriding the willAnimateRotationToInterfaceOrientation method and setting the frame of both view controllers as demonstrated in this link. However, I am not able to set both view controllers correctly. If my app starts in Landscape mode everything displays fine, but if I open it in Portrait mode then the orientation of both view controllers changes. Sometimes the Master view occupies the entire screen, or sometimes both view controllers appear leaving a black line between them and the navigation bar. I have been banging my head against this problem for two days without any success. Kindly help.

    Read the article

  • IBM WebSphere vs Oracle Fusion

    - by Hal
    I have been asked to research the advantages/disadvantages of the two application servers, but am new to the space and am having a terrible time finding an unbiased comparison of the two platforms. I understand that this is a broad question and I hate that I can't give a very specific use case (other than that it will be an implementation in an organization without a full-time admin dedicated to its management, and it will be running in a mixed environment against JD Edwards/Oracle and SQL Server). Does anyone know of any (recently published) content that does a reasonable comparison, or can anyone offer any insight into which might be the better choice and why? Any help would be greatly appreciated. Best regards, Hal

    Read the article

  • Javascript problems?

    - by jiexi
    I've been staring at the source code of these two download pages for a while now and I can't seem to find the problem. I have two download pages, one where the javascript is working, one where it isn't. Working: http://justupload.it/v/lfd7 Not working: http://justupload.it/v/ljhv The working one allows me to rotate the image and reveal the comments box by clicking on the comments button. The non-working one does not let me do that stuff. Can anyone help me find the problem, or recommend a program to help me do it?

    Read the article

  • using Excel VBA, given the daily price of 50 stocks, choose 10 stocks such that they have the minimum correlation

    - by correl
    The high-level goal is to choose 10 stocks that have the lowest correlation among one another, out of a pool of 50, so that I can have a well-diversified portfolio. I have managed to write some VBA macro to download the past 3 years of daily price data from Yahoo Finance, and then compute the 50x50 correlation matrix (using the Correl function), using the daily close as the data. What I have tried so far is just some local-maximum heuristic:
    - For the two stocks that have the highest correlation with each other, remove one of them. Between the two, remove the one that has the higher average correlation with all the other stocks.
    - When I remove a stock from the pool, I just delete the corresponding row and column, to give a smaller matrix.
    - Repeat until I have just 10 stocks remaining (a 10x10 matrix).
    I was wondering if there is some algorithm that already solves such a problem and gives the optimum solution?

    Read the article
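
    A compact version of the elimination heuristic described above, written in Python with pandas purely for brevity (the assumption here is that prices is a DataFrame of daily closes, one column per ticker). Note it is still only a greedy local search: the exact problem -- pick the 10-of-50 subset minimising total pairwise correlation -- has roughly 10^10 candidate subsets and is usually attacked with integer programming or metaheuristics rather than enumeration.

        import pandas as pd

        def pick_least_correlated(prices: pd.DataFrame, keep: int = 10) -> list:
            corr = prices.corr().abs()
            while len(corr) > keep:
                c = corr.copy()
                for t in c.index:
                    c.loc[t, t] = 0.0          # ignore self-correlation
                a = c.max().idxmax()           # one member of the most correlated pair
                b = c[a].idxmax()              # its partner
                # drop whichever of the pair is more correlated with everything else
                drop = a if c[a].mean() > c[b].mean() else b
                corr = corr.drop(index=drop, columns=drop)
            return list(corr.columns)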

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times and am just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Big difference in time between i7 and Xeon? Etc., etc. Just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (pre-compiled headers, ccache, local gigabit connections between them, tons of RAM, etc.), so please just give advice on the best processor to use. And money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay EDIT: Although any advice IS welcome, please refrain from "Do this first" posts as we're not planning on skimping on things like SSDs and maxed-out RAM. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% a majority of the time, which makes me assume it is the bottleneck, even if you made everything else faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this, however every discussion I could find was primarily about gaming machines, which are obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin' fast (hopefully).

    Read the article

  • Client side html markdown conversion

    - by DNN
    Hello, I've been trying to create a client-side editor which allows the end user to create content in HTML or Markdown. The user has two tabs for switching between the two. I managed to find some JavaScript that converts Markdown to HTML, so if a user has been writing Markdown and switches to the HTML tab, the HTML equivalent is shown. I haven't been able to find a JavaScript library that converts HTML to Markdown, only a Python script. The Python script is obviously server-side. The tabs are just hyperlinks with some script attached. Is there any way I can convert the HTML to Markdown when the user clicks the tab? For reference, the system has been created in Python/Django. Thanks

    Read the article
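
    One workable pattern, given that the converter mentioned is a server-side Python script: keep the HTML-to-Markdown step on the server and have the tab-switch JavaScript call it asynchronously, swapping the response into the editor. A rough sketch as a Django view using the html2text package (the view name, field name, and CSRF handling are illustrative assumptions, not the poster's actual code):

        import html2text
        from django.http import HttpResponse
        from django.views.decorators.csrf import csrf_exempt
        from django.views.decorators.http import require_POST

        @csrf_exempt          # illustration only; use the normal CSRF token in practice
        @require_POST
        def html_to_markdown(request):
            # the client posts the editor's current HTML and swaps the returned
            # Markdown into the textarea when the Markdown tab is clicked
            html = request.POST.get("html", "")
            return HttpResponse(html2text.html2text(html), content_type="text/plain")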

  • Distributing points over a surface within boundaries

    - by vise
    I'm interested in a way (algorithm) of distributing a predefined number of points over a 4-sided surface like a square. The main issue is that each point has to have a minimum and a maximum proximity to the other points (random between two predefined values). Basically, the distance between any two points should not be closer than, let's say, 2, or further than 3. My code will be implemented in Ruby (the points are locations, the surface is a map), but any ideas or snippets are definitely welcome, as all my ideas involve a fair amount of brute force.

    Read the article
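
    A brute-force baseline, sketched in Python rather than Ruby, reading the constraint as "no existing point nearer than the minimum, and (after the first point) at least one existing point within the maximum" -- if literally every pair had to lie within the maximum, only a handful of points would ever fit. Plain rejection sampling like this slows down as the surface fills up; grid-accelerated Poisson-disc-style variants handle the minimum-distance half much faster.

        import math
        import random

        def distribute(n, width, height, d_min=2.0, d_max=3.0, max_tries=100_000):
            points = []
            tries = 0
            while len(points) < n and tries < max_tries:
                tries += 1
                p = (random.uniform(0, width), random.uniform(0, height))
                dists = [math.dist(p, q) for q in points]
                if any(d < d_min for d in dists):
                    continue              # too close to an existing point
                if points and min(dists) > d_max:
                    continue              # too far from every existing point
                points.append(p)
            return points

        print(distribute(20, 30, 30))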
