Search Results


  • Required Working Precision for the BBP Algorithm?

    - by brainfsck
    Hello, I'm looking to compute the nth digit of Pi in a low-memory environment. As I don't have decimals available to me, this integer-only BBP algorithm in Python has been a great starting point. I only need to calculate one digit of Pi at a time. How can I determine the lowest I can set D, the "number of digits of working precision"? D=4 gives me many correct digits, but a few digits will be off by one. For example, computing digit 393 with a precision of 4 gives me 0xafda, from which I extract the digit 0xa. However, the correct digit is 0xb. No matter how high I set D, it seems that testing a sufficient number of digits finds one where the formula returns an incorrect value. I've tried upping the precision when the digit is "close" to another, e.g. 0x3fff or 0x1000, but cannot find any good definition of "close"; for instance, calculating digit 9798 gives me 0xcde6, which is not very close to 0xd000, but the correct digit is 0xd. Can anyone help me figure out how much working precision is needed to calculate a given digit using this algorithm? Thank you.
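
    (For readers unfamiliar with the setup: below is a minimal, integer-only sketch of the kind of BBP digit extraction being discussed. It is my own illustration, not the asker's linked code; D is the working precision in hex digits, and the per-term truncation it introduces is exactly what makes the choice of D matter.)

        def pi_hex_digit(n, D=8):
            # Return the n-th hex digit of pi after the point (n >= 1), using an
            # integer-only BBP evaluation with D hex digits of working precision.
            scale = 16 ** D  # fixed point: D hex digits after the point

            def partial(j):
                # Fractional part of sum_k 16^(n-1-k) / (8k + j), in fixed point.
                total = 0
                for k in range(n):  # terms with non-negative powers of 16
                    denom = 8 * k + j
                    total = (total + pow(16, n - 1 - k, denom) * scale // denom) % scale
                for k in range(n, n + D):  # tail terms with negative powers of 16
                    denom = 8 * k + j
                    term = scale // (16 ** (k - n + 1) * denom)
                    if term == 0:
                        break
                    total += term
                return total % scale

            frac = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % scale
            return frac * 16 // scale  # most significant hex digit of the fraction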

    Read the article

  • Using Git with shared (symbolic link) directories

    - by chmike
    I'm working on a big Qt application with multiple widgets which are quite complex. One of these widgets is a webcam stream viewer. The application is organized so that each program module (i.e. widget) is stored in its own directory with a .pri file. All of these are stored in one main directory grouping all the widget directories. Next to this main project directory I also have application directories, let's say one for each widget. In these directories I have a symbolic link (an alias on Windows) to the module directory in the main project folder. Each application then has the necessary code to build a standalone application showing only that widget. So, for instance, I have a webcam viewer application, another one to control some devices, etc. This source code organization works well and allows me to develop and test the widgets in independent applications while sharing the code with the main application. Currently only the main project directory is under version control using Subversion. Now I would like to start using Git and would like to know whether this shared-directory model would work with it, or whether there is a better way to do it.

    Read the article

  • java distributed cache for low latency, high availability

    - by Shahbaz
    I've never used distributed caches/DHTs like memcached, JBoss Cache, Ehcache, etc. I'm wondering which, if any, is appropriate for my use. First, I'm not doing web applications (as most of these projects seem to be geared towards web apps). I write servers (Order Management Systems, actually) for financial trading firms. The servers themselves are not too complicated. They need to receive information (market data, orders, executions, etc.) and route it to its destination while possibly transforming some of these messages. I am looking at these products to solve the following problems:

    - A safe repository of the state of the server. I'd rather build the logic of my application as a bunch of transformers (similar to Apache Camel) and store the state in a 'safe' place.
    - This repository should be distributed: in case one of these data stores crashes, one or two more should be up, and I should be able to switch to them seamlessly.
    - This repository should be fast. Single-digit milliseconds count here; in other words, the systems which consume/process this data are automated systems, not humans clicking on links. This system needs to have high throughput and low latency. By sending my data outside the process, I am necessarily slowing performance, but I am trying to balance absolute raw speed and absolute protection of data.
    - This repository should be safe. Similar to the point about several on-line backups, this system needs to write data to disk (potentially more than one disk). I'd really like to stop writing my own 'transaction servers.'

    Am I correct to be looking into projects such as JBoss Cache, Ehcache, etc.? Thanks

    Read the article

  • Eclipse / Aptana File Sync Solutions

    - by Brad
    Our development team uses Eclipse + Aptana to do their web development work. Currently, most of them are mapping their Eclipse projects directly to the web server. I'd rather they create a local project and use that to sync to the web server project directory they are working on. The issue is that there aren't any good solutions, which is just appalling given the popularity of the two. The FileSync plugin for Eclipse is only one-way: if one developer makes a change to a file on the server, another dev isn't even notified and could overwrite the change. The File Transfer option in Aptana 2.0 doesn't support any sort of sync, just manually uploading/downloading files. The Sync option in Aptana 1.5.1 doesn't allow you to merge files when they are different; you can only update one or the other. It does, however, allow you to view a diff (but only if you right-click and select it), and in that diff you can't make any changes. I did find a way to allow files to be uploaded to their sync repositories in Aptana using Eclipse Monkey. However, it doesn't work if a user saves multiple files at once with 'Save All'. Additionally, there is no notification if a user opens a local file that has an updated copy on the server. I tried to add one using Eclipse Monkey, but I couldn't find any sort of listener in the Eclipse API to do it, and Eclipse Monkey documentation is few and far between. My only solution at this point is just to let them continue to map directly to the server, or ask them to do a manual download before they do any work (but again, what if someone uploads a change right after they do that?). Anyone have any ideas?

    Read the article

  • Change CultureInfo after button click

    - by Bart
    I have a multilingual ASP.NET site with a master page and Default.aspx. In the master page I put two buttons: one to click when I want to change the language to English, the second for Polish. I want the language to change after a click on these buttons (and all changes should appear automatically on the page). Here is the code for both:

        protected void EnglishButton_Click(object sender, ImageClickEventArgs e)
        {
            string selectedLanguage = "en-US";
            // Sets the cookie that is to be used by InitializeCulture() in the content page
            HttpCookie cookie = new HttpCookie("CultureInfo");
            cookie.Value = selectedLanguage;
            Response.Cookies.Add(cookie);
            Server.Transfer(Request.Path);
        }

        protected void PolishButton_Click(object sender, ImageClickEventArgs e)
        {
            string selectedLanguage = "pl-PL";
            // Sets the cookie that is to be used by InitializeCulture() in the content page
            HttpCookie cookie = new HttpCookie("CultureInfo");
            cookie.Value = selectedLanguage;
            Response.Cookies.Add(cookie);
            Server.Transfer(Request.Path);
        }

    In Default.aspx.cs I have InitializeCulture():

        protected override void InitializeCulture()
        {
            HttpCookie cookie = Request.Cookies["CultureInfo"];
            // If there is some value in the cookie
            if (cookie != null && cookie.Value != null)
            {
                Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(cookie.Value);
                Thread.CurrentThread.CurrentUICulture = new CultureInfo(cookie.Value);
            }
            else // If no value has been sent in the cookie, set the default language
            {
                Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture("pl-PL");
                Thread.CurrentThread.CurrentUICulture = new CultureInfo("pl-PL");
            }
            base.InitializeCulture();
        }

    I added resource files, and in one label I show the current culture:

        Welcome.Text = "Culture: " + System.Globalization.CultureInfo.CurrentCulture.ToString();

    The problem is that when I run this app and click e.g. the English button (the default language is Polish), there is no effect. If I click it a second time or press F5, the change is applied and the label shows "Culture: en-US". The same happens if I want to change the language back to Polish (it works only after a second click, or one click and a refresh). What am I doing wrong?

    Read the article

  • Using jQuery delay() with separate elements

    - by boomturn
    I want to fake a simple animation on page load by sequentially revealing several PNGs, each appearing instantly but with a unique, precisely timed delay before the next one. I'm using the latest version of jQuery (1.4.2 at the time of writing), which provides a delay() method. My first (braindead) impulse was:

        $('#image1').show('fast').delay(800);
        $('#image2').show('fast').delay(1200);
        $('#image3').show('fast');

    Of course all of them come up simultaneously. The jQuery doc examples refer to a single item being chained, not to handling multiple items. OK... try again using nested callbacks from show():

        $('#image1').show('fast', function() {
            $('#image2').show('fast', function() {
                $('#image3').show('fast', function() {
                });
            });
        });

    It looks ugly (imagine doing 10 of them; I also wonder what the processor hit would be), but hey, it works sequentially! Without the delay, though. So... chain delay() after show(), using a callback function from it (I'm not even sure it supports one):

        $('#image1').show('fast').delay(800, function() {
            $('#image2').show('fast').delay(1200, function() {
                $('#image3').show('fast');
            });
        });

    Hm. The opposite result of what I thought might happen: the callback works, but there is still no delay. So... should I be using some combination of queue and delay instead of nested callbacks? The jQuery doc examples for queue again use only one element. I'm guessing this would involve using a custom queue, but I'm uncertain how to implement it. Or maybe I'm missing a simple way? (For a simple problem!)

    Read the article

  • Stock management of assemblies and their sub-parts

    - by The Disintegrator
    I have to track the stock of individual parts and kits (assemblies) and can't find a satisfactory way of doing this. A sample, bogus and hyper-simplified database:

    Table prod:

        prodID 1, prodName Flux capacitor,  prodCost 900,             prodPrice 1350 (900*1.5),  prodStock 3
        prodID 2, prodName Mr Fusion,       prodCost 300,             prodPrice 600 (300*2),     prodStock 2
        prodID 3, prodName Time travel kit, prodCost 1650 (1350+300), prodPrice 2145 (1650*1.3), prodStock 2

    Table rels:

        relID 1, relSrc 1 (Flux capacitor), relType 4 (is a subpart of), relDst 3 (Time travel kit)
        relID 2, relSrc 2 (Mr Fusion),      relType 4 (is a subpart of), relDst 3 (Time travel kit)

    prodPrice: it's calculated based on the cost, but not in a linear way. In this example, for costs of 500 or less the markup is 200%; for costs of 500-1000 the markup is 150%; for costs of 1000+ the markup is 130%. That's why the time travel kit is much cheaper than the individual parts.

    prodStock: here is my problem. I can sell kits or the individual parts, so the stock of the kits is virtual.

    The problem when I buy: some providers sell me the Time travel kit as a whole (with one barcode) and some sell me the individual parts (with different barcodes), so when I load the stock I don't know how to record it.

    The problem when I sell: if I only sold kits, calculating the stock would be easy: "I have 3 Flux capacitors and 2 Mr Fusions, so I have 2 Time travel kits and a Flux capacitor." But I can sell kits or individual parts, so I have to track the stock of the individual parts and the possible kits at the same time (and I have to compensate for the sale price).

    Probably this is really simple, but I can't see a simple solution. To sum up: I have to find a way of tracking the stock, and the database/program has to do it (I can't ask the clerk to correct the stock). I'm using PHP + MySQL, but this is more a logical problem than a programming one.

    Read the article

  • R: dev.copy2pdf, multiple graphic devices to a single file, how to append to file?

    - by Timtico
    Hi everybody, I have a script that makes barplots. It opens a new window once 6 barplots have been written to the screen, and keeps opening new graphics devices whenever necessary. Depending on the input, this leaves me with a potentially large number of open windows (graphics devices) which I would like to write to a single PDF file. Considering my Perl background, I decided to iterate over the different graphics devices, printing them out one by one. I would like to keep appending to a single PDF file, but I do not know how to do this, or whether it is even possible. I would like to avoid looping in R. :) The code I use:

        for (i in 1:length(dev.list())) {
          dev.set(which = dev.list()[i])
          dev.copy2pdf(device = quartz, file = "/Users/Tim/Desktop/R/Filename.pdf")
        }

    However, this is not working, as it overwrites the file each time. Is there an append option in R, like there is in Perl, which would allow me to keep adding pages to the existing PDF file? Or is there a way to capture the contents of a graphics window in an object, keep adding new graphics devices to that object, and finally print the whole thing to a file? Other possible solutions I thought about: writing separate PDF files and combining them after creation (perhaps even possible in R, with the right libraries installed?), or copying the information from all the different windows into one big graphics device and then printing that to a PDF file.

    Read the article

  • Data sharing amongst JPA Entities

    - by Nick
    Setup: I have a simple web app that has a handful of forms, each on a separate page. These forms represent patient data. There is a one-to-one relationship between the patient and all of these forms/entities. Each form maps directly to a DB table and a JPA entity; maybe not the best architecture, but it works and is simple.

    Question: If form/entity A and form/entity B share a common chunk of data (one or more fields), what is the best way to handle that in JPA? I.e., if the data gets inserted via form A, I need it to show up in form B as existing data, and vice versa. In other words, it's logical for both entities to contain that data. I believe I will have to move the common data into its own entity and define the relationships that way, but I have tried many different ways and none gets me all the way, at least with basic JPA. Can this be done through pure JPA relationships, or will I have to write a bunch of code to make this happen manually? Not looking for code specifically, just the correct way to model this data. Thanks.

    Read the article

  • Which knowledge base/rule-based inference engine to choose for real-time runway incursion prevention

    - by Piligrim
    Hello, we are designing a project that would listen to the dialog between airport controllers and pilots to prevent runway incursions (e.g. one airplane taking off while another is crossing the runway). Our professor wants us to use Jena for the knowledge base (or anything else, but it should be some sort of rule-based engine). Inference is not the main thing in Jena, and there is not much documentation or many examples of this. So we need an engine that would take messages from pilots as input and output possible risks of incursion or any other error in the message protocol. It should be easy to write rules, and it should be easy to feed the engine real-time data. I imagine it something like this:

    - A pilot sends a message that he is landing on some runway; the system remembers that the runway is busy and no one should cross it.
    - If someone is given an instruction to cross this runway, the engine should fire a rule that something is wrong.
    - When the pilot sends a message that he has left the runway and is going to the gate, the system clears the runway and lets other planes use it.

    So is Jena, or Prolog, or any other rules engine suitable for this? I mean, it is suitable, but do we really need to use it? I asked the prof. if we could just keep the state of the runway and use some simple checks based on the messages we receive, and he said that it is not scalable and we need the knowledge base. Can someone give me any advice on which approach to use for this system? If you recommend a knowledge base, which one should we use? The project is written in Java. Thank you.
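
    (Purely as an illustration of the "keep the state of the runway and use some simple checks" idea mentioned above, and not a recommendation over a rule engine, a sketch in Python with hypothetical message fields might look like this:)

        # Hypothetical, minimal state tracking; the message fields are invented for illustration.
        class RunwayMonitor:
            def __init__(self):
                self.occupied = {}  # runway id -> aircraft currently using it

            def handle(self, msg):
                # msg is a dict such as {'aircraft': 'AF123', 'action': 'land', 'runway': '27L'}
                runway, action, aircraft = msg["runway"], msg["action"], msg["aircraft"]
                if action in ("land", "takeoff"):
                    if runway in self.occupied:
                        return "ALERT: %s already in use by %s" % (runway, self.occupied[runway])
                    self.occupied[runway] = aircraft
                elif action == "cross":
                    if runway in self.occupied:
                        return "ALERT: %s cleared to cross busy runway %s" % (aircraft, runway)
                elif action == "vacate":
                    self.occupied.pop(runway, None)
                return None  # no risk detected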

    Read the article

  • Windows Mobile phone for a business application - or a design way out of the scrolling problem

    - by Peter
    We have a business application written for Windows Mobile. It has people doing inventories. The fastest way to do this is to keep one hand on the product being counted and the other on the phone. The five-way pad lets you hit enter to accept the current inventory count (the norm) or move one left to remove one. This is very fast. If I am two short, I take a quick glance at the phone, move my eyes back to count the next row, and with my right hand hit "left left enter" on the five-way. Very fast. Our problem is that with i-mate out of business, we cannot find a modern cell phone that has a decent-sized screen and a five-way pad. Everyone is going to candy-bar phones and flick gestures. That works great if you have two hands free and are looking at the phone, but it is not efficient for us. Other than the $1,500 ruggedized units, is there anybody who makes a business-style Windows Mobile phone with a five-way pad? If not, any ideas on how to design the UI to make it easy to enter data like this quickly without a five-way?

    Read the article

  • iPhone - dequeueReusableCellWithIdentifier usage

    - by Jukurrpa
    Hi, I'm working on an iPhone app which has a pretty large UITableView with data taken from the web, so I'm trying to optimize its creation and usage. I found out that dequeueReusableCellWithIdentifier is pretty useful, but after seeing many source codes using it, I'm wondering if the way I use this function is the right one. Here is what people usually do:

        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
        if (cell == nil) {
            cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:@"Cell"];
        }
        // Add elements to the cell
        return cell;

    And here is the way I did it:

        // The cell row
        NSString *identifier = [NSString stringWithFormat:@"Cell %d", indexPath.row];
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
        if (cell != nil)
            return cell;
        cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:identifier];
        // Add elements to the cell
        return cell;

    The difference is that people use the same identifier for every cell, so dequeuing one only avoids allocating a new one. For me, the point of queuing was to give each cell a unique identifier, so when the app asks for a cell it has already displayed, neither allocation nor element adding has to be done. In the end, I don't know which is best: the "common" method caps the table's memory usage at the exact number of cells it displays, whilst the method I use seems to favor speed, as it keeps all calculated cells, but can cause large memory consumption (unless there's an inner limit to the queue). Am I wrong to use it this way? Or is it just up to the developer, depending on his needs?

    Read the article

  • How to know when StatusBar size changed (iPhone)

    - by JOM
    I have a UITabBarController with 2 tabs. One resizes just fine when the status bar size changes (the emulator's "Toggle In-Call Status Bar" menu item). The other one doesn't. The problematic tab item contains a static view, which dynamically loads one view or another depending on certain things. While getting this setup working I discovered that the main tab view did NOT automagically send e.g. viewWillAppear and viewWillDisappear messages to my dynamic subviews. Apple docs explained this was because dynamically added views are not recognized by the system.

        @interface MyTabViewController : UIViewController {
            UIView *mainView;
            FirstViewController *aController;
            SecondViewController *bController;
        }

        ...

        if (index == 0) {
            self.aController = [[FirstViewController alloc] initWithNibName:@"FirstViewController" bundle:nil];
            [self.mainView addSubview:aController.view];
            [self.aController viewWillAppear:YES];
        }

    How can I get the status bar size-changed event into my dynamic subviews? "didChangeStatusBarFrame" doesn't work, as documented elsewhere.

    Read the article

  • perl multiple tasks problem

    - by Alice Wozownik
    I have finished my earlier multithreaded program that uses Perl threads, and it works on my system. The problem is that on some systems it needs to run on, thread support is not compiled into Perl and I cannot install additional packages. I therefore need to use something other than threads, and I am moving my code to using fork(). This works on my Windows system for starting the subtasks. A few problems:

    1. How do I determine when a child process exits? I created new threads when the thread count was below a certain value, so I need to keep track of how many are running. For processes, how do I know when one exits, so I can keep track of how many exist at a time, incrementing a counter when one is created and decrementing it when one exits?

    2. Is file I/O using handles obtained with open() by the parent process safe in the child process? I need to append to a file from each of the child processes; is this safe on Unix as well?

    3. Is there any alternative to fork and threads? I tried Parallel::ForkManager, but it isn't installed on my system (use Parallel::ForkManager; gave an error), and I absolutely require that my Perl script work on all Unix/Windows systems without installing any additional modules.
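
    (For illustration, the bookkeeping described in point 1 maps directly onto the POSIX fork/waitpid calls that Perl also exposes; the following is a small Python sketch of the same counting pattern, Unix-only and not the asker's code.)

        import os

        MAX_CHILDREN = 4
        active = 0  # how many child processes are currently running

        def reap(block):
            # Collect finished children, decrementing the counter for each one.
            global active
            while active > 0:
                pid, status = os.waitpid(-1, 0 if block else os.WNOHANG)
                if pid == 0:       # nothing has exited yet (non-blocking case)
                    break
                active -= 1
                if block:          # blocking call: one reaped child is enough
                    break

        for task in range(10):
            while active >= MAX_CHILDREN:
                reap(block=True)   # wait until at least one child exits
            if os.fork() == 0:     # child process
                # ... do the work for this task ...
                os._exit(0)
            active += 1
            reap(block=False)      # opportunistically reap without blocking

        while active > 0:          # wait for the remaining children
            reap(block=True)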

    Read the article

  • In-document schema declarations and lxml

    - by shylent
    As per the official documentation of lxml, if one wants to validate an XML document against an XML Schema document, one has to:

    1. construct the XMLSchema object (basically, parse the schema document)
    2. construct the XMLParser, passing the XMLSchema object as its schema argument
    3. parse the actual XML document (instance document) using the constructed parser

    There can be variations, but the essence is pretty much the same no matter how you do it: the schema is specified 'externally' (as opposed to specifying it inside the actual XML document). If you follow this procedure, the validation occurs, sure enough, but if I understand it correctly, it completely ignores the whole idea of the schemaLocation and noNamespaceSchemaLocation attributes from xsi. This introduces a whole bunch of limitations, starting with the fact that you have to manage the instance-to-schema relation all by yourself (either store it externally or write some hack to retrieve the schema location from the root element of the instance document), you cannot validate the document using multiple schemata (say, when each schema governs its own namespace), and so on. So the question is: maybe I am missing something completely trivial, or doing it wrong? Or are my statements about lxml's limitations regarding schema validation true? To recap, I'd like to be able to:

    - have the parser use the schema location declarations in the instance document at parse/validation time
    - use multiple schemata to validate an XML document
    - declare schema locations on non-root elements (not of extreme importance)

    Maybe I should look for a different library? Although that would be a real shame; lxml is the de facto XML processing library for Python and is regarded by everyone as the best one in terms of performance/features/convenience (and rightfully so, to a certain extent).
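
    (For reference, the 'external schema' procedure described in the numbered steps above looks roughly like this in lxml; the file names are made up for illustration, and a validation failure surfaces as an error at parse time.)

        from lxml import etree

        schema = etree.XMLSchema(etree.parse("schema.xsd"))  # 1. construct the XMLSchema object
        parser = etree.XMLParser(schema=schema)              # 2. pass it to the parser as its schema argument
        doc = etree.parse("instance.xml", parser)            # 3. parsing the instance document now validates it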

    Read the article

  • WPF DataTemplate - Overlay

    - by David Ward
    I have a class that I need to provide the DataTemplate for. Currently I have two DataTemplates, one for when Enabled == true and one for when Enabled == false. The DataTemplate for the class is actually the one below, which changes the template used based on a trigger:

        <DataTemplate x:Key="ActionNodeTemplateSelector">
            <ContentPresenter Content="{Binding}" Name="cp" />
            <DataTemplate.Triggers>
                <DataTrigger Binding="{Binding Enabled}" Value="True">
                    <Setter TargetName="cp" Property="ContentTemplate" Value="{StaticResource ActionNodeTemplate}" />
                </DataTrigger>
                <DataTrigger Binding="{Binding Enabled}" Value="False">
                    <Setter TargetName="cp" Property="ContentTemplate" Value="{StaticResource ActionNodeDisabledTemplate}" />
                </DataTrigger>
            </DataTemplate.Triggers>
        </DataTemplate>

    This all works well. However, now I want to also display an image overlay if the "Completed" property is true, and a different image if it is incomplete. I could easily carry on using my approach and trigger on the Completed property as well, but this would double the number of templates and feels wrong. Is there a way I could have a trigger on my DataTemplate that would allow me to overlay the correct image over the existing template, which is chosen based on the Enabled property?

    Read the article

  • Using the MVVM Light Toolkit to make Blendable applications

    - by Dave
    A while ago, I posted a question regarding switching between a Blend-authored GUI and a Visual Studio-authored one. I got it to work okay by adding my Blend project to my VS2008 project, then changing the startup application and recompiling. This would result in two applications that had completely different GUIs, yet used the exact same ViewModel and Model code. I was pretty happy with that. Now that I've learned about Laurent Bugnion's MVVM Light Toolkit, I would really like to leverage his efforts to make this process of supporting multiple GUIs for the same backend code possible. The question is, does the toolkit facilitate this, or am I stuck doing it my previous way? I've watched his video from MIX10 and have read some of the articles about it online. However, I've yet to see anything that indicates there is a clean way to let a user dynamically switch GUIs on the fly by loading a different DLL. There are MVVM templates for VS2008 and Blend 3, but am I supposed to create both types of projects for my application and then reference specific files from my VS2008 solution?

    UPDATE: I re-read some information on Laurent's site and seemed to have forgotten that the whole point of the template was to allow the same solution to be opened in VS2008 and Blend. So anyhow, with this new perspective, it looks like the templates are actually intended for a single GUI, most likely designed entirely in Blend (with the convenience of debugging through VS2008), which can then use two different ViewModels: one for design time and one for runtime. So it seems to me that the answer to my question is that I want to use a combination of my previous solution and the MVVM Light Toolkit. The former will allow me to make multiple, distinct GUIs around my core code, while the latter will make designing fancy GUIs in Blend easier through the use of a design-time ViewModel. Can anyone comment on this?

    Read the article

  • Adapting existing HTML/Javascript model to Titanium's latest release (v 0.9)

    - by Alan Neal
    In pre-0.9 versions of Titanium, one could simply specify an .html file (local or remote) in the tiapp.xml file and interact with it in the same manner as one would on a website. As of version 0.9, that is no longer the case; one creates the entire app dynamically. Unfortunately, this broke my previous implementation, and, other than an updated Kitchen Sink, much of the new model and its API calls are not covered in the documentation (e.g. createLabel). So, my question is this: what are the simplest steps for re-creating the previous behaviour (knowingly forgoing some of the advantages of Titanium's latest approach if necessary)? My previous implementation worked exactly as it does on the website. The website has a single index.html file with no content other than links to JavaScript and style files. The document body's onload event called the first JavaScript function (located in the main script) and, from that point forth, the entire content was dynamically created. How can I set up the latest version of Titanium so that I am poised to do exactly the same thing?

    BTW: Whereas I previously had the choice of keeping the files local or remote, I don't believe that remote access (e.g. simply using the webView widget to point to the website) is viable. That's because pages displayed via the webView do not have access to most of the API. Since the iPhone and Safari browsers do not support the file input type, the only means of uploading files (something my app requires) is calling Titanium's function. Thanks in advance.

    Read the article

  • should this database table be normalized?

    - by oo
    I have taken over a database that stores fitness information, and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables. Today there is one table called workouts that has the following fields:

        id, exercise_id, reps, weight, date, person_id

    So if I did 2 sets of 3 different exercises on one day, I would have 6 records in that table for that day. For example:

        id, exercise_id, reps, weight, date,     person_id
        1,  1,           10,   100,    1/1/2010, 10
        2,  1,           10,   100,    1/1/2010, 10
        3,  1,           10,   100,    1/1/2010, 10
        4,  2,           10,   100,    1/1/2010, 10
        5,  2,           10,   100,    1/1/2010, 10
        6,  2,           10,   100,    1/1/2010, 10

    So the question is: given that there is some redundant data (date, person_id, exercise_id) in multiple records, should this be normalized into three tables?

        WorkoutSummary:
        - id
        - date
        - person_id

        WorkoutExercise:
        - id
        - workout_id (foreign key into WorkoutSummary)
        - exercise_id

        WorkoutSets:
        - id
        - workout_exercise_id (foreign key into WorkoutExercise)
        - reps
        - weight

    I would guess the downside is that queries would be slower after this refactoring, as we would now need to join 3 tables to do the same query that had no joins before. The benefit of the refactoring is that it allows us in the future to add new fields at the workout-summary level or the exercise level without adding more duplication. Any feedback on this debate?

    Read the article

  • Winforms: How to speed up Invalidate()?

    - by Pedery
    I'm developing a retained-mode drawing application in GDI+. The application can draw simple shapes to a canvas and perform basic editing. The math that does this is optimized to the last byte and is not an issue. I'm drawing on a panel that uses the built-in ControlStyles.DoubleBuffer. Now, my problem arises if I run my app maximized on a big monitor (HD in my case). If I try to draw a line from one corner of the (big) canvas to the diagonally opposite one, it starts to lag and CPU usage goes way up. Each graphical object in my app has a bounding box. Thus, when I invalidate the bounding box of a line that goes from one corner of the maximized app to the diagonally opposite one, that bounding box is virtually as big as the canvas. When a user is drawing a line, this invalidation of the bounding box happens on the MouseMove event, and there is a clearly visible lag. This lag also exists if the line is the only object on the canvas. I've tried to optimize this in many ways. If I draw a shorter line, the CPU usage and the lag go down. If I remove the Invalidate() call and keep all other code, the app is quick. If I use a Region (that only spans the figure) to invalidate instead of the bounding box, it is just as slow. If I split the bounding box into a range of smaller boxes that lie back to back, thus reducing the invalidated area, no visible performance gain can be seen. Thus I'm at a loss here. How can I speed up the invalidation? On a side note, both Paint.NET and MSPaint suffer from the same shortcomings. Word and PowerPoint, however, seem able to paint a line as described above with no lag and no CPU load at all. Thus it's possible to achieve the desired results; the question is how?

    Read the article

  • mtl, transformers, monads-fd, monadLib, and the paradox of choice

    - by yairchu
    Hackage has several packages for monad transformers:

    - mtl: Monad transformer library
    - transformers: Concrete functor and monad transformers
    - monads-fd: Monad classes, using functional dependencies
    - monads-tf: Monad classes, using type families
    - monadLib: A collection of monad transformers
    - mtl-tf: Monad transformer library using type families
    - mmtl: Modular Monad transformer library
    - mtlx: Monad transformer library with type indexes, providing 'free' copies
    - compose-trans: Composable monad transformers

    (and maybe I missed some)

    Which one shall we use? mtl is the one in the Haskell Platform, but I keep hearing on reddit that it's uncool. But what's bad about choice anyway, isn't it just a good thing? Well, I saw how, for example, the authors of data-accessor had to make all of these just to cater to the popular choices:

    - data-accessor-monadLib library: Accessor functions for monadLib's monads
    - data-accessor-monads-fd library: Use Accessor to access state in monads-fd State monad class
    - data-accessor-monads-tf library: Use Accessor to access state in monads-tf State monad type family
    - data-accessor-mtl library: Use Accessor to access state in mtl State monad class
    - data-accessor-transformers library: Use Accessor to access state in transformers State monad

    I imagine that if this goes on and, for example, several competing Arrow packages evolve, we might see something like: spoonklink-arrows-transformers, spoonklink-arrows-monadLib, spoonklink-tfArrows-transformers, spoonklink-tfArrows-monadLib, ... And then I worry that if spoonklink gets forked, Hackage will run out of disk space. :)

    Questions:

    - Why are there so many monad transformer packages?
    - Why is mtl [considered] uncool?
    - What are the key differences?
    - Most of these seemingly competing packages were written by Andy Gill and are maintained by Ross Paterson. Does this mean that these packages are not competing but rather work together in some way? And do Andy and Ross consider any of their own packages obsolete?
    - Which one should you and I use?

    Read the article

  • Namespaces combined with TFS / Source Control explanation

    - by Christian
    As an ISV company, we are slowly running into the "structure your code" issue. We mainly develop using Visual Studio 2008 and 2010 RC, in C# and VB.NET. We have our own Team Foundation Server, and of course we use source control. When we started developing based on the .NET Framework, we also began using namespaces in a primitive way. Over time we "became more mature": we learned to use namespaces and structured the code more and more, but only within the scope of each solution. Now we have about 100 different projects and solutions in our source control. We realized that many of our own classes are coded very redundantly; a Write2Log, GetExtensionFromFilename or similar function can be found anywhere between one and 20 times across all these projects and solutions. So my idea is: create a single root folder in source control and start our own namespace hierarchy below that root; let's name it CompanyName. A Write2Log class would then be found in CompanyName.System.Logging. Whenever we create a new solution or project and we need a log function, we "namespace" that solution and place it accordingly somewhere below the CompanyName root folder. To get the logging functionality we then import (add) the existing project to the solution. Those 20+ projects/solutions with the Write2Log class can then be maintained in one single place. To my questions:

    - Is that a good idea, this philosophy of namespaces combined with source control?
    - There must be a good book explaining namespaces combined with source control, yes? Any hints/directions/tips?
    - How do you manage your 50+ projects?

    Read the article

  • Extract wrong data from a frame in C?

    - by ipkiss
    I am writing a program that reads data from the serial port on Linux. The data are sent by another device with the following frame format:

        | Start | Command | Data           | CRC | End  |
        | 0x02  | 0x41    | (0-127 octets) |     | 0x03 |

    The Data field contains up to 127 octets as shown; octets 1-2 contain one type of data, and octets 3-4 contain another. I need to get these data. Because in C one byte can only hold one character, and the start field of the frame is 0x02, which means STX, which is 3 characters. So, in order to test my program, on the sender side I construct an array as the frame formatted above, like:

        char frame[254];
        frame[0] = 0x02; // starting field
        frame[1] = 0x41; // command field, which is character 'A'
        // ... and so on ...

    And then, on the receiver side, I take out the fields like:

        char result[254];
        // read data
        read(result);
        printf("command = %c", result[1]); // get the command field of the frame
        // get the other fields' values

    The command field value (result[1]) is not the character 'A'. I think this is because the first field value of the frame is 0x02 (STX), occupying the first 3 places in the array frame and leading to the wrong results on the receiver side. How can I correct the issue, or am I doing something wrong on the sender side? Thanks all.

    Related questions:
    http://stackoverflow.com/questions/2500567/parse-and-read-data-frame-in-c
    http://stackoverflow.com/questions/2531779/clear-data-at-serial-port-in-linux-in-c

    Read the article

  • SSIS Lookup with Lookup Component Vs Script Component.

    - by Nev_Rahd
    Hello, I need to load dimensions from EDW tables (which maintain historical records) of a key-value-parameter type. My scenario is OK if I get records in the EDW like this:

        Key1  Key2  Code  Value  EffectiveDate        EndDate     CurrentFlag
        100   555   01    AAA    2010-01-01 11.00.00  9999-12-31  Y
        100   555   02    BBB    2010-01-01 11.00.00  9999-12-31  Y

    This needs to be loaded into the DM by pivoting it, as the Key1 and Key2 combination makes the natural key for the DM:

        SK  NK       01   02   EffectiveDate        EndDate     CurrentFlag
        1   100-555  AAA  BBB  2010-01-01 11.00.00  9999-12-31  Y

    My SSIS package does all this pivoting well. It looks up the incoming NK in the DIM: if it is new, it inserts it; otherwise, with a further lookup on the effective date, it determines whether the incoming record for the same natural key has any new (changed) attribute. If so, it updates the current record by setting its end date and inserts the new one with the new attribute value, pulling the most recent record's values for the other attributes.

    My problem is that if the same natural key comes twice with the same attribute in a single extract, my first lookup, which is on the natural key, lets both records pass and tries to insert both, which fails. If I take distinct records on the NK, the second one is not picked up and I need to run the package again.

    So my question is: how can I configure the lookup (or use an alternative approach) to handle this scenario when the same NK comes twice in a single extract? I should be able to insert the first record if it does not exist in the Dim table, and for the second one I should be able to apply the changes as an update, with reference to the record inserted above. Not sure this makes sense, what I am trying to explain. I will attach a screenshot once I am back at my work desk (on Monday). Thanks

    Read the article

  • Versioning SharePoint binary Workflow ASPX task forms

    - by Janis Veinbergs
    Hello. As noted by some developers, workflow versioning is something of a headache in SharePoint. I'm wondering, is there a way I can version my ASPX forms? For sure, I can version the code-behind assemblies, but what if the markup changes for any of my files in the LAYOUTS folder? Is there versioning available for files, or do I have to choose a new file name for my form? Sorry, I should have been more specific: yes, I have the files under version control (I can restore previous versions etc.), but I'm not talking about that kind of version control. When deploying a new workflow version, I must not delete the old one, because it is still running on many items in SharePoint; rather, as noted in previous links, I deploy the new one alongside it so I don't break the execution of running workflows. But workflows will still break if I don't preserve the old ASPX forms that users use to interact with them. So I must ensure that:

    - assemblies with old version numbers, used by the old workflow, still exist (this one is OK; I just changed the assembly version number and deployed to the GAC);
    - the old workflow still uses the old ASPX form users interact with, while the new workflow version should use a new ASPX form with more options (how do I do this?).

    Read the article
