Search Results

Search found 22065 results on 883 pages for 'performance testing'.


  • How to control virtual memory management in Linux?

    - by chmike
    I'm writing a program that uses an mmap'd file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts over the network, so the total amount of data written into each block is not known in advance: most of the time it is only 2 MB, but in some cases it can be 20 MB or more. The data doesn't stay in the buffer for long; 90% of it is deleted after less than a second, and the rest is transmitted to another host. I would like to know if there is a way to tell the virtual memory manager that pages are no longer dirty once their data has been deleted. Should I mmap and munmap each block as it is used and released to control the virtual memory, and what would the overhead of doing this be? Also, some colleagues have expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.
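
    A minimal sketch of one candidate approach, assuming Linux and page-aligned blocks (madvise is a guess here, not something the post confirms):

        #include <sys/mman.h>

        #define BLOCK_SIZE (64UL * 1024 * 1024)

        /* Call when a block's data has been consumed. For anonymous/private
           mappings MADV_DONTNEED drops dirty pages outright and the next
           touch sees zero-filled pages; for a file-backed mapping,
           MADV_REMOVE (where supported) punches a hole instead of letting
           the pages be written back. */
        void release_block(void *block)
        {
            madvise(block, BLOCK_SIZE, MADV_DONTNEED);
        }

    Compared with munmap/mmap per block, this keeps the address range mapped and avoids the page-table teardown and TLB shootdown on every cycle.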

    Read the article

  • New record may be written twice in clustered index structure

    - by Cupidvogel
    As per the article at Microsoft, under the "Test 1: INSERT Performance" section, it is written that "For the table with the clustered index, only a single write operation is required since the leaf nodes of the clustered index are data pages (as explained in the section Clustered Indexes and Heaps), whereas for the table with the nonclustered index, two write operations are required: one for the entry into the index B-tree and another for the insert of the data itself." I don't think that is necessarily true. Clustered indexes are implemented through B+ tree structures, right? If you look at this article, which gives a simple example of inserting into a B+ tree, we can see that when 8 is initially inserted, it is written only once, but then when 5 comes in, 8 is written to the root node as well (thus written twice, albeit not at the time of its insertion). Also, when 8 comes in next, it is written twice, once at the root and then at the leaf. So wouldn't it be more accurate to say that the number of rewrites for a clustered index is much lower than for a nonclustered index structure (where a second write must occur every time), rather than saying that a rewrite doesn't occur with a clustered index at all?

    Read the article

  • Efficient Clojure workflow?

    - by Alex B
    I am developing a pet project with Clojure, but wonder if I can speed up my workflow a bit. My current workflow (with Compojure) is:

    1. Start Swank with lein swank.
    2. Go to Emacs and connect with M-x slime-connect.
    3. Load all existing source files one by one. This also starts a Jetty server and the application.
    4. Write some code in the REPL.
    5. When satisfied with experiments, write a full version of the construct I had in mind and eval it (C-c C-c).
    6. Switch the REPL to the namespace where this construct resides and test it.
    7. Switch to the browser and reload the browser tab with the affected page.
    8. Tweak the code, eval it, check in the browser.
    9. Repeat any of the above.

    There are a number of annoyances with it:

    - I have to switch between Emacs and the browser (or browsers, if I am testing things like templating with multiple browsers) all the time. Is there a common idiom to automate this? I used to have a JavaScript bit that reloaded the page continuously, but it's of limited utility, obviously, when I have to interact with the page for more than a few seconds.
    - My JVM instance becomes "dirty" when I experiment and write test functions. Basically namespaces become polluted, especially if I'm refactoring and moving functions between namespaces. This can lead to symbol collisions and then I need to restart Swank. Can I undef a symbol?
    - I load all source files one by one (C-c C-k) upon restarting Swank. I suspect I'm doing it all wrong.
    - Switching between the REPL and the file editor can be a bit irritating, especially when I have a lot of Emacs tabs open alongside the browser(s).

    I'm looking for ways to improve the above points and the entire workflow in general, so I'd appreciate it if you'd share yours. P.S. I have also used VimClojure before, so VimClojure-based workflows are welcome too.
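
    On the "can I undef a symbol" point, a minimal sketch (the namespace and symbol names are examples, not from the thread):

        ;; remove a stale mapping from a namespace:
        (ns-unmap 'my.project.core 'old-helper)

        ;; reload a namespace and everything it depends on from the REPL,
        ;; instead of C-c C-k per buffer:
        (require 'my.project.core :reload-all)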

    Read the article

  • Why do the outputs differ when I run this code using NetBeans 6.8 and Eclipse?

    - by Vimal Basdeo
    I get different results when I run the following code in Eclipse and in NetBeans 6.8. I want to see the available COM ports on my computer. When running in Eclipse it returns all available COM ports, but when running it in NetBeans, it does not seem to find any ports.

        public static void test() {
            Enumeration lists = CommPortIdentifier.getPortIdentifiers();
            System.out.println(lists.hasMoreElements());
            while (lists.hasMoreElements()) {
                CommPortIdentifier cn = (CommPortIdentifier) lists.nextElement();
                if (CommPortIdentifier.PORT_SERIAL == cn.getPortType()) {
                    System.out.println("Name is serail portzzzz " + cn.getName() + " Owned status " + cn.isCurrentlyOwned());
                    try {
                        SerialPort port1 = (SerialPort) cn.open("ComControl", 800000);
                        port1.setSerialPortParams(9600, SerialPort.DATABITS_8,
                                SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);
                        System.out.println("Before get stream");
                        OutputStream out = port1.getOutputStream();
                        InputStream input = port1.getInputStream();
                        System.out.println("Before write");
                        out.write("AT".getBytes());
                        System.out.println("After write");
                        int sample = 0;
                        //while ((sample = input.read()) != -1) {
                        System.out.println("Before read");
                        //System.out.println(input.read() + "TEsting ");
                        //}
                        System.out.println("After read");
                        System.out.println("Receive timeout is " + port1.getReceiveTimeout());
                    } catch (Exception e) {
                        System.err.println(e.getMessage());
                    }
                } else {
                    System.out.println("Name is parallel portzzzz " + cn.getName()
                            + " Owned status " + cn.isCurrentlyOwned() + cn.getPortType() + " ");
                }
            }
        }

    Output with NetBeans:

        false

    Output with Eclipse:

        true
        Name is serail portzzzz COM1 Owned status false
        Before get stream
        Before write
        After write
        Before read
        After read
        Receive timeout is -1
        Name is serail portzzzz COM2 Owned status false
        Before get stream
        Before write
        After write
        Before read
        After read
        Receive timeout is -1
        Name is parallel portzzzz LPT1 Owned status false2
        Name is parallel portzzzz LPT2 Owned status false2
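
    One hedged guess, since the code itself is IDE-agnostic: javax.comm/RXTX discovers ports through a native library (plus, for javax.comm, a properties file), and the two IDEs launch the JVM with different working directories and options. If the natives aren't reachable, getPortIdentifiers() silently returns an empty enumeration. Something like the following VM option (the path is only an example) in the NetBeans project's Run settings may be what's missing:

        -Djava.library.path=C:\rxtx\bin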

    Read the article

  • Using JavaScript and PHP together

    - by EmmyS
    I have a PHP form that needs some very simple validation on submit. I'd rather do this validation client-side, as there's quite a bit of server-side validation that happens when writing the form values to the database. So I just want to call a JavaScript function onsubmit to compare the values in the two password fields. This is what I've got:

        function validate(form) {
            var password = form.password.value;
            var password2 = form.password2.value;
            alert("password:" + password + " password2:" + password2);
            if (password != password2) {
                alert("not equal");
                document.getElementByID("passwordError").style.display = "inline";
                return false;
            }
            alert("equal");
            return true;
        }

    The idea is that a default-hidden div containing an error message is displayed if the two passwords don't match. The alerts are just there to show the values of password and password2, and then whether they match (they won't be in the production code). I'm using an input type=submit button and calling the function in the form tag:

        <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post" onsubmit="return validate(this);">

    Everything alerts as expected when entering non-matching values. I would have hoped (and assumed, based on past use) that if the function returned false, the actual submit would not occur. And yet it does: I enter non-matching values in the password fields, the alerts clearly show the values and the "not equal" result, but the form action still runs and tries to write to my database. I'm pretty new at PHP; is there something about it that prevents combining it with JavaScript this way? Would it be better to use an input type=button and call submit() from the function itself if it returns true?
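
    One thing worth checking: the symptom matches an exception being thrown inside the handler before return false is reached, and there is a candidate on the error path, since DOM method names are case-sensitive:

        // getElementByID is not a function; the call throws a TypeError, the
        // handler aborts before "return false", and the form submits anyway.
        // The correct spelling is:
        document.getElementById("passwordError").style.display = "inline";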

    Read the article

  • Programmatically enable/disable MenuBar buttons in Flex 4

    - by Hamid
    I have the following XML in my Flex 4 (AIR) project, which defines the start of my menu interface:

        <mx:MenuBar x="0" y="0" width="100%" id="myMenuBar" labelField="@label" itemClick="menuChange(event)">
            <mx:dataProvider>
                <s:XMLListCollection>
                    <fx:XMLList xmlns="">
                        <menu label="File">
                            <item label="New"/>
                            <item label="Load"/>
                            <item label="Save" enabled="false"/>
                        </menu>
                        <menu label="Help">
                            <item label="About"/>
                        </menu>
                    </fx:XMLList>
                </s:XMLListCollection>
            </mx:dataProvider>
        </mx:MenuBar>

    I am trying to find the syntax that will let me set the Save item to enabled="true" after a file has been loaded by clicking "Load", but I can't figure it out; can someone make a suggestion? Currently, button clicks are detected by a switch/case over the String value of the MenuEvent's event.item.@label. Maybe this isn't the best way?
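
    A sketch of one E4X route, untested against this exact layout, so treat it as an assumption: since the dataProvider items are XML, the attribute can be flipped in place by filtering on labels.

        // assumes the dataProvider is the XMLListCollection declared above
        var items:XMLListCollection = XMLListCollection(myMenuBar.dataProvider);
        var menus:XMLList = items.source;
        menus.(@label == "File").item.(@label == "Save").@enabled = "true";

    The menu should read the attribute the next time it opens; if it doesn't, refreshing the collection (items.refresh()) is the usual nudge.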

    Read the article

  • Trouble using the term_extraction gem

    - by mathee
    I'm trying to install this gem: http://github.com/alexrabarts/term_extraction. It requires nokogiri, which I tried installing. I'm getting this as the output:

        >gem install nokogiri
        Successfully installed nokogiri-1.4.2.1-x86-mswin32
        1 gem installed
        Installing ri documentation for nokogiri-1.4.2.1-x86-mswin32...
        No definition for parse_memory
        No definition for parse_file
        No definition for parse_with
        No definition for get_options
        No definition for set_options
        Installing RDoc documentation for nokogiri-1.4.2.1-x86-mswin32...
        No definition for parse_memory
        No definition for parse_file
        No definition for parse_with
        No definition for get_options
        No definition for set_options

    I was able to install the term_extraction gem (per the README on the git repo):

        >gem install alexrabarts-term_extraction -s http://gems.github.com
        Successfully installed alexrabarts-term_extraction-0.1.4
        1 gem installed
        Installing ri documentation for alexrabarts-term_extraction-0.1.4...
        Installing RDoc documentation for alexrabarts-term_extraction-0.1.4...

    The issue is that when I try to test it out, I get an "uninitialized constant" error:

        ActionView::TemplateError (uninitialized constant ApplicationHelper::TermExtraction) on line #3 of app/views/questions/new.haml:
        1: %h1 New question
        2: -msg = "testing this context thing let's see what it gives me"
        3: -getTerms(msg)
        4: #new-question-form
        5: .box-background
        6: -form_for(@question) do |f|

        app/helpers/application_helper.rb:42:in `getTerms'
        app/views/questions/new.haml:3:in `_run_haml_app47views47questions47new46haml'
        haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render'
        haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render'
        app/controllers/questions_controller.rb:91:in `new'
        haml (2.2.23) lib/sass/plugin/rails.rb:20:in `process'

    Here is application_helper.rb:

        module ApplicationHelper
          def getTerms(context)
            yahoo = TermExtraction::Yahoo.new(:api_key => 'myAPIkey', :context => context)
          end
        end

    I'm not sure what the issue is. Any insight would be greatly appreciated!
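
    A hedged hunch: the error says Ruby resolved TermExtraction relative to ApplicationHelper, which usually means the gem is installed but never required. In Rails 2.x that declaration typically lives in config/environment.rb:

        # assumption: Rails 2.x-style gem configuration; :lib names the file
        # the gem actually loads.
        config.gem 'alexrabarts-term_extraction',
                   :lib    => 'term_extraction',
                   :source => 'http://gems.github.com'

    or, more bluntly, require 'term_extraction' at the top of the helper.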

    Read the article

  • Fake serial communication under Linux

    - by kigurai
    I have an application where I want to simulate the connection between a device and a "modem". The device will be connected to a serial port and will talk to the software modem through that. For testing purposes I want to be able to use a mock software device to test sending and receiving data. Example Python code:

        device = Device()
        modem = Modem()
        device.connect(modem)
        device.write("Hello")
        modem_reply = device.read()

    Now, in my final app I will just pass /dev/ttyS1 or COM1 or whatever for the application to use. But how can I do this purely in software? I am running Linux and the application is written in Python. I have tried making a FIFO (mkfifo ~/my_fifo) and that does work, but then I'd need one FIFO for writing and one for reading. What I want is to open ~/my_fake_serial_port and both read and write to it. I have also played with the pty module, but can't get that to work either: I can get a master and slave file descriptor from pty.openpty(), but trying to read or write to them only gives an IOError with a "Bad File Descriptor" message.
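
    For reference, a minimal sketch of the pty route (the details are assumptions, not tested against this exact setup): the slave end has a real /dev/pts/N path that can be handed to the application, while the test harness drives the master end.

        import os
        import pty
        import tty

        master_fd, slave_fd = pty.openpty()
        tty.setraw(slave_fd)                # raw mode: no echo, no line buffering
        print("fake serial port at", os.ttyname(slave_fd))  # e.g. /dev/pts/4

        os.write(master_fd, b"Hello")       # "modem" sends to the device...
        print(os.read(slave_fd, 5))         # ...device reads b'Hello'
        os.write(slave_fd, b"OK")           # device replies...
        print(os.read(master_fd, 2))        # ...modem reads b'OK'

    Without setraw, the slave starts in canonical mode with echo on, which can look a lot like "reads and writes just don't work".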

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures. However, because this design cannot track changes made through DML statements issued directly against the tables, triggers have also been implemented, and they use the value of @@NESTLEVEL to determine whether or not to run (the assumption being that all DML run through stored procedures will see @@NESTLEVEL > 1 in the trigger), i.e. the body of the trigger code looks something like:

        IF @@NESTLEVEL = 1 -- implies the call is direct SQL, so generate history from here
        BEGIN
            ... insert into audit table
        END

    This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or in any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?
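
    One pattern worth considering, offered as a sketch rather than a verdict (table and column names are invented): have every stored procedure stamp the session with CONTEXT_INFO and have the trigger audit only when the stamp is absent. The caveat is that the stamp persists for the session, so procedures should clear it (SET CONTEXT_INFO 0x0) before returning.

        -- at the top of every stored procedure:
        SET CONTEXT_INFO 0x1;

        -- hypothetical trigger that audits direct inserts only:
        CREATE TRIGGER trg_Player_Audit ON dbo.Player AFTER INSERT
        AS
        BEGIN
            IF CONTEXT_INFO() IS NULL OR SUBSTRING(CONTEXT_INFO(), 1, 1) <> 0x1
                INSERT INTO dbo.Player_Audit (PlayerId, ChangedAt)
                SELECT PlayerId, GETDATE() FROM inserted;
        END

    Unlike @@NESTLEVEL, the stamp survives dynamic SQL executed from inside a procedure, which is the case that breaks the current design.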

    Read the article

  • Converting ntext to nvarchar(max) - Getting around the size limitation

    - by Overflew
    Hi all, I'm trying to change an existing SQL ntext column to nvarchar(max), and I'm encountering an error on the size limit. There's a large amount of existing data, some of which is more than the 8K limit, I believe. We're looking to convert this so that the field is searchable in LINQ. The two SQL statements I've tried are:

        update Table set dataNVarChar = convert(nvarchar(max), dataNtext) where dataNtext is not null

        update Table set dataNVarChar = cast(dataNtext as nvarchar(max)) where dataNtext is not null

    And the error I get is:

        Cannot create a row of size 8086 which is greater than the allowable maximum row size of 8060.

    This is using SQL Server 2008. Any help appreciated, thanks.

    Update / solution: the marked answer below is correct; SQL Server 2008 can change the column to the correct data type in place, and there were no dramas with the LINQ-utilising application we use on top of it:

        alter table [TBL] alter column [COL] nvarchar(max)

    I've also been advised to follow it up with:

        update [TBL] set [COL] = [COL]

    which completes the conversion by moving the data from the LOB structure into the table (when the length is less than 8K), which improves performance / keeps things proper.

    Read the article

  • My chance to shape our development process/policy

    - by Matt Luongo
    Hey guys, I'm sorry if this is a duplicate, but the question search terms are pretty generic. I work at a small(ish) development firm. I say small, but the company is actually a fair size; however, I'm only the second full-time developer, as most past work has been organized around contractors. I'm in a position to define internal project process and policy- obvious stuff like SCM and unit-testing. Methodology is outside the scope of the document I'm putting together, but I'd really like to push us in a leaner (and maybe even Agile?) direction. I feel like I have plenty of good practice recommendations, but not enough solid motivation to make my document the spirit guide I'd like it to be. I've separated the document into "principles" and "recommendations". Recommendations have been easy to come up with. Use SCM, strive for 1-step, regularly scheduled builds, unit test first, document as you go... Listing the principles that are supposed to be informing these recommendations, though, has been rough. I've come up with "tools work for us; we should never work for tools" and a hazy clause aimed at our QA (which has been overly manual) that I'd like to read "tedium is the root of all evil". I don't want to miss an opportunity with this document to give us a good in-house start and maybe even push us toward Agile. What principles am I missing?

    Read the article

  • EXC_BAD_ACCESS on button press for button dynamically added to UIView within UIScrollView

    - by alan
    OK. It's an iPad app. Within the DetailViewController I have added a UIScrollView through IB, and within that UIScrollView I have added a UIView (also added through IB) which holds various dynamically added UITableViews, UILabels and UIButtons. My problem is that I'm getting errors on the UIButton clicks. I have defined this method:

        - (void)translationSearch:(id)sender {
            NSLog(@"in transearch");
            [self doSearch];
        }

    This is how I'm adding the UIButton to the UIView:

        UIButton *translationButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        translationButton.frame = CGRectMake(6, 200, 200, 20);
        translationButton.backgroundColor = [UIColor clearColor];
        [translationButton setTitle:@"testing" forState:UIControlStateNormal];
        [translationButton addTarget:self action:@selector(translationSearch:)
                    forControlEvents:UIControlEventTouchUpInside];
        [verbView addSubview:translationButton];

    Now, the button is added to the form without any issue but, when I press it, I get an EXC_BAD_ACCESS error. I'm sure it's staring me in the face, but I've passed my usual time limit for getting a bug like this fixed, so any help would be greatly appreciated. The only thing I can think of is that the UIButton being within a UIView, which is within a UIScrollView, which is within the view controller, is somehow causing an issue. Cheers.
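
    A hedged thought, since EXC_BAD_ACCESS on tap so often has the same shape under manual retain/release: UIControl does not retain its target, so if the object that owns translationSearch: has been released by the time the tap lands, the selector goes to a dangling pointer. Keeping the owner alive, for instance as a retained property of whatever created it (the name below is purely illustrative), is the usual fix, and NSZombieEnabled will confirm which object died.

        // hypothetical: whatever object passes `self` to addTarget: must
        // outlive the button, e.g. retained by its creator
        @property (nonatomic, retain) TranslationPanelController *translationController;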

    Read the article

  • OpenGL shader: make a colour "disappear"

    - by JFoulkes
    Hi, I'm new to OpenGL and shaders. I'm trying to do some augmented reality on the iPhone and messing about with shaders to alter a feed from the camera. What I'm trying to achieve is the appearance that an object in a picture has disappeared, by setting its colour to match the surrounding colour. I have a yellow rectangle and in it a small red circle; I want to give the impression that the red circle has disappeared by setting its colour to yellow. It won't always be solid colours, but I'm just trying to get the basics down first. Currently I have a simple shader which makes a red colour lighter, but this isn't ideal because it doesn't get close to the surrounding colour, and I want this to work for differently coloured objects and surroundings. I'm not even 100% sure shaders are what I need to be looking at, or even OpenGL; I'm using it because of the performance it gives on the iPhone. I'm basically asking: has anyone done or seen anything similar? Am I barking up the wrong tree using OpenGL ES and GLSL? Is this even possible? Cheers.
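
    As a starting point, a minimal GLSL ES fragment-shader sketch (the uniform names and the camera-texture binding are assumptions): pixels within some distance of a key colour are replaced by a fill colour, which is the basic version of the effect described.

        precision mediump float;
        uniform sampler2D uFrame;     // camera frame bound as a 2D texture
        uniform vec3 uKeyColor;       // colour to remove, e.g. the red circle
        uniform vec3 uFillColor;      // colour to paint instead, e.g. yellow
        varying vec2 vTexCoord;

        void main() {
            vec3 c = texture2D(uFrame, vTexCoord).rgb;
            // replace anything close enough to the key colour
            gl_FragColor = vec4(distance(c, uKeyColor) < 0.25 ? uFillColor : c, 1.0);
        }

    Sampling neighbouring texels instead of hard-coding uFillColor gets closer to "match the surroundings", at the cost of more texture reads per pixel.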

    Read the article

  • NSTimer as a timeout mechanism

    - by alexantd
    I'm pretty sure this is really simple and I'm just missing something obvious. I have an app that needs to download data from a web service for display in a UITableView, and I want to display a UIAlertView if the operation takes more than X seconds to complete. So this is what I've got (simplified for brevity):

    MyViewController.h:

        @interface MyViewController : UIViewController <UITableViewDelegate, UITableViewDataSource> {
            NSTimer *timer;
        }
        @property (nonatomic, retain) NSTimer *timer;

    MyViewController.m:

        @implementation MyViewController
        @synthesize timer;

        - (void)viewDidLoad {
            timer = [NSTimer scheduledTimerWithTimeInterval:20
                                                     target:self
                                                   selector:@selector(initializationTimedOut:)
                                                   userInfo:nil
                                                    repeats:NO];
            [self doSomethingThatTakesALongTime];
            [timer invalidate];
        }

        - (void)doSomethingThatTakesALongTime {
            sleep(30); // for testing only
            // web service calls etc. go here
        }

        - (void)initializationTimedOut:(NSTimer *)theTimer {
            // show the alert view
        }

    My problem is that I expected the [self doSomethingThatTakesALongTime] call to block while the timer kept counting, so that if it finished before the timer was done counting down, control would return to viewDidLoad and [timer invalidate] would cancel the timer. Obviously my understanding of how timers/threads work is flawed here, because as written the timer never goes off. However, if I remove the [timer invalidate], it does.
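
    The mechanics, plus a hedged sketch: a scheduled NSTimer fires on the current run loop, and viewDidLoad blocks that very run loop until it returns, so the timer can only ever fire after viewDidLoad finishes, by which point it has already been invalidated. Moving the slow work off the main thread keeps the run loop free:

        // one simple option of that era; the background method then needs its
        // own autorelease pool and must hop back to the main thread for UI work
        [self performSelectorInBackground:@selector(doSomethingThatTakesALongTime)
                               withObject:nil];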

    Read the article

  • Two radically different queries against 4 million records take the same time - one uses brute force

    - by IanC
    I'm using SQL Server 2008. I have a table with over 3 million records, which is related to another table with a million records. I have spent a few days experimenting with different ways of querying these tables, and I have it down to two radically different queries, both of which take 6s to execute on my laptop. The first query uses a brute-force method of evaluating possibly likely matches and removes incorrect matches via aggregate summation calculations. The second gets all possibly likely matches, then removes the incorrect ones via an EXCEPT query that uses two dedicated indexes to find the low and high mismatches. Logically, one would expect the brute-force query to be slow and the indexed one to be fast. Not so, and I have experimented heavily with indexes until I got the best speed for each. Further, the brute-force query doesn't require as many indexes, which means that technically it would yield better overall system performance. Below are the two execution plans (if you can't see them, please let me know and I'll re-post them in landscape orientation / mail them to you).

    [Execution plan image: brute-force query]
    [Execution plan image: index-based exception query]

    My question is: based on the execution plans, which one looks more efficient? I realize that things may change as my data grows.

    Read the article

  • How to clear XFixes regions

    - by ~buratinas
    Hi, I'm writing some low-level code for the X11 platform. To achieve the best data-copying performance I use the XFixes/XDamage extensions. How can I clear the contents of an XFixes region after one refresh cycle? Or do they clean themselves up after I use XFixesSetPictureClipRegion? My code is something like this:

        Display *xdpy;
        XShamPixmap pixmap_;
        XFixesRegion region_;

        damage_event_callback(damage_geometry_t geometry, XDamage damage, ...)
        {
            unsigned curr_region = XFixesCreateRegion(xdpy, 0, 0);
            XDamageSubtract(xdpy, damage, None, curr_region);
            XFixesTranslateRegion(xdpy, curr_region, geometry.left(), geometry.top());
            XFixesUnionRegion(xdpy, region_, region_, curr_region);
        }

        process_damage_events(...)
        {
            XFixesSetPictureClipRegion(xdpy, pixmap_, 0, 0, region_);
            XCopyArea(xdpy, window_->id(), pixmap_, XDefaultGC(xdpy, XDefaultScreen(xdpy)),
                      0, 0, width(), height(), 0, 0);
            /* Should clear region_ here */
            ...
        }

    Currently I clear the region by deleting and recreating it, but I guess that's not the best way to do it.
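
    One cheaper reset, offered as a guess rather than a confirmed answer: XFixes can overwrite a region's contents in place, so emptying it is just setting it to zero rectangles, which avoids the destroy/create round trip each cycle.

        /* re-use region_ instead of destroying and recreating it */
        XFixesSetRegion(xdpy, region_, NULL, 0);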

    Read the article

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate:

        0000000000000000000abcdefghijklm (input, 32 bits)
        0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits)

    I currently have it implemented in the following way in C:

        if (input & (1 << 12)) output |= 1 << 24;
        if (input & (1 << 11)) output |= 1 << 22;
        if (input & (1 << 10)) output |= 1 << 20;
        ...

    My compiler (MS Visual Studio) turned this into the following:

        test eax, 1000h
        jne  0064F5EC
        or   edx, 1000000h

    (repeated 13 times with minor differences in the constants). I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all the bits at once? Maybe I can use multiplication (by 0x11111111 or some other magical constant)? Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me? Any other ideas for making it faster? Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
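
    A branch-free sketch using the well-known mask-and-shift interleave trick (Bit Twiddling Hacks): each step doubles the gap between groups of bits, and the inverse transformation runs the same masks in the opposite order with right shifts.

        unsigned int spread13(unsigned int x)
        {
            x &= 0x1FFF;                        /* keep the 13 input bits    */
            x = (x | (x << 8)) & 0x00FF00FF;    /* split into 8-bit groups   */
            x = (x | (x << 4)) & 0x0F0F0F0F;    /* ...4-bit groups           */
            x = (x | (x << 2)) & 0x33333333;    /* ...2-bit groups           */
            x = (x | (x << 1)) & 0x55555555;    /* bit i now sits at bit 2i  */
            return x;
        }

    Five shifts, five ORs and five ANDs, with no branches for the predictor to miss.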

    Read the article

  • OptimisticLockException in inner transaction ruins outer transaction

    - by Pace
    I have the following code (OLE = OptimisticLockException)...

        public void outer() {
            try {
                middle();
            } catch (OLE e) {
                updateEntities();
                outer();
            }
        }

        @Transactional
        public void middle() {
            try {
                inner();
            } catch (OLE e) {
                updateEntities();
                middle();
            }
        }

        @Transactional
        public void inner() {
            // Do DB operation
        }

    inner() is called by other non-transactional methods, which is why both middle() and inner() are transactional. As you can see, I deal with OLEs by updating the entities and retrying the operation. The problem I'm having is that when I designed things this way, I assumed the only time one could get an OLE was when a transaction closed. This is apparently not the case, as the call to inner() throws an OLE even when the stack is outer() -> middle() -> inner(). Now, middle() handles the OLE properly and the retry succeeds, but when it comes time to close the transaction, it has been marked rollbackOnly by Spring, and when the middle() call finally returns, the closing aspect throws an exception because it can't commit a transaction marked rollbackOnly. I'm uncertain what to do here. I can't clear the rollbackOnly state, and I don't want to force-create a transaction on every call to inner() because that kills my performance. Am I missing something, or can anyone see a way to structure this differently? EDIT: to clarify what I'm asking, my main question is: is it possible to catch and handle an OLE inside an @Transactional method? FYI: the transaction manager is a JpaTransactionManager and the JPA provider is Hibernate.
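
    A structural sketch, not a verdict on the question: one common shape is to keep the retry loop outside any @Transactional method, for instance with Spring's TransactionTemplate, so each attempt runs in a fresh transaction and a rolled-back attempt never poisons an enclosing one.

        // assumption: a TransactionTemplate wired from the same
        // JpaTransactionManager; retryingMiddle() itself is NOT @Transactional
        public void retryingMiddle() {
            while (true) {
                try {
                    transactionTemplate.execute(status -> {
                        inner();          // runs in its own fresh transaction
                        return null;
                    });
                    return;
                } catch (OptimisticLockException e) {
                    updateEntities();     // refresh state, then retry
                }
            }
        }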

    Read the article

  • How good is the memory mapped Circular Buffer on Wikipedia?

    - by abroun
    I'm trying to implement a circular buffer in C, and have come across this example on Wikipedia. It looks as if it would provide a really nice interface for anyone reading from the buffer, as reads which wrap around from the end to the beginning of the buffer are handled automatically. So all reads are contiguous. However, I'm a bit unsure about using it straight away as I don't really have much experience with memory mapping or virtual memory and I'm not sure that I fully understand what it's doing. What I think I understand is that it's mapping a shared memory file the size of the buffer into memory twice. Then, whenever data is written into the buffer it appears in memory in 2 places at once. This allows all reads to be contiguous. What would be really great is if someone with more experience of POSIX memory mapping could have a quick look at the code and tell me if the underlying mechanism used is really that efficient. Am I right in thinking for example that the file in /dev/shm used for the shared memory always stays in RAM or could it get written to the hard drive (performance hit) at some point? Are there any gotchas I should be aware of? As it stands, I'm probably going to use a simpler method for my current project, but it'd be good to understand this to have it in my toolbox for the future. Thanks in advance for your time.
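
    For the toolbox, a condensed sketch of the trick in question (assuming Linux, a page-aligned size, and with error handling elided): the same shared-memory pages are mapped at two adjacent addresses, so a read that runs off the end of the first copy continues seamlessly into the second.

        #include <sys/mman.h>
        #include <fcntl.h>
        #include <unistd.h>

        void *ring_create(size_t size)          /* size: multiple of page size */
        {
            int fd = shm_open("/ring_demo", O_RDWR | O_CREAT, 0600);
            ftruncate(fd, size);
            shm_unlink("/ring_demo");            /* name no longer needed */

            /* reserve 2*size of address space, then map the file into both halves */
            char *base = mmap(NULL, 2 * size, PROT_NONE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            mmap(base,        size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
            mmap(base + size, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
            close(fd);
            return base;    /* base[i] and base[i + size] alias the same byte */
        }

    On the swap question: pages in a tmpfs like /dev/shm live in the page cache and can be swapped out under memory pressure, just like anonymous memory; they are not guaranteed to stay pinned in RAM.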

    Read the article

  • Producer consumer with qualifications

    - by tgguy
    I am new to Clojure and am trying to understand how to properly use its concurrency features, so any critique/suggestions are appreciated. I am trying to write a small test program in Clojure that works as follows:

    - There are 5 producers and 2 consumers.
    - A producer waits for a random time and then pushes a number onto a shared queue.
    - A consumer should pull a number off the queue as soon as the queue is non-empty, then sleep for a short time to simulate doing work.
    - The consumers should block when the queue is empty.
    - Producers should block when the queue has more than 4 items in it, to prevent it from growing huge.

    Here is my plan for each step above:

    1. The producers and consumers will be agents that don't really care about their state (just nil values or something); I just use the agents to send-off a "consumer" or "producer" function to run at some point. The shared queue will be (def queue (ref [])). Perhaps it should be an atom, though?
    2. In the "producer" agent function, simply (Thread/sleep (rand-int 1000)) and then (dosync (alter queue conj (rand-int 100))) to push onto the queue.
    3. I am thinking of making the consumer agents watch the queue for changes with add-watcher. I'm not sure about this, though: it will wake the consumers on any change, even one caused by a consumer pulling something off (possibly making the queue empty); perhaps checking for this in the watcher function is sufficient. Another problem I see is that if all consumers are busy, what happens when a producer adds something new to the queue? Does the watched event get queued up on some consumer agent, or does it disappear?
    4. See point 3.
    5. I really don't know how to do this. I heard that Clojure's seque may be useful, but I couldn't find enough documentation on how to use it, and my initial testing didn't seem to work (sorry, I don't have the code on me anymore).
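
    A hedged alternative sketch, not from the post itself: java.util.concurrent is directly usable from Clojure, and a bounded LinkedBlockingQueue gives all four blocking behaviours for free.

        (def queue (java.util.concurrent.LinkedBlockingQueue. 4))

        (defn producer []
          (while true
            (Thread/sleep (rand-int 1000))
            (.put queue (rand-int 100))))   ; blocks while 4 items are queued

        (defn consumer []
          (while true
            (let [n (.take queue)]          ; blocks while the queue is empty
              (Thread/sleep 100)            ; simulate doing work
              (println "consumed" n))))

        (dotimes [_ 5] (future (producer)))
        (dotimes [_ 2] (future (consumer)))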

    Read the article

  • Rails: creating a custom data type to use with generator classes, and a bunch of questions related to it

    - by Shyam
    Hi, after being productive with Rails for some weeks, I have learned some tricks and got some experience with the framework. About 10 days ago, I figured out it is possible to build a custom data type for migrations by adding some code to the table definition. Also, after learning a bit about floating point (and how evil it is) versus integers, the money gem and other possible solutions, I decided I didn't WANT to use the money gem, but instead to learn more about programming and find a solution myself. Some suggestions said that I should use integers, one for the whole part and one for the cents. While playing in script/console, I discovered how easy it is to work with calculations and arrays. But I am talking too much (only to give sufficient background). Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I would like to stay DRY. In my opinion, I should build a custom "object" that can hold two variables (Fixnum), one for the whole part and one for the cents. In my big dream, I would be able to do the following:

        script/generate scaffold Cake name:string description:text cost:mycustom

    where mycustom would create two integer columns (one for the whole part, one for the cents). Right now I could do this with:

        script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer

    I also had the idea of creating a "cost model", which would hold two integer columns and add a cost_id column to my scaffold. But wouldn't that be an extra table that causes some kind of performance penalty? And wouldn't it defy the purpose of the Cake model in the first place, since the costs are an attribute of individual Cake entries? The reason I want such functionality is that I am thinking of having multiple "costs" inside my Rails application. Thank you for your feedback, comments and answers! I hope my message comes through understandably; my apologies for incorrect grammar or weird sentences, as English is not my native language.
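
    A Rails 2-era sketch of the two-integer-columns idea (all names invented, so treat it as an illustration): composed_of maps a pair of columns onto one value object, so every model with a cost reuses the same class without an extra table.

        # migration columns: t.integer :cost_whole / t.integer :cost_cents
        class Cake < ActiveRecord::Base
          composed_of :cost,
                      :class_name => 'Cost',
                      :mapping    => [%w(cost_whole whole), %w(cost_cents cents)]
        end

        class Cost
          attr_reader :whole, :cents
          def initialize(whole, cents)
            @whole, @cents = whole.to_i, cents.to_i
          end
          def to_s
            format('%d.%02d', whole, cents)
          end
        end

    Usage would then be cake.cost.to_s, with the two integers persisting behind the scenes.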

    Read the article

  • How to dynamically load a progressive JPEG in ActionScript 3 using Flash and know its width/height

    - by didibus
    Hi, I am trying to dynamically load a progressive JPEG using ActionScript 3. To do so, I have created a class called ProgressiveLoader that creates a URLStream and uses it to stream the progressive JPEG's bytes into a ByteArray. Every time the ByteArray grows, I use a Loader to loadBytes the ByteArray. This works, to some extent: if I addChild the Loader, I can see the JPEG as it is streamed. But I am unable to access the Loader's content and, most importantly, I cannot change the width and height of the Loader. After a lot of testing, the cause of the problem seems to be that until the Loader has completely loaded the JPEG, meaning until it actually sees the end byte of the JPEG, it does not know the width and height and does not create a content DisplayObject to associate with the Loader's content. My question is: is there a way to know the width and height of the JPEG before it is fully loaded? P.S.: I would believe this is possible because of the nature of a progressive JPEG: it is loaded at its full size, but with less detail at first. Even when loading a normal JPEG this way, the size shows on screen, except that the pixels which are not loaded yet show as gray. Thank you.
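
    A sketch of one route worth trying (the marker walk is simplified and assumes the JPEG header has fully arrived): a JPEG's dimensions live in its SOFn segment near the start of the stream, so they can be read out of the ByteArray long before the final byte.

        import flash.utils.ByteArray;
        import flash.geom.Point;

        // returns width/height once a SOFn marker is in `bytes`, else null
        function jpegSize(bytes:ByteArray):Point {
            var i:uint = 2;                          // skip the 0xFFD8 SOI marker
            while (i + 9 < bytes.length) {
                if (bytes[i] != 0xFF) { i++; continue; }
                var marker:uint = bytes[i + 1];
                if (marker >= 0xC0 && marker <= 0xCF &&
                    marker != 0xC4 && marker != 0xC8 && marker != 0xCC) {  // SOFn
                    var h:uint = (bytes[i + 5] << 8) | bytes[i + 6];
                    var w:uint = (bytes[i + 7] << 8) | bytes[i + 8];
                    return new Point(w, h);
                }
                i += 2 + ((bytes[i + 2] << 8) | bytes[i + 3]);  // skip this segment
            }
            return null;
        }

    With width and height in hand, the Loader can be wrapped in a correctly sized container before its content reports a size of its own.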

    Read the article

  • Leveraging hobby experience to get a job

    - by Bernard
    Like many others, I began programming at an early age. I started when I was 11 and learned C when I was 14 (I'm now 26). While most of what I did was games just to entertain myself, I did everything from low-level 2D graphics, binary I/O and interfacing with free APIs to custom file systems, audio, 3D animations, OpenGL and web sites. I worked on a wide variety of things trying to make various games. Because of this experience, I have tested out of every college-level C/C++ programming course I have ever been offered. In the classes I took, my classmates would need a week to do what I finished in class with an hour or two of work. I now have my degree and 2 years of experience working full time as a web developer, but I would like to get back into C++ and hopefully do simulation programming. Unfortunately, I have yet to do C++ as a job; I have only done it to test out of classes and for my senior project in college. So most of my C++ experience is still hobby experience, and I don't know how best to convey that so that I don't end up stuck doing something too low-level for me. Right now I see a job offer that requires 2 years of C++ experience, but I have at least 9 (I didn't do C++ every day for the last 14 years). How do I convey my experience? How much is it truly worth, and how do I get its full value? The best thing I can think of is a demo and a portfolio; however, those only come into play after an interview has been secured. I used a portfolio to land my current job. All answers and advice are appreciated.

    Read the article

  • Non-standard interaction between two tables to avoid a very large merge

    - by riko
    Suppose I have two tables A and B. Table A has a multi-level index (a, b) and one column (ts); b uniquely determines ts.

        A = pd.DataFrame(
            [('a', 'x', 4), ('a', 'y', 6), ('a', 'z', 5),
             ('b', 'x', 4), ('b', 'z', 5), ('c', 'y', 6)],
            columns=['a', 'b', 'ts']).set_index(['a', 'b'])
        AA = A.reset_index()

    Table B is another one-column (ts) table with a non-unique index (a). The ts values are sorted "inside" each group, i.e. B.ix[x] is sorted for each x. Moreover, there is always a value in B.ix[x] that is greater than or equal to the values in A.

        B = pd.DataFrame(
            dict(a=list('aaaaabbcccccc'),
                 ts=[1, 2, 4, 5, 7, 7, 8, 1, 2, 4, 5, 8, 9])).set_index('a')

    The semantics are that B contains observations of occurrences of an event of the type indicated by the index. I would like to find, from B, the timestamp of the first occurrence of each event type after the timestamp specified in A, for each value of b. In other words, I would like a table with the same shape as A that, instead of ts, contains the "minimum value occurring after ts" as specified by table B. So my goal would be:

        C:
        ('a', 'x')    4
        ('a', 'y')    7
        ('a', 'z')    5
        ('b', 'x')    7
        ('b', 'z')    7
        ('c', 'y')    8

    I have some working code, but it is terribly slow:

        C = AA.apply(lambda row: (
            row[0], row[1],
            B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2]))),
            axis=1).set_index(['a', 'b'])

    Profiling shows the culprit is obviously B.ix[row[0]].irow(np.searchsorted(B.ts[row[0]], row[2])). However, standard solutions using merge/join would take too much RAM in the long run. Consider that right now I have 1000 a's; assume the average number of b's per a to be constant (probably 100-200), and consider that the number of observations per a is probably on the order of 300. In production I will have 1000x more a's. 1,000,000 x 200 x 300 = 60,000,000,000 rows may be a bit too much to keep in RAM, especially considering that the data I need is perfectly described by a C like the one discussed above. How would I improve the performance?
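
    For anyone hitting this on a newer pandas, a sketch (the column rename is invented): merge_asof with direction='forward' does exactly this grouped "first value >= ts" lookup as a single sorted merge, without materializing the cross-product.

        import pandas as pd

        BB = (B.reset_index()
               .rename(columns={'ts': 'ts_first'})
               .sort_values('ts_first'))

        C = (pd.merge_asof(AA.sort_values('ts'), BB,
                           left_on='ts', right_on='ts_first',
                           by='a', direction='forward')
               .set_index(['a', 'b'])['ts_first'])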

    Read the article

  • When and why can swprintf fail?

    - by Srekel
    I'm using swprintf to build a string into a buffer (using a loop, among other things):

        const int MaxStringLengthPerCharacter = 10 + 1;
        wchar_t* pTmp = pBuffer;
        for (size_t i = 0; i < nNumPlayers; ++i) {
            const int nPlayerId = GetPlayer(i);
            const int nWritten = swprintf(pTmp, MaxStringLengthPerCharacter, TEXT("%d,"), nPlayerId);
            assert(nWritten >= 0);
            pTmp += nWritten;
        }
        *pTaskPlayers = '\0';

    If the assert never fires during testing, can I be sure it will never fire in live code? That is, do I need to check for nWritten < 0 and handle it, or can I safely assume there won't be a problem? Under which circumstances can it return -1? The documentation more or less just states "if the function fails". In one place I've read that it will fail if it can't match the arguments (i.e. the format string to the varargs), but that doesn't worry me. I'm also not worried about buffer overrun in this case: I know the buffer is big enough.
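
    For reference, a small sketch of one concrete failure mode of the size-checked variants (exact return values vary by CRT, so treat the specifics as assumptions): when the formatted result will not fit, the call returns a negative value instead of writing a partial result.

        #include <stdio.h>
        #include <wchar.h>

        int main(void)
        {
            wchar_t buf[4];
            /* "123456" needs 6 characters plus the terminator but only
               4 are allowed, so this call fails */
            int n = swprintf(buf, 4, L"%d", 123456);
            printf("%d\n", n);   /* negative on the failure path */
            return 0;
        }

    The other standard failure case is an encoding error while converting between wide and multibyte characters (EILSEQ on POSIX systems); a mismatch between the format string and the arguments is undefined behaviour rather than a guaranteed -1, so the assert is not an ironclad guarantee.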

    Read the article
