Search Results

Search found 9106 results on 365 pages for 'course'.


  • Google Code + SVN or GitHub + Git

    - by Nazgulled
    Let me start by telling you that I have never used anything besides SVN and I'm also a Windows user. I have a couple of simple projects that are open source, and others are on their way once I'm happy enough to release their source code. Either way, I was thinking of using Google Code and SVN to share the source code of my projects instead of providing a link to the source on my website. That has always been a pain because I had to update the binaries and the code every time I released a new version. This would also help me keep a backup of my code somewhere other than just my local machine (I used to have a local Subversion server running). What I want from a service like this is very simple... I just want a place to store my source code that people can download if they want, that lets me control revisions, and that provides a simple and easy issue system so people can submit bugs and things like that. I guess both of them have this. But I don't want to host any binaries on their websites; I want downloads hosted on my website so I can track download statistics with my own scripts. I also have no need for wiki pages, as I prefer to keep all the documentation on my own website. Does either of these services provide a way to "disable" features like the wiki and downloads so they aren't shown at all for my project(s)? Now, I'm sure there are lots of pros and cons to using Google Code with SVN and GitHub with Git (of course), but here's what is important to me about each one and why I like them: Google Code: As with any Google page, the complexity is almost non-existent. Everyone (or almost everyone) has a Google account, and this is nice if people want to report problems using the issues system. GitHub: May (or may not) be a little more complex (not a problem for me though) than Google's pages, but... ...it has a much prettier interface than Google's service. It requires people to be registered on GitHub to post issues. I like the fact that with Git you have your own revisions locally (can I use TortoiseGit for this?). Basically that's it, not much I know... What other common pros and cons can you tell me about each site/software? Keep in mind that my projects are simple and I'm probably the only one who will ever develop them on these repositories (or maybe not, but for now I will).

    Read the article

  • Strange problem publishing a fresh Ruby on Rails 3 application on localhost (Apache, Passenger and VirtualHosts)

    - by user502052
    I recently created a new Ruby on Rails 3 application locally on Mac OS, named "test". Since I use apache2, in private/etc/apache2/httpd.conf I set the VirtualHost for the "test" application: <VirtualHost *:443> ServerName test.pjtmain.localhost:443 DocumentRoot "/Users/<my_user_name>/Sites/test/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/test/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on ... </VirtualHost> <VirtualHost *:80> ServerName test.pjtmain.localhost DocumentRoot "/Users/<my_user_name>/Sites/test/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/test/public"> Order allow,deny Allow from all </Directory> </VirtualHost> Of course I restarted apache2, but when trying to access http://test.pjtmain.localhost/ I get these error messages: FIREFOX Oops! Firefox could not find test.pjtmain.localhost Suggestions: * Search on Google: ... SAFARI Safari can’t find the server. Safari can’t open the page “http://test.pjtmain.localhost/” because Safari can’t find the server “test.pjtmain.localhost”. I have other RoR3 applications set up like the one above in the httpd.conf file and they all work. What is the problem (maybe it is not related to apache...)? Notes: 1. Using the 'Network Utility' I did a Ping with the following result: ping: cannot resolve test.pjtmain.localhost: Unknown host and I did a Lookup with the following result: ; <<>> DiG 9.6.0-APPLE-P2 <<>> test.pjtmain.localhost +multiline +nocomments +nocmd +noquestion +nostats +search ;; global options: +cmd <MY_BROADBAND_TELECOMUNICATIONS_COMPANY_NAME>.com. 115 IN SOA dns1.<MY_BROADBAND_TELECOMUNICATIONS_COMPANY_NAME>.com. dnsmaster.<MY_BROADBAND_TELECOMUNICATIONS_COMPANY_NAME>.com. ( 2010110500 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) 2. I am using Phusion Passenger 3. Since I haven't changed anything in the new "test" application, I expect to see the default RoR index.html page. 4. It seems that there is no warning or error in the 'Console Messages'.
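
    Since the ping and dig output above show that the name is simply not resolving, a quick local check of name resolution and /etc/hosts narrows the problem down before touching Apache. This is only a small illustrative sketch, not part of the original question; it assumes a standard Python install and the usual /etc/hosts location, and uses the ServerName from the VirtualHost above:

        import socket

        NAME = "test.pjtmain.localhost"   # ServerName from the VirtualHost above

        try:
            print(NAME, "resolves to", socket.gethostbyname(NAME))
        except socket.gaierror as err:
            print(NAME, "does not resolve:", err)

        # If the name does not resolve, an /etc/hosts entry such as
        # "127.0.0.1  test.pjtmain.localhost" is the usual fix for a .localhost
        # development hostname like this one.
        with open("/etc/hosts") as hosts:
            listed = any(NAME in line for line in hosts
                         if not line.lstrip().startswith("#"))
        print("listed in /etc/hosts:", listed)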

    Read the article

  • what to do with a flawed C++ skills test

    - by Mike Landis
    In the following gcc.gnu.org post, Nathan Myers says that a C++ skills test at SANS Consulting Services contained three errors in nine questions: Looking around, one of the first on-line C++ skills tests I ran across was: http://www.geekinterview.com/question_details/13090 I looked at question 1... find(int x,int y) { return ((x<y)?0:(x-y)):} call find(a,find(a,b)) use to find (a) maximum of a,b (b) minimum of a,b (c) positive difference of a,b (d) sum of a,b ... immediately wondering why anyone would write anything so obtuse. Getting past the absurdity, I didn't really like any of the answers, immediately eliminating (a) and (b) because you can get back zero (which is neither a nor b) in a variety of circumstances. Sum or difference seemed more likely, except that you could also get zero regardless of the magnitudes of a and b. So... I put Matlab to work (code below) and found: when either a or b is negative you get zero; when b > a you get a; otherwise you get b, so the answer is (b) min(a,b), if a and b are positive, though strictly speaking the answer should be none of the above because there are no range restrictions on either variable. That forces test takers into a dilemma - choose the best available answer and be wrong in 3 of 4 quadrants, or don't answer, leaving the door open to the conclusion that the grader thinks you couldn't figure it out. The solution for test givers is to fix the test, but in the interim, what's the right course of action for test takers? Complain about the questions? function z = findfunc(x,y) for i=1:length(x) if x(i) < y(i) z(i) = 0; else z(i) = x(i) - y(i); end end end function [b,d1,z] = plotstuff() k = 50; a = [-k:1:k]; b = (2*k+1) * rand(length(a),1) - k; d1 = findfunc(a,b); z = findfunc(a,d1); plot( a, b, 'r.', a, d1, 'g-', a, z, 'b-'); end
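
    For readers without Matlab, the same analysis is easy to reproduce; the sketch below is just an illustrative Python re-check of the quadrant behaviour described above, not part of the original post:

        # Re-implement the test's find() and probe find(a, find(a, b)) in each quadrant.
        def find(x, y):
            return 0 if x < y else x - y

        for a, b in [(3, 7), (7, 3), (-3, 7), (3, -7), (-3, -7)]:
            print(a, b, "->", find(a, find(a, b)))
        # Prints 3, 3, 0, 0, 0: min(a, b) only when both values are non-negative,
        # and 0 ("none of the above") whenever either value is negative.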

    Read the article

  • Hibernate3: Self-Referencing Objects

    - by monojohnny
    Need some help on understanding how to do this; I'm going to be running recursive 'find' on a file system and I want to keep the information in a single DB table - with a self-referencing hierarchial structure: This is my DB Table structure I want to populate. DirObject Table: id int NOT NULL, name varchar(255) NOT NULL, parentid int NOT NULL); Here is the proposed Java Class I want to map (Fields only shown): public DirObject { int id; String name; DirObject parent; ... For the 'root' directory was going to use parentid=0; real ids will start at 1, and ideally I want hibernate to autogenerate the ids. Can somebody provide a suggested mapping file for this please; as a secondary question I thought about doing the Java Class like this instead: public DirObject { int id; String name; List<DirObject> subdirs; Could I use the same data model for either of these two methods ? (With a different mapping file of course). --- UPDATE: so I tried the mapping file suggested below (thanks!), repeated here for reference: <hibernate-mapping> <class name="my.proj.DirObject" table="category"> ... <set name="subDirs" lazy="true" inverse="true"> <key column="parentId"/> <one-to-many class="my.proj.DirObject"/> </set> <many-to-one name="parent" class="my.proj.DirObject" column="parentId" cascade="all" /> </class> ...and altered my Java class to have BOTH 'parentid' and 'getSubDirs' [returning a 'HashSet']. This appears to work - thanks, but this is the test code I used to drive this - I think I'm not doing something right here, because I thought Hibernate would take care of saving the subordinate objects in the Set without me having to do this explicitly ? DirObject dirobject=new DirObject(); dirobject.setName("/files"); dirobject.setParent(dirobject); DirObject d1, d2; d1=new DirObject(); d1.setName("subdir1"); d1.setParent(dirobject); d2=new DirObject(); d2.setName("subdir2"); d2.setParent(dirobject); HashSet<DirObject> subdirs=new HashSet<DirObject>(); subdirs.add(d1); subdirs.add(d2); dirobject.setSubdirs(subdirs); session.save(dirobject); session.save(d1); session.save(d2);

    Read the article

  • Doing some stuff right before the user exits the page

    - by Mike
    I have seen some questions here regarding what I want to achieve and have based what I have so far on those answer. But there is a slight misbehavior that is still irritating me. What I have is sort of a recovery feature. Whenever you are typing text, the client sends a sync request to the server every 45 seconds. It does 2 things. First, it extends the lease the client has on the record (only one person may edit at one time) for another 60 seconds. Second, it sends the text typed so far to the server in case the server crashes, internet connection fails, etc. In that case, the next time the user enters our application, the user is notified that something has gone wrong and that some text was recovered. Think of Microsoft or OpenOffice recovery whenever they crash! Of course, if the user leaves the page willingly, the user does not need to be notified and as a result, the recovery is deleted. I do that final request via a beforeunload event. Everything went fine until I was asked to make a final adjustment... The same behavior you have here at stack overflow when you exit the editor... a confirm dialogue. This works so far, BUT, the confirm dialogue is shown twice. Here is the code. The event if (local.sync.autosave_textelement) { window.onbeforeunload = exitConfirm; } The function function exitConfirm() { var local = Core; if (confirm('blub?')) { local.sync.autosave_destroy = true; sync(false); return true; } else { return false; } }; Some problem irrelevant clarifications: Core is a global Object that contains a lot of variables that are used everywhere. sync makes an ajax request. The values are based on the values that the Core.sync object contains. The parameter determines if the call should be async (default) or sync. Edit 1 I did try to separate both things (recovery deletion and user confirmation that is) into beforeunload and unload. The problem there was that unload is a bit too late. The user gets informed that there is a recovery even though it is scheduled to be deleted. If you refresh the page 1 second later, the dialogue disappears as the file was deleted by then.

    Read the article

  • .NET and C# Exceptions: What is it reasonable to catch?

    - by djna
    Disclaimer: I'm from a Java background and I don't do much C#. There's a great deal of transfer between the two worlds, but of course there are differences, and one is in the way exceptions tend to be thought about. I recently answered a C# question suggesting that under some circumstances it's reasonable to do this: try { some work } catch (Exception e) { commonExceptionHandler(); } (The reasons why are immaterial.) I got a response that I don't quite understand: until .NET 4.0, it's very bad to catch Exception. It means you catch various low-level fatal errors and so disguise bugs. It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed, so even if the callExceptionReporter function tries to log and quit, it may not even get to that point (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database). Maybe I'm more confused than I realise, but I don't agree with some of that. Would other folks please comment? I understand that there are many low-level exceptions we don't want to swallow. My commonExceptionHandler() function could reasonably rethrow those. This seems consistent with this answer to a related question, which does say "Depending on your context it can be acceptable to use catch(...), providing the exception is re-thrown." So I conclude that using catch (Exception) is not always evil; silently swallowing certain exceptions is. The phrase "Until .NET 4 it is very bad to catch Exception": what changes in .NET 4? Is this a reference to AggregateException, which may give us some new things to do with exceptions we catch, but which I don't think changes the fundamental "don't swallow" rule? The next phrase really bothers me. Can this be right? It also means that in the event of some kind of corruption that triggers such an exception, any open finally blocks on the stack will be executed (the finally blocks may throw again, or cause more corruption, or delete something important from the disk or database) My understanding is that if some low-level code had lowLevelMethod() { try { lowestLevelMethod(); } finally { some really important stuff } } and in my code I call lowLevelMethod(): try { lowLevelMethod() } catch (Exception e) { exception handling and maybe rethrowing } Whether or not I catch Exception, this has no effect whatsoever on the execution of the finally block. By the time we leave lowLevelMethod() the finally has already run. If the finally is going to do any of the bad things, such as corrupt my disk, then it will do so. My catching the Exception made no difference. If it reaches my Exception block I need to do the right thing, but I can't be the cause of mis-executing finally blocks.

    Read the article

  • How to implement a tagging plugin for jQuery

    - by anxiety
    Goal: To implement a jQuery plugin for my rails app (or write one myself, if necessary) that creates a "box" around text after a delimiter is typed. Example: With tagging on SO, the user begins typing a tag, then selects one from the drop-down list provided. The input field recognizes that a tag has been selected, puts a space and then is ready for the next tag. Similarly, I am attempting to use this plugin to put a box around the previously entered tag before moving to to accept the next tag/input. The instructions in the README.txt seem simple enough, however I have been receiving a $(".tagbox").tagbox is not a function error when debugging my app with firebug. Here is what I have in my application.js: $(document).ready( function(){ $('.tagbox').tagbox({ separator: /\[,]/, // specifying comma separation for <code>tags</code> }); }); And here is my _form.html.erb: <% form_for @tag do |f| %> <%= f.error_messages %> <p> <%= f.label :name %><br /> <%= text_field :tag, :name, { :method => :get, :class => "tagbox" } %> </p> <p><%= f.submit "Submit" %></p> <% end %> I have omitted some other code (namely the implementation of an autocomplete plugin) existing within my _form.html.erb and application.js for sake of readability. The inclusion or exclusion of this omitted code does not affect the performance of this plugin. I have included all of the necessary files for the tagbox plugin (as well as application.js after all other included JS files) within the javascript_include_tag in my application.html.erb file. I'm pretty much confused as to why I'd be getting this "not a function" error when jquery.tagbox.js clearly defines the function and is included in the head of my html page in question. I've been struggling with this plugin for longer than I'd like to admit, so any help would really be appreciated. And, of course, I'm open to any other plugins or from-scratch suggestions you may have in mind.. This tagbox plugin does not seem to have a wealth of documentation or any currently working examples. Also to note, I'm trying to avoid using jrails. Thanks for your time

    Read the article

  • Loading the last related record instantly for multiple parent records using Entity framework

    - by Guillaume Schuermans
    Does anyone know a good approach using Entity Framework for the problem described below? For our next release I am trying to come up with a performant way to show the placed orders for the logged-on customer. Although paging is always a good technique to use when a lot of data is available, I would like to see an answer that does not use any paging techniques. Here's the story: a customer places an order which gets an orderstatus = PENDING. Depending on some strategy we move that order up the chain in order to get it APPROVED. Every change of status is logged so we can see a trace of statuses, and maybe even an extra line of comment per status which can provide some extra valuable information to whoever sees this order in an interface. So an Order is linked to a Customer. One order can have multiple order statuses stored in OrderStatusHistory. In my test scenario I am using a customer which has 100+ Orders, each with about 5 records in the OrderStatusHistory table. For now I would like to see all orders on one page, without paging, where for each Order I show the last relevant Status and the extra comment (if there is any for this last status; both fields come from OrderStatusHistory, the record with the highest Id for the given OrderId). There are multiple scenarios I have tried, but I would like to see any potential other solutions or comments on the things I have already tried. Trying to do Include() when getting Orders, but this still results in multiple queries launched on the database. Each order triggers an extra query to the database to get all statuses in the history table. So all statuses are queried here instead of just returning the last relevant one, plus 100 extra queries are launched for 100 orders. You can imagine the problem when there are 100000+ orders in the database. Having 2 computed columns on the database, LastStatus and LastStatusInformation, and a regular Linq query which gets those columns, which are available through the Entity model. The problem with this approach is the fact that those computed columns are determined using a scalar function which can not be changed without removing the formula from the computed column, etc... In the end I am very familiar with SQL and stored procedures, but since the rest of the data layer uses Entity Framework I would like to stick to it as long as possible, even though I have my doubts about performance. Using the SQL approach I would write something like this: WITH cte (RN, OrderId, [Status], Information) AS ( SELECT ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY Id DESC), OrderId, [Status], Information FROM OrderStatus ) SELECT o.Id, cte.[Status], cte.Information AS StatusInformation, o.* FROM [Order] o INNER JOIN cte ON o.Id = cte.OrderId AND cte.RN = 1 WHERE CustomerId = @CustomerId ORDER BY 1 DESC; which returns all orders for the customer with the status information provided by the Common Table Expression. Does anyone know a good approach using Entity Framework?

    Read the article

  • java web application best practices

    - by Bruce
    Hi all I'm trying to figure out the optimum way to develop and release a fairly simple web application, and I'm running into several problems. I'll outline the decisions I've made, because somewhere I've clearly gone off the rails.. Hugely grateful for any help! I have what I think is a fairly simple web application. It contains a couple of jsps that reference a couple of java beans, and the usual static html, js, css and images. Decision 1) I wanted to have a clear and clean release procedure, such that I could develop on my local machine and then release reliably to a production machine. I therefore made the decision to package the application into a war file (including all the static resources), to minimize the separate bits and pieces I would need to release. So far so good? Decision 2) I wanted things on my local machine to be as similar as possible to the production environment. So in my html, for example, I may have a reference to a static file such as http://static.foo.com/file . To keep this code working seamlessly on dev and prod, I decided to put static.foo.com in my /etc/hosts when developing locally, so that all the urls work correctly without changing anything. Decision 3) I decided to use eclipse and maven to give me a best practice environment for administering and building my project. So I have a nice tight set up now, except that: Every time I want to change anything in development, like one line in an html file, I have to rebuild the entire project and then wait for tomcat to load the war before I can see if it's what I wanted. So my questions are: 1) Is there a way to connect up eclipse and tomcat so that I don't have to rebuild the war each time? ie tomcat is looking straight at my actual workspace to serve up the static files? 2)I think I'm maybe making things harder by using /etc/hosts to reflect production urls - is there a better way that doesn't involve manually changing over urls (relative urls are fine of course, but where you have many subdomains, say one for static files and one for dynamic, you have to write out the full path, surely?) 3) Is this really best practice?? How do people set things up so that they balance the requirement for an automated, all-encompassing build process on the one hand, and the speed and flexibility to be able to develop javascript and html and css quickly, as quickly as if one just pointed apache at the directory and developed live? What do people find works? Many thanks!

    Read the article

  • Implementing coroutines in Java

    - by JUST MY correct OPINION
    This question is related to my question on existing coroutine implementations in Java. If, as I suspect, it turns out that there is no full implementation of coroutines currently available in Java, what would be required to implement them? As I said in that question, I know about the following: You can implement "coroutines" as threads/thread pools behind the scenes. You can do tricksy things with JVM bytecode behind the scenes to make coroutines possible. The so-called "Da Vinci Machine" JVM implementation has primitives that make coroutines doable without bytecode manipulation. There are various JNI-based approaches to coroutines also possible. I'll address each one's deficiencies in turn. Thread-based coroutines This "solution" is pathological. The whole point of coroutines is to avoid the overhead of threading, locking, kernel scheduling, etc. Coroutines are supposed to be light and fast and to execute only in user space. Implementing them in terms of full-tilt threads with tight restrictions gets rid of all the advantages. JVM bytecode manipulation This solution is more practical, albeit a bit difficult to pull off. This is roughly the same as jumping down into assembly language for coroutine libraries in C (which is how many of them work) with the advantage that you have only one architecture to worry about and get right. It also ties you down to only running your code on fully-compliant JVM stacks (which means, for example, no Android) unless you can find a way to do the same thing on the non-compliant stack. If you do find a way to do this, however, you have now doubled your system complexity and testing needs. The Da Vinci Machine The Da Vinci Machine is cool for experimentation, but since it is not a standard JVM its features aren't going to be available everywhere. Indeed I suspect most production environments would specifically forbid the use of the Da Vinci Machine. Thus I could use this to make cool experiments but not for any code I expect to release to the real world. This also has the added problem similar to the JVM bytecode manipulation solution above: won't work on alternative stacks (like Android's). JNI implementation This solution renders the point of doing this in Java at all moot. Each combination of CPU and operating system requires independent testing and each is a point of potentially frustrating subtle failure. Alternatively, of course, I could tie myself down to one platform entirely but this, too, makes the point of doing things in Java entirely moot. So... Is there any way to implement coroutines in Java without using one of these four techniques? Or will I be forced to use the one of those four that smells the least (JVM manipulation) instead?

    Read the article

  • A NSMutableArray is destroying my life!

    - by camilo
    EDITED to show the relevant part of the code Hi. There's a strange problem with an NSMutableArray which I'm just not understanding... Explaining: I have a NSMutableArray, defined as a property (nonatomic, retain), synthesized, and initialized with 29 elements. realSectionNames = [[NSMutableArray alloc] initWithCapacity:29]; After the initialization, I can insert elements as I wish and everything seems to be working fine. While I'm running the application, however, if I insert a new element in the array, I can print the array in the function where I inserted the element, and everything seems ok. However, when I select a row in the table, and I need to read that array, my application crashes. In fact, it cannot even print the array anymore. Is there any "magical and logical trick" everybody should know when using a NSMutableArray that a beginner like myself can be missing? Thanks a lot. I declare my array as realSectionNames = [[NSMutableArray alloc] initWithCapacity:29]; I insert objects in my array with [realSectionNames addObject:[category categoryFirstLetter]]; although I know i can also insert it with [realSectionNames insertObject:[category categoryFirstLetter] atIndex:i]; where the "i" is the first non-occupied position. After the insertion, I reload the data of my tableView. Printing the array before or after reloading the data shows it has the desired information. After that, selecting a row at the table makes the application crash. This realSectionNames is used in several UITableViewDelegate functions, but for the case it doesn't matter. What truly matters is that printing the array in the beginning of the didSelectRowAtIndexPath function crashes everything (and of course, doesn't print anything). I'm pretty sure it's in that line, for printing anything he line before works (example): NSLog(@"Anything"); NSLog(@"%@", realSectionNames); gives the output: 2010-03-24 15:16:04.146 myApplicationExperience[3527:207] Anything [Session started at 2010-03-24 15:16:04 +0000.] GNU gdb 6.3.50-20050815 (Apple version gdb-967) (Tue Jul 14 02:11:58 UTC 2009) Copyright 2004 Free Software Foundation, Inc. GDB is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for GDB. Type "show warranty" for details. This GDB was configured as "i386-apple-darwin".sharedlibrary apply-load-rules all Attaching to process 3527. Still not understanding what kind of stupidity I've done this time... maybe it's not too late to follow the career of brain surgeon?

    Read the article

  • def constrainedMatchPair(firstMatch,secondMatch,length):

    - by smart
    matches of a key string in a target string, where one of the elements of the key string is replaced by a different element. For example, if we want to match ATGC against ATGACATGCACAAGTATGCAT, we know there is an exact match starting at 5 and a second one starting at 15. However, there is another match starting at 0, in which the element A is substituted for C in the key, that is we match ATGC against the target. Similarly, the key ATTA matches this target starting at 0, if we allow a substitution of G for the second T in the key string. consider the following steps. First, break the key string into two parts (where one of the parts could be an empty string). Let's call them key1 and key2. For each part, use your function from Problem 2 to find the starting points of possible matches, that is, invoke starts1 = subStringMatchExact(target,key1) and starts2 = subStringMatchExact(target,key2) The result of these two invocations should be two tuples, each indicating the starting points of matches of the two parts (key1 and key2) of the key string in the target. For example, if we consider the key ATGC, we could consider matching A and GC against a target, like ATGACATGCA (in which case we would get as locations of matches for A the tuple (0, 3, 5, 9) and as locations of matches for GC the tuple (7,). Of course, we would want to search over all possible choices of substrings with a missing element: the empty string and TGC; A and GC; AT and C; and ATG and the empty string. Note that we can use your solution for Problem 2 to find these values. Once we have the locations of starting points for matches of the two substrings, we need to decide which combinations of a match from the first substring and a match of the second substring are correct. There is an easy test for this. Suppose that the index for the starting point of the match of the first substring is n (which would be an element of starts1), and that the length of the first substring is m. Then if k is an element of starts2, denoting the index of the starting point of a match of the second substring, there is a valid match with one substitution starting at n, if n+m+1 = k, since this means that the second substring match starts one element beyond the end of the first substring. finally the question is Write a function, called constrainedMatchPair which takes three arguments: a tuple representing starting points for the first substring, a tuple representing starting points for the second substring, and the length of the first substring. The function should return a tuple of all members (call it n) of the first tuple for which there is an element in the second tuple (call it k) such that n+m+1 = k, where m is the length of the first substring.
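
    A minimal sketch of the function being asked for, assuming starts1 and starts2 are tuples of starting indices as returned by subStringMatchExact in Problem 2:

        def constrainedMatchPair(firstMatch, secondMatch, length):
            # Keep every start n of the first part for which the second part starts
            # exactly one element past its end: n + length + 1 must appear in
            # secondMatch, the skipped element being the substituted position.
            second = set(secondMatch)
            return tuple(n for n in firstMatch if n + length + 1 in second)

        # With key 'ATGC' split into 'A' and 'GC' against 'ATGACATGCA', the starts
        # are (0, 3, 5, 9) and (7,); only n = 5 survives, since 5 + 1 + 1 == 7.
        print(constrainedMatchPair((0, 3, 5, 9), (7,), 1))   # -> (5,)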

    Read the article

  • ASP.NET MVC: How can I explain an invalid type violation to an end-user with Html.ValidationSummary?

    - by Terminal Frost
    Serious n00b warning here; please take mercy! So I finished the Nerd Dinner MVC Tutorial and I'm now in the process of converting a VB.NET application to ASP.NET MVC using the Nerd Dinner program as a sort of rough template. I am using the "IsValid / GetRuleViolations()" pattern to identify invalid user input or values that violate business rules. I am using LINQ to SQL and am taking advantage of the "OnValidate()" hook that allows me to run the validation and throw an application exception upon trying to save changes to the database via the CustomerRepository class. Anyway, everything works well, except that by the time the form values reach my validation method invalid types have already been converted to a default or existing value. (I have a "StreetNumber" property that is an integer, though I imagine this would be a problem for DateTime or any other non-strings as well.) Now, I am guessing that the UpdateModel() method throws an exception and then alters the value because the Html.ValidationMessage is displayed next to the StreetNumber field but my validation method never sees the original input. There are two problems with this: While the Html.ValidationMessage does signal that something is wrong, there is no corresponding entry in the Html.ValidationSummary. If I could even get the exception message to show up there indicating an invalid cast or something that would be better than nothing. My validation method which resides in my Customer partial class never sees the original user input so I do not know if the problem is a missing entry or an invalid type. I can't figure out how I can keep my validation logic nice and neat in one place and still get access to the form values. I could of course write some logic in the View that processes the user input, however that seems like the exact opposite of what I should be doing with MVC. Do I need a new validation pattern or is there some way to pass the original form values to my model class for processing? CustomerController Code // POST: /Customers/Edit/[id] [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(int id, FormCollection formValues) { Customer customer = customerRepository.GetCustomer(id); try { UpdateModel(customer); customerRepository.Save(); return RedirectToAction("Details", new { id = customer.AccountID }); } catch { foreach (var issue in customer.GetRuleViolations()) ModelState.AddModelError(issue.PropertyName, issue.ErrorMessage); } return View(customer); }

    Read the article

  • Remove links with Javascript

    - by Arlen Beiler
    How do I remove links from a webpage with Javascript. I am using Google Chrome. The code I tried is: function removehyperlinks() { try { alert(document.anchors.length); alert(document.getElementsByTagName('a')); for(i=0;i=document.anchors.length;i++) { var a = document.anchors[i]; a.outerHTML = a.innerHTML; var b = document.getElementsByTagName('a'); b[i].outerHTML = b[i].innerHTML; } } catch(e) { alert (e);} alert('done'); } Of course, this is test code, which is why I have the alerts and 2 things trying at the same time. The first alert returns "0" the second [Object NodeList] and the third returns "done". My html body looks like this: <body onload="removehyperlinks()"> <ol style="text-align:left;" class="messagelist"> <li class="accesscode"><a href="#">General information, Updates, &amp; Meetings<span class="extnumber">141133#</span></a> <ol> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li><a href="#">...</a></li> <li start="77"><a href="#"">...</a></li> <li start="88"><a href="#">...</a></li> <li start="99"><a href="#">...</a></li> </ol> </li> </ol> </body>

    Read the article

  • Mysql select - improve performances

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, each product has 3 different price tags - 3 different loan types. Everything went pretty well during testing time, query execution time was perfect, but today when transfered the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds and it happens frequently and thus it cant keep up and its hella slow. The table that is used to store the data has approximately 2 milion records and each select looks like this: SELECT * FROM products_loans WHERE KOD IN("X17/Q30-10", "X17/12", "X17/5-24") AND 369.27 BETWEEN CENA_OD AND CENA_DO; 3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, thus 3 rows are returned. But since I need to display 10 products per page, I need to run it trough a modified select using OR, since I didnt find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referencing post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the show create table, kod and CENA_OD/CENA_DO very indexed via INDEX. CREATE TABLE `products_loans` ( `KOEF_ID` bigint(20) NOT NULL, `KOD` varchar(30) NOT NULL, `AKONTACIA` int(11) NOT NULL, `POCET_SPLATOK` int(11) NOT NULL, `koeficient` decimal(10,2) NOT NULL default '0.00', `CENA_OD` decimal(10,2) default NULL, `CENA_DO` decimal(10,2) default NULL, `PREDAJNA_CENA` decimal(10,2) default NULL, `AKONTACIA_SUMA` decimal(10,2) default NULL, `TYP_VYHODY` varchar(4) default NULL, `stage` smallint(6) NOT NULL default '1', PRIMARY KEY (`KOEF_ID`), KEY `CENA_OD` (`CENA_OD`), KEY `CENA_DO` (`CENA_DO`), KEY `KOD` (`KOD`), KEY `stage` (`stage`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 And also selecting all loan types and later filtering them trough php doesnt work good, since each type has over 50k records and the select takes too much time as well... Any ides about improving the speed are appreciated. Edit: Here is the explain +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+ | 1 | SIMPLE | products_loans | range | CENA_OD,CENA_DO,KOD | KOD | 92 | NULL | 190158 | Using where | +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+ I have tried the combined index and it improved the performance on the test server from 0.44 sec to 0.06 sec, I cant access the production server from home though, so I will have to try it tomorrow.

    Read the article

  • output with "Private`" Content in Mathematica Package

    - by madalina
    Hello everyone, I am trying to solve the following implementation problem in Mathematica 7.0 for some days now and I do not understand exactly what is happening so I hope someone can give me some hints. I have 3 functions that I implemented in Mathematica in a source file with extension *.nb. They are working okay to all the examples. Now I want to put these functions into 3 different packages. So I created three different packages with extension .*m in which I put all the desired Mathematica function. An example in the "stereographic.m" package which contain the code: BeginPackage["stereographic`"] stereographic::usage="The package stereographic...." formEqs::usage="The function formEqs[complexBivPolyEqn..." makePoly::usage="The function makePoly[algebraicEqn] ..." getFixPolys::usage="The function..." milnorFibration::usage="The function..." Begin["Private`"] Share[]; formEqs[complex_,{m_,n_}]:=Block[{complexnew,complexnew1, realeq, imageq, expreal, expimag, polyrealF, polyimagF,s,t,u,v,a,b,c,epsilon,x,y,z}, complexnew:=complex/.{m->s+I*t,n->u+I*v}; complexnew1:=complexnew/.{s->(2 a epsilon)/(1+a^2+b^2+c^2),t->(2 b epsilon)/(1+a^2+b^2+c^2),u->(2 c epsilon)/(1+a^2+b^2+c^2),v->(- epsilon+a^2 epsilon+b^2 epsilon+c^2 epsilon)/(1+a^2+b^2+c^2)}; realeq:=ComplexExpand[Re[complexnew1]]; imageq:=ComplexExpand[Im[complexnew1]]; expreal:=makePoly[realeq]; expimag:=makePoly[imageq]; polyrealF:=expreal/.{a->x,b->y,c->z}; polyimagF:=expimag/.{a->x,b->y,c->z}; {polyrealF,polyimagF} ] End[] EndPackage[] Now to test the function I load the package Needs["stereographic`"] everything is okay. But when I test the function for example with formEqs[x^2-y^2,{x,y}] I get the following ouput: {Private`epsilon^2 + 2 Private`x^2 Private`epsilon^2 + Private`x^4 Private`epsilon^2 - 6 Private`y^2 Private`epsilon^2 + 2 Private`x^2 Private`y^2 Private`epsilon^2 + Private`y^4 Private`epsilon^2 - 6 Private`z^2 Private`epsilon^2 + 2 Private`x^2 Private`z^2 Private`epsilon^2 + 2 Private`y^2 Private`z^2 Private`epsilon^2 + Private`z^4 Private`epsilon^2, 8 Private`x Private`y Private`epsilon^2 + 4 Private`z Private`epsilon^2 - 4 Private`x^2 Private`z Private`epsilon^2 - 4 Private`y^2 Private`z Private`epsilon^2 - 4 Private`z^3 Private`epsilon^2} Of course I do not understand why Private` appears in front of any local variable which I returned in the final result. I would want not to have this Private` in the computed output. Any idea or better explanations which could indicate me why this happens? Thank you very much for your help. Best wishes, madalina

    Read the article

  • dynamically embedding youtube videos with jquery

    - by danwoods
    Hello all, I'm trying to retrieve a listing of a user's youtube videos and embed them in a page using jQuery. My code looks something like this: $(document).ready(function() { //some variables var fl_obj_template = $('<object width="260" height="140">' + '<param name="movie" value=""></param>' + '<param name="allowFullScreen" value="true"></param>' + '<param name="allowscriptaccess" value="always"></param>' + '<embed src="" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="260" height="140"></embed>' + '</object>'); var video_elm_arr = $('.video'); //hide videos until ready $('.video').addClass('hidden'); //pull video data from youtube $.ajax({ url: 'http://gdata.youtube.com/feeds/api/users/username/uploads?alt=json', dataType: 'jsonp', success: function(data) { $.each(data.feed.entry, function(i,item){ //only take the first 7 videos if(i > 6) return; //give the video element a flash object var cur_flash_obj = fl_obj_template; //assign title $(video_elm_arr[i]).find('.video_title').html(item.title.$t); //clean url var video_url = item.media$group.media$content[0].url; var index = video_url.indexOf("?"); if (index > 0) video_url = video_url.substring(0, index); //and asign it to the player's parameters $(cur_flash_obj).find('object param[name="movie"]').attr('value', video_url); $(cur_flash_obj).find('object embed').attr('src', video_url); //alert(cur_flash_obj); //insert flash object in video element $(video_elm_arr[i]).append(cur_flash_obj); //and show $(video_elm_arr[i]).removeClass('hidden'); }); } }); }); (of course with 'username' being the actual username). The video titles appear correctly but no videos show up. What gives? The target html looks like: <div id="top_row_center" class="video_center video"> <p class="video_title"></p> </div>

    Read the article

  • grdb not working variables

    - by stupid_idiot
    hi, i know this is kinda retarded but I just can't figure it out. I'm debugging this: xor eax,eax mov ah,[var1] mov al,[var2] call addition stop: jmp stop var1: db 5 var2: db 6 addition: add ah,al ret the numbers that I find on addresses var1 and var2 are 0x0E and 0x07. I know it's not segmented, but that ain't reason for it to do such escapades, because the addition call works just fine. Could you please explain to me where is my mistake? I see the problem, dunno how to fix it yet though. The thing is, for some reason the instruction pointer starts at 0x100 and all the segment registers at 0x1628. To address the instruction the used combination is i guess [cs:ip] (one of the segment registers and the instruction pointer for sure). The offset to var1 is 0x10 (probably because from the begining of the code it's the 0x10th byte in order), i tried to examine the memory and what i got was: 1628:100 8 bytes 1628:108 8 bytes 1628:110 <- wtf? (assume another 8 bytes) 1628:118 ... whatever tricks are there in the memory [cs:var1] points somewhere else than in my code, which is probably where the label .data would usually address ds.... probably.. i don't know what is supposed to be at 1628:10 ok, i found out what caused the assness and wasted me whole fuckin day. the behaviour described above is just correct, the code is fully functional. what i didn't know is that grdb debugger for some reason sets the begining address to 0x100... the sollution is to insert the directive ORG 0x100 on the first line and that's the whole thing. the code was working because instruction pointer has the right address to first instruction and goes one by one, but your assembler doesn't know what effective address will be your program stored at so it pretty much remains relative to first line of the code which means all the variables (if not using label for data section) will remain pointing as if it started at 0x0. which of course wouldn't work with DOS. and grdb apparently emulates some DOS features... sry for the language, thx everyone for effort, hope this will spare someone's time if having the same problem... heheh.. at least now i know the reason why to use .data section :))))

    Read the article

  • Why is this simple Mobile Form not closed when using the player

    - by ajhvdb
    Hi, I created this simple sample Form with the close button. Everything is working as expected when NOT using the Interop.WMPLib.dll I've seen other applications using this without problems but why isn't the Form process closed when I just add the line: SoundPlayer myPlayer = new SoundPlayer(); and of course dispose it: if (myPlayer != null) { myPlayer.Dispose(); myPlayer = null; } The Form closes but the debugger VS2008 is still active. The Form project and the dll are still active. If you send me an email to [email protected], I can send you the zipped project. Below is the class for the dll: using System; using System.Collections.Generic; using System.Text; using System.Threading; using System.Runtime.InteropServices; using WMPLib; namespace WindowsMobile.Utilities { public delegate void SoundPlayerStateChanged(SoundPlayer sender, SoundPlayerState newState); public enum SoundPlayerState { Stopped, Playing, Paused, } public class SoundPlayer : IDisposable { [DllImport("coredll")] public extern static int waveOutSetVolume(int hwo, uint dwVolume); [DllImport("coredll")] public extern static int waveOutGetVolume(int hwo, out uint dwVolume); WindowsMediaPlayer myPlayer = new WindowsMediaPlayer(); public SoundPlayer() { myPlayer.uiMode = "invisible"; myPlayer.settings.volume = 100; } string mySoundLocation = string.Empty; public string SoundLocation { get { return mySoundLocation; } set { mySoundLocation = value; } } public void Pause() { myPlayer.controls.pause(); } public void PlayLooping() { Stop(); myPlayer.URL = mySoundLocation; myPlayer.settings.setMode("loop", true); } public int Volume { get { return myPlayer.settings.volume; } set { myPlayer.settings.volume = value; } } public void Play() { Stop(); myPlayer.URL = mySoundLocation; myPlayer.controls.play(); } public void Stop() { myPlayer.controls.stop(); myPlayer.close(); } #region IDisposable Members public void Dispose() { try { Stop(); } catch (Exception) { } // need this otherwise the process won't exit?! try { int ret = Marshal.FinalReleaseComObject(myPlayer); } catch (Exception) { } myPlayer = null; GC.Collect(); } #endregion } }

    Read the article

  • How should I handle the case in which a username is already in use?

    - by idealmachine
    I'm a JavaScript programmer and new to PHP and MySQL (want to get into server-side coding). Because I'm trying to learn PHP by building a simple online game (more specifically, correspondence chess), I'm starting by implementing a simple user accounts system. Of course, user registration comes first. What are the best practices for: How I should handle the (likely) possibility that when a user tries to register, the username he has chosen is already in use, particularly when it comes to function return values?($result === true is rather ugly, and I'm not sure whether checking the MySQL error code is the best way to do it either) How to cleanly handle varying page titles?($gPageTitle = '...'; require_once 'bgsheader.php'; is also rather ugly) Anything else I'm doing wrong? In some ways, PHP is rather different from JavaScript... Here is a (rather large) excerpt of the code I have written so far. Note that this is a work in progress and is missing security checks that I will add as my next step. function addUser( $username, $password ) { global $gDB, $gPasswordSalt; $stmt = $gDB->prepare( 'INSERT INTO user(user_name, user_password, user_registration) VALUES(?, ?, NOW())' ); $stmt || trigger_error( 'Failed to prepare statement: ' . htmlspecialchars( $gDB->error ) ); $hashedPassword = hash_hmac( 'sha256', $password, $gPasswordSalt, true ); $stmt->bind_param( 'ss', $username, $hashedPassword ); if( $stmt->execute() ) { return true; } elseif( $stmt->errno == 1062) { return 'exists'; } else { trigger_error( 'Failed to execute statement: ' . htmlspecialchars( $stmt->error ) ); } } $username = $_REQUEST['username']; $password = $_REQUEST['password']; $result = addUser( $username, $password ); if( $result === true ) { $gPageTitle = 'Registration successful'; require_once 'bgsheader.php'; echo '<p>You have successfully registered as ' . htmlspecialchars( $username ) . ' on this site.</p>'; } elseif( $result == 'exists' ) { $gPageTitle = 'Username already taken'; require_once 'bgsheader.php'; echo '<p>Someone is already using the username you have chosen. Please try using another one instead.'; } else { trigger_error('This should never happen'); } require_once 'bgsfooter.php';

    Read the article

  • Subband decomposition using Daubechies filter

    - by misha
    I have the following two 8-tap filters: h0 ['-0.010597', '0.032883', '0.030841', '-0.187035', '-0.027984', '0.630881', '0.714847', '0.230378'] h1 ['-0.230378', '0.714847', '-0.630881', '-0.027984', '0.187035', '0.030841', '-0.032883', '-0.010597'] Here they are on a graph: I'm using it to obtain the approximation (lower subband of an image). This is a(m,n) in the following diagram: I got the coefficients and diagram from the book Digital Image Processing, 3rd Edition, so I trust that they are correct. The star symbol denotes one dimensional convolution (either over rows or over columns). The down arrow denotes downsampling in one dimension (either over rows, or columns). My problem is that the filter coefficients for h0 and h1 sum to greater than 1 (approximately 1.4 or sqrt(2) to be exact). Naturally, if I convolve any image with the filter, the image will get brighter. Indeed, here's what I get (expected result on right): Can somebody suggest what the problem is here? Why should it work if the convolution filter coefficients sum to greater than 1? I have the source code, but it's quite long so I'm hoping to avoid posting it here. If it's absolutely necessary, I'll put it up later. EDIT What I'm doing is: Decompose into subbands Filter one of the subbands Recompose subbands into original image Note that the point isn't just to have a displayable subband-decomposed image -- I have to be able to perfectly reconstruct the original image from the subbands as well. So if I scale the filtered image in order to compensate for my decomposition filter making the image brighter, this is what I will have to do: Decompose into subbands Apply intensity scaling Filter one of the subbands Apply inverse intensity scaling Recompose subbands into original image Step 2 performs the scaling. This is what @Benjamin is suggesting. The problem is that then step 4 becomes necessary, or the original image will not be properly reconstructed. This longer method will work. However, the textbook explicitly says that no scaling is performed on the approximation subband. Of course, it's possible that the textbook is wrong. However, what's more possible is I'm misunderstanding something about the way this all works -- this is why I'm asking this question.

    Read the article

  • Creating multiple heads in remote repository

    - by Jab
    We are looking to move our team (~10 developers) from SVN to mercurial. We are trying to figure out how to manage our workflow. In particular, we are trying to see if creating remote heads is the right solution. We currently have a very large repository with multiple, related projects. They share a lot of code, but pieces of the project are deployed by different teams (3 teams) independent of other portions of the code-base. So each team is working on concurrent large features. The way we currently handles this in SVN are branches. Team1 has a branch for Feature1, same deal for the other teams. When Team1 finishes their change, it gets merged into the trunk and deployed out. The other teams follow suite when their project is complete, merging of course. So my initial thought are using Named Branches for these situations. Team1 makes a Feature1 branch off of the default branch in Hg. Now, here is the question. Should the team PUSH that branch, in it's current/half-state to the repository. This will create a second head in the core repo. My initial reaction was "NO!" as it seems like a bad idea. Handling multiple heads on our repository just sounds awful, but there are some advantages... First, the teams want to setup Continuous Integration to build this branch during their development cycle(months long). This will only work if the CI can pull this branch from the repo. This is something we do now with SVN, copy a CI build and change the branch. Easy. Second, it makes it easier for any team member to jump onto the branch and start working. Without pushing to the core repo, they would have to receive a push from a developer on that team with the changeset information. It is also possible to lose local commits to hardware failure. The chances increase a lot if it's a branch by a single developer who has followed the "don't push until finished" approach. And lastly is just for ease of use. The developers can easily just commit and push on their branch at any time without consequence(as they do today, in their SVN branches). Is there a better way to handle this scenario that I may be missing? I just want a veteran's opinion before moving forward with the strategy. For bug fixes we like the general workflow of mecurial, anonymous branches that only consist of 1-2 commits. The simplicity is great for those cases. By the way, I've read this , great article which seems to favor Named branches.

    Read the article

  • get renamed file names of multiple upload form [js array] in codeigniter

    - by artmania
    Hi friends, I use codeigniter. I have a multiple image upload form. The code below is working well for uploading, but I also need to save file names to database. How can I get the names in here? I spent hours & hours :/ but could not sort it :/ Appreciate helps!!! uploadform.php echo form_open_multipart('gallery/upload'); <input type="file" name="photo" size="50" /> <input type="file" name="thumb" size="50" /> <input type="submit" value="Upload" /> </form> I have a controller between form view and model load model (of course : )) but didnt post here because of no need. gallery_model.php function multiple_upload($upload_dir = 'uploads/', $config = array()) { /* Upload */ $CI =& get_instance(); $files = array(); if(empty($config)) { $config['upload_path'] = realpath($upload_dir); $config['allowed_types'] = 'gif|jpg|jpeg|jpe|png'; $config['max_size'] = '2048'; } $CI->load->library('upload', $config); $errors = FALSE; foreach($_FILES as $key => $value) { if( ! empty($value['name'])) { if( ! $CI->upload->do_upload($key)) { $data['upload_message'] = $CI->upload->display_errors(ERR_OPEN, ERR_CLOSE); // ERR_OPEN and ERR_CLOSE are error delimiters defined in a config file $CI->load->vars($data); $errors = TRUE; } else { // Build a file array from all uploaded files $files[] = $CI->upload->data(); } } } // There was errors, we have to delete the uploaded files if($errors) { foreach($files as $key => $file) { @unlink($file['full_path']); } } elseif(empty($files) AND empty($data['upload_message'])) { $CI->lang->load('upload'); $data['upload_message'] = ERR_OPEN.$CI->lang->line('upload_no_file_selected').ERR_CLOSE; $CI->load->vars($data); } else { return $files; } /* ------------------------------- Insert to database */ // problem is here, i need file names to add db. // if there is already same names file at the folder, it rename file itself. so in such case, I need renamed file name :/ } }

    Read the article

  • BULK INSERT from one table to another all on the server

    - by steve_d
    I have to copy a bunch of data from one database table into another. I can't use SELECT ... INTO because one of the columns is an identity column. Also, I have some changes to make to the schema. I was able to use the export data wizard to create an SSIS package, which I then edited in Visual Studio 2005 to make the changes desired and whatnot. It's certainly faster than an INSERT INTO, but it seems silly to me to download the data to a different computer just to upload it back again. (Assuming that I am correct that that's what the SSIS package is doing). Is there an equivalent to BULK INSERT that runs directly on the server, allows keeping identity values, and pulls data from a table? (as far as I can tell, BULK INSERT can only pull data from a file) Edit: I do know about IDENTITY_INSERT, but because there is a fair amount of data involved, INSERT INTO ... SELECT is kinda of slow. SSIS/BULK INSERT dumps the data into the table without regards to indexes and logging and whatnot, so it's faster. (Of course creating the clustered index on the table once it's populated is not fast, but it's still faster than the INSERT INTO...SELECT that I tried in my first attempt) Edit 2: The schema changes include (but are not limited to) the following: 1. Splitting one table into two new tables. In the future each will have its own IDENTITY column, but for the migration I think it will be simplest to use the identity from the original table as the identity for the both new tables. Once the migration is over one of the tables will have a one-to-many relationship to the other. 2. Moving columns from one table to another. 3. Deleting some cross reference tables that only cross referenced 1-to-1. Instead the reference will be a foreign key in one of the two tables. 4. Some new columns will be created with default values. 5. Some tables aren’t changing at all, but I have to copy them over due to the "put it all in a new DB" request.

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file in to ALL (overlapping) substrings of size N (bound N, eg 36) while throwing out newline characters. I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) in to every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8 import cStringIO example_file = cStringIO.StringIO("""\ header CAGTcag TFgcACF """) for read in parse(example_file): ... print read ... CAGTCAGTF AGTCAGTFG GTCAGTFGC TCAGTFGCA CAGTFGCAC AGTFGCACF The function that I found had the absolute best performance from the methods I could think of is this: def parse(file): size = 8 # of course in my code this is a function argument file.readline() # skip past the header buffer = '' for line in file: buffer += line.rstrip().upper() while len(buffer) = size: yield buffer[:size] buffer = buffer[1:] This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note, this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!

    Read the article
