Search Results

Search found 19928 results on 798 pages for 'multiple constructors'.

  • How would I sort files to directories based on filenames?

    - by gnomed
    I have a huge number of files to sort, all named in some terrible convention. Here are some examples:

        (4)_mr__mcloughlin____.txt
        12__sir_john_farr____.txt
        (b)mr__chope____.txt
        dame_elaine_kellett-bowman____.txt
        dr__blackburn______.txt

    Each of these names is supposed to be a different person (speaker). Someone in another IT department produced these from a ton of XML files using some script, but the naming is unfathomably bad, as you can see. I need to sort literally tens of thousands of these files, with multiple text files per person, each made unique by something arbitrary: extra underscores, a random number, and so on. They need to be sorted by speaker. This would be easier with a script to do most of the work; then I could just go back and merge folders that should be under the same name. There are a couple of ways I was thinking about doing this:

    1. Parse the name out of each filename and sort the files into folders, one per unique name.
    2. Build a list of all the unique names from the filenames, look through this simplified list for similar ones and ask me whether they are the same, and once that is determined, sort the files accordingly.

    I plan on using Perl, but I can try a new language if it's worth it. I'm not sure how to go about reading each filename in a directory, one at a time, into a string for parsing into an actual name. I'm not completely sure how to parse with regexes in Perl either, but that is probably googleable. For the sorting I was just going to shell out to `cp filename.txt /example/destination/filename.txt`, simply because that's what I know, so it's easiest. I don't even have a pseudocode idea of what I'm going to do, so if someone knows the best sequence of actions, I'm all ears. I guess I am looking for a lot of help, and I am open to any suggestions. Many thanks to anyone who can help. B.
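
    For illustration, here is a minimal Perl sketch of option 1, assuming all the files sit in one flat source directory; the cleanup rules (stripping the "(4)"/"12"/"(b)" prefixes and the titles) are guesses from the examples above and will need tuning:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use File::Copy qw(copy);
        use File::Path qw(make_path);

        my $src = 'speeches';    # hypothetical source directory
        my $dst = 'sorted';      # hypothetical destination root

        opendir(my $dh, $src) or die "Cannot open $src: $!";
        for my $file (grep { /\.txt$/ } readdir $dh) {
            my $name = $file;
            $name =~ s/\.txt$//;         # drop the extension
            $name =~ s/^\(\w+\)//;       # strip "(4)", "(b)" prefixes
            $name =~ s/^[\d_]+//;        # strip leading digits/underscores
            $name =~ s/_+/ /g;           # collapse underscores to spaces
            $name =~ s/^\s+|\s+$//g;     # trim
            $name =~ s/^(?:mr|mrs|ms|sir|dr|dame) //;  # drop honorifics
            my $dir = "$dst/$name";
            make_path($dir) unless -d $dir;
            copy("$src/$file", "$dir/$file")
                or warn "copy failed for $file: $!";
        }
        closedir $dh;

    Folders that end up split across name variants can then be merged by hand, which matches the "script does most of the work" plan.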

  • Mercurial "server"

    - by user85116
    I've been using Mercurial for a little while, but mainly for my own usage. Now, though, I have a project where two of us are building the same thing, and we will probably be modifying each other's files. I would like to set up a Mercurial repo on a server and make that repo the "server", so that my changes and the other editor's changes both get pushed to it (so basically the Subversion/CVS model). I like Mercurial, though, and don't want to switch to something like Subversion.

    Here in my own network, everything is done on Linux, and my "server" has OpenSSH installed. So pushing my changes (I work on multiple computers) from one computer to the server is just a matter of "hg push"; the protocol used for transferring the changes is SSH. The problem is that I use Linux, the server will be Windows (so no OpenSSH, right?), and the other editor will be using Windows too.

    As far as I know, the best way of working with Mercurial in these types of setups is for the repo to pull changes from the source, rather than the source pushing to the "server". But I'm behind several firewalls (the network isn't entirely mine) and my computer won't be visible from the server, and I'm assuming the other editor is behind a firewall too (so we can't just start up the local Mercurial HTTP server and get the "server" computer to pull from that).

    What's the best way for both editors to get our changes to the server repo? (I should add that the server is a server on the internet, just as visible as something like google.com. It's a hosted Windows server, but I would probably have permission to install software if needed for this.)
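
    One direction worth sketching: Mercurial can also accept pushes over HTTP(S) via hgweb, which sidesteps SSH entirely. A minimal, hypothetical config for the hosted Windows box (served through IIS or Apache via hgweb.cgi/hgweb.wsgi) might look like:

        # hgweb.config -- paths and user names are illustrative
        [paths]
        project = C:/repos/project

        [web]
        allow_push = alice, bob   ; users allowed to push
        push_ssl = true           ; require HTTPS for pushes

    Both editors, from behind their firewalls, would then only need an outbound HTTPS connection:

        hg push https://example-host/hg/project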

  • Database tables with dynamic information

    - by Tim Fennis
    I've googled this and found that it's almost impossible to create a database table with dynamic columns. I'll explain my problem first. I am building a webshop for a customer. It sells various computer products: CPUs, HDDs, RAM, etc. All these products have different properties; a CPU has an FSB, RAM has a CAS latency. This is very inconvenient, because my orders table would need foreign keys to several different tables, which is impossible. Another option is to store all the product-specific information in a varchar or blob field and let PHP figure it out. The problem with that solution is that the website needs a PC builder: a step-by-step guide to building your PC. So if, for instance, a customer decides he wants a new "i7 920", I want to be able to select all motherboards for socket 1366, which is impossible when all the data is stored in one field. I know it's possible to select all motherboards from the DB and let PHP figure out which ones are for socket 1366, but I was wondering: is there a better solution?
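
    One common answer to this, sketched here with illustrative table names, is an entity-attribute-value (EAV) layout: a fixed schema where each product-specific property becomes a row rather than a column, which keeps the orders table pointing at a single products table and still lets you filter on any property in SQL:

        CREATE TABLE product (
          id    INT PRIMARY KEY AUTO_INCREMENT,
          name  VARCHAR(255) NOT NULL,
          price DECIMAL(10,2) NOT NULL
        );

        CREATE TABLE attribute (
          id   INT PRIMARY KEY AUTO_INCREMENT,
          name VARCHAR(64) NOT NULL        -- e.g. 'socket', 'fsb', 'cas_latency'
        );

        CREATE TABLE product_attribute (
          product_id   INT NOT NULL,
          attribute_id INT NOT NULL,
          value        VARCHAR(255) NOT NULL,
          PRIMARY KEY (product_id, attribute_id)
        );

        -- all motherboards for socket 1366
        SELECT p.*
        FROM product p
        JOIN product_attribute pa ON pa.product_id = p.id
        JOIN attribute a          ON a.id = pa.attribute_id
        WHERE a.name = 'socket' AND pa.value = '1366';

    The trade-off is that values lose their types (everything is a string) and multi-property filters need one join per property.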

  • Using data.table to aggregate

    - by dayne
    After multiple suggestions from SO users, I am finally trying to convert my code over to using data.tables.

        library(data.table)
        DT <- data.table(plate = paste0("plate", rep(1:2, each = 5)),
                         id = rep(c("CTRL", "CTRL", "ID1", "ID2", "ID3"), 2),
                         val = 1:10)

        > DT
             plate   id val
          1: plate1 CTRL   1
          2: plate1 CTRL   2
          3: plate1  ID1   3
          4: plate1  ID2   4
          5: plate1  ID3   5
          6: plate2 CTRL   6
          7: plate2 CTRL   7
          8: plate2  ID1   8
          9: plate2  ID2   9
         10: plate2  ID3  10

    What I would like to do is take the average of DT[, val] by plate when the id is "CTRL". I would normally aggregate the data frame, then use match to map the values back to a new column, 'ctrl'. Using the data.table package I can get:

        DT[id == "CTRL", ctrl := mean(val), by = plate]

        > DT
             plate   id val ctrl
          1: plate1 CTRL   1  1.5
          2: plate1 CTRL   2  1.5
          3: plate1  ID1   3   NA
          4: plate1  ID2   4   NA
          5: plate1  ID3   5   NA
          6: plate2 CTRL   6  6.5
          7: plate2 CTRL   7  6.5
          8: plate2  ID1   8   NA
          9: plate2  ID2   9   NA
         10: plate2  ID3  10   NA

    What I need is really:

        DT <- data.table(plate = paste0("plate", rep(1:2, each = 5)),
                         id = rep(c("CTRL", "CTRL", "ID1", "ID2", "ID3"), 2),
                         val = 1:10,
                         ctrl = rep(c(1.5, 6.5), each = 5))

        > DT
             plate   id val ctrl
          1: plate1 CTRL   1  1.5
          2: plate1 CTRL   2  1.5
          3: plate1  ID1   3  1.5
          4: plate1  ID2   4  1.5
          5: plate1  ID3   5  1.5
          6: plate2 CTRL   6  6.5
          7: plate2 CTRL   7  6.5
          8: plate2  ID1   8  6.5
          9: plate2  ID2   9  6.5
         10: plate2  ID3  10  6.5

    Eventually I would like to use much more complicated selections of the values, but I do not know how to select specific values, run some function, then map those values back to the appropriate row using data frames.
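
    For reference, one data.table idiom that produces exactly this shape is to do the subsetting inside j instead of i, so the grouped assignment touches every row of each plate; a one-line sketch:

        DT[, ctrl := mean(val[id == "CTRL"]), by = plate]

    Because i is left empty, := assigns the per-plate CTRL mean to all rows in the group, not just the CTRL rows.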

  • How to multi-thread this?

    - by WilliamKF
    I wish to have two threads. The first, thread1, occasionally calls the following pseudo-function:

        void waitForThread2() {
            if (thread2 is not idle) {
                return;
            }
            notifyThread2IamReady();
            while (thread2IsExclusive) {
            }
        }

    The second, thread2, runs the following pseudo-loop forever:

        for (;;) {
            Notify thread1 I am idle.
            while (!thread1IsReady()) {
            }
            Notify thread1 I am exclusive.
            Do some work while thread1 is blocked.
            Notify thread1 I am busy.
            Do some work in parallel with thread1.
        }

    What is the best way to write this so that both thread1 and thread2 are kept as busy as possible on a machine with multiple cores? I would like to avoid long delays between notification in one thread and detection by the other. I tried using pthread condition variables, but found that the delay between thread2 doing "notify thread1 I am busy" and the loop in waitForThread2() seeing thread2IsExclusive() change can be up to almost one second. I then tried using a volatile sig_atomic_t shared variable to control the same, but something is going wrong, so I must not be doing it correctly.
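
    For comparison, here is a minimal pthreads sketch of this handshake with one mutex, one condition variable, and a small state machine; the names and the placement of the work sections are illustrative. Waits sit in loops to handle spurious wakeups, and a correctly used condition variable should hand off in microseconds, not seconds (a common cause of long delays is changing the flags without holding the mutex):

        #include <pthread.h>

        enum state { IDLE, READY, EXCLUSIVE, BUSY };

        static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
        static enum state st = BUSY;

        /* called occasionally by thread1 */
        void waitForThread2(void)
        {
            pthread_mutex_lock(&m);
            if (st != IDLE) {                 /* thread2 not idle: skip */
                pthread_mutex_unlock(&m);
                return;
            }
            st = READY;                       /* notify thread2 I am ready */
            pthread_cond_broadcast(&cv);
            while (st == READY || st == EXCLUSIVE)
                pthread_cond_wait(&cv, &m);   /* sleep until thread2 is busy */
            pthread_mutex_unlock(&m);
        }

        /* thread2's loop */
        static void *thread2_main(void *arg)
        {
            for (;;) {
                pthread_mutex_lock(&m);
                st = IDLE;                    /* notify thread1 I am idle */
                pthread_cond_broadcast(&cv);
                while (st != READY)
                    pthread_cond_wait(&cv, &m);
                st = EXCLUSIVE;               /* notify thread1 I am exclusive */
                pthread_mutex_unlock(&m);

                /* ... work while thread1 is blocked ... */

                pthread_mutex_lock(&m);
                st = BUSY;                    /* notify thread1 I am busy */
                pthread_cond_broadcast(&cv);
                pthread_mutex_unlock(&m);

                /* ... work in parallel with thread1 ... */
            }
            return arg;  /* not reached */
        }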

  • Foreign-key-like merge in R

    - by skyl
    I'm merging a bunch of CSVs with one row per id/pk/seqn.

        > full = merge(demo, lab13am, by="seqn", all=TRUE)
        > full = merge(full, cdq, by="seqn", all=TRUE)
        > full = merge(full, mcq, by="seqn", all=TRUE)
        > full = merge(full, cfq, by="seqn", all=TRUE)
        > full = merge(full, diq, by="seqn", all=TRUE)
        > print(length(full$ridageyr))
        [1] 9965
        > print(summary(full$ridageyr))
           Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
           0.00   11.00   19.00   29.73   48.00   85.00

    Everything is great. But I have another file which has multiple rows per id, like:

        "seqn","rxd030","rxd240b","nhcode","rxq250"
        56,2,"","",NA,NA,""
        57,1,"ACETAMINOPHEN","01200",2
        57,1,"BUDESONIDE","08800",1
        58,1,"99999","",NA

    seqn 57 has two rows. So if I naively try to merge this file, I get a ton more rows and my data gets all skewed up:

        > full = merge(full, rxq, by="seqn", all=TRUE)
        > print(length(full$ridageyr))
        [1] 15643
        > print(summary(full$ridageyr))
           Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
           0.00   14.00   41.00   40.28   66.00   85.00

    Is there a normal, idiomatic way to deal with data like this? Suppose I want a way to make a simple model like:

        MYSPECIAL_FACTOR <- somehow()
        glm(MYSPECIAL_FACTOR ~ full$ridageyr, family=binomial)

    where MYSPECIAL_FACTOR is, say, whether or not rxd240b == "ACETAMINOPHEN" for the observations, which are unique by seqn. You can reproduce by running the first bit of this.
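
    One common idiom, sketched here under the assumption that rxq is the multi-row table: collapse it to one row per seqn first, then merge, so the 1:1 shape of full is preserved:

        acet <- aggregate(rxd240b ~ seqn, data = rxq,
                          FUN = function(x) any(x == "ACETAMINOPHEN"))
        names(acet)[2] <- "acetaminophen"
        full <- merge(full, acet, by = "seqn", all.x = TRUE)
        glm(acetaminophen ~ ridageyr, data = full, family = binomial)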

  • Need a set based solution to group rows

    - by KM
    I need to group a set of rows based on the Category column, and also limit the combined rows based on the SUM(Number) column being less than or equal to the @Limit value.

    For each distinct Category value I need to identify "buckets" that are <= @Limit. If the SUM(Number) of all the rows for a Category value is <= @Limit, then there will be only one bucket for that Category value (like 'CCCC' in the sample data). However, if the SUM(Number) > @Limit, then there will be multiple bucket rows for that Category value (like 'AAAA' in the sample data), and each bucket must be <= @Limit. There can be as many buckets as necessary. Also, look at Category value 'DDDD': its one row is greater than @Limit all by itself, and gets split into two rows in the result set.

    Given this simplified data:

        DECLARE @Detail table (DetailID int primary key, Category char(4), Number int)
        SET NOCOUNT ON
        INSERT @Detail VALUES ( 1, 'AAAA',100)
        INSERT @Detail VALUES ( 2, 'AAAA', 50)
        INSERT @Detail VALUES ( 3, 'AAAA',300)
        INSERT @Detail VALUES ( 4, 'AAAA',200)
        INSERT @Detail VALUES ( 5, 'BBBB',500)
        INSERT @Detail VALUES ( 6, 'CCCC',200)
        INSERT @Detail VALUES ( 7, 'CCCC',100)
        INSERT @Detail VALUES ( 8, 'CCCC', 50)
        INSERT @Detail VALUES ( 9, 'DDDD',800)
        INSERT @Detail VALUES (10, 'EEEE',100)
        SET NOCOUNT OFF

        DECLARE @Limit int
        SET @Limit=500

    I need one of these result sets:

        DetailID Bucket  |  DetailID Category Bucket
        -------- ------  |  -------- -------- ------
        1        1       |   1       'AAAA'   1
        2        1       |   2       'AAAA'   1
        3        1       |   3       'AAAA'   1
        4        2       |   4       'AAAA'   2
        5        3       OR  5       'BBBB'   1
        6        4       |   6       'CCCC'   1
        7        4       |   7       'CCCC'   1
        8        4       |   8       'CCCC'   1
        9        5       |   9       'DDDD'   1
        9        6       |   9       'DDDD'   2
        10       7       |  10       'EEEE'   1

  • UIDs for data objects in MySQL

    - by Callash
    Hi there, I am using C++ and MySQL. I have data objects I want to persist to the database. They need a unique ID for identification purposes. The question is, how do I get this unique ID? Here is what I came up with:

    1. Use the auto_increment feature of MySQL. But how do I get the ID back? I am aware that MySQL offers the "SELECT LAST_INSERT_ID()" feature, but that looks like a race condition, as two objects could be inserted quickly one after the other. Also, there is nothing else that makes the objects discernable: two objects could be created at pretty much the same time with exactly the same data.

    2. Generate the UID on the C++ side. No dice either: there are multiple programs that write to and read from the database, and they do not know of each other.

    3. Insert with MAX(uid)+1 as the uid value. But then I basically have the same problem as in 1), because we still have the race condition.

    Now I am stumped. I am assuming this is a problem other people have run into as well, but so far I have not found any answers. Any ideas?
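
    For what it's worth, LAST_INSERT_ID() is maintained per connection in MySQL, so concurrent inserts from other clients cannot interleave with it; a minimal sketch using the C API (connection setup omitted, table and column names illustrative):

        #include <mysql/mysql.h>

        /* Insert one object and return its auto_increment id.
           mysql_insert_id() reports the id generated by the last INSERT
           on *this* connection, so there is no race with other clients. */
        unsigned long long insert_object(MYSQL *conn)
        {
            mysql_query(conn,
                "INSERT INTO objects (payload) VALUES ('...')");
            return mysql_insert_id(conn);
        }

    The race only appears if several threads share a single connection; in that case each thread needs its own connection, or a lock around the insert-plus-fetch pair.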

  • PHP session token can be used multiple times?

    - by kornesh
    I have page A, which is a normal HTML page, and page B, which is an AJAX response page, and I want to prevent CSRF attacks with tokens. Let's say I use this method for an autocomplete form: is it possible to use the same token multiple times (of course the session is only set one time)? I tried this method, but the validation keeps failing after the first suggestion (obviously the token has changed somehow).

    Page A:

        <?php
        session_start();
        $token = md5(uniqid(rand(), TRUE));
        $_SESSION['token'] = $token;
        ?>
        <input id="token" value="<?php echo $token; ?>" type="hidden">
        <input id="autocomplete" placeholder="Type something">
        ....

    The form is auto-submitted every time there's a change, using jQuery.

    Page B:

        <?php
        session_start();
        if($_REQUEST['token'] == $_SESSION['token']){
            echo 'Im working fine';
        }
        ?>
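
    A session-scoped token can indeed be reused across requests; one sketch of that pattern generates the token only when the session does not already have one (random_bytes() assumes PHP 7+; the original md5(uniqid(...)) works as a fallback on older versions):

        <?php
        // Page A: reuse one token for the whole session
        session_start();
        if (empty($_SESSION['token'])) {
            $_SESSION['token'] = bin2hex(random_bytes(32));
        }
        $token = $_SESSION['token'];
        ?>

        <?php
        // Page B: constant-time comparison of the submitted token
        session_start();
        if (isset($_REQUEST['token'])
            && hash_equals($_SESSION['token'], $_REQUEST['token'])) {
            echo 'Token OK';
        }
        ?>

    If the token keeps changing between requests, the usual suspect is page A re-running on every autocomplete call and overwriting $_SESSION['token'] with a fresh value.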

  • Add Custom Control to DataGridViewCell

    - by Kovscer
    Hello, I have created a custom control inherited from System.Windows.Forms.Control. This is the control's code:

        public partial class MonthEventComponent : Control
        {
            private Color couleur;
            private Label labelEvenement;

            public MonthEventComponent(Color couleur_c, String labelEvenement_c)
            {
                InitializeComponent();
                this.couleur = couleur_c;
                this.labelEvenement.Text = labelEvenement_c;
                this.labelEvenement.ForeColor = couleur;
                this.labelEvenement.BackColor = Color.White;
                this.labelEvenement.TextAlign = ContentAlignment.MiddleLeft;
                this.labelEvenement.Dock = DockStyle.Fill;
                this.Controls.Add(labelEvenement);
            }

            public MonthEventComponent()
            {
                InitializeComponent();
                this.couleur = Color.Black;
                this.labelEvenement = new Label();
                this.labelEvenement.ForeColor = couleur;
                this.labelEvenement.BackColor = Color.White;
                this.labelEvenement.Text = "Evénement Initialiser";
                this.labelEvenement.TextAlign = ContentAlignment.MiddleLeft;
                this.labelEvenement.Dock = DockStyle.Fill;
                this.Controls.Add(labelEvenement);
            }

            protected override void OnClick(EventArgs e)
            {
                base.OnClick(e);
                MessageBox.Show("Click");
            }
        }

    I would like to insert one or more of these controls into a DataGridViewCell, but I don't know how to do it. Thank you in advance for your answer. Best regards. PS: I'm French, so please excuse any language errors.
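
    One common workaround, sketched here: a DataGridView can host arbitrary child controls if you add them to its Controls collection and position them over the target cell; the grid and index names are illustrative, and the control must be repositioned whenever the grid scrolls or resizes:

        // Host a MonthEventComponent over cell (rowIndex, columnIndex).
        var ctl = new MonthEventComponent(Color.Red, "Réunion");
        dataGridView1.Controls.Add(ctl);
        ctl.Bounds = dataGridView1.GetCellDisplayRectangle(
            columnIndex, rowIndex, false);

    For full integration (editing, one control per cell managed by the grid), the heavier route is a custom DataGridViewCell/DataGridViewColumn pair modeled on the framework's DataGridViewButtonCell.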

  • Rails: bi-directional has_many :through relationship

    - by Chris
    I have three models in a Rails application:

    Game represents an instance of a game being played. Player represents an instance of a participant in a game. User represents a registered person who can participate in games.

    Each Game can have many Players, and each User can have many Players (a single person can participate in multiple games at once); but each Player is in precisely one Game, and represents precisely one User. Hence, my relationships are as follows at present:

        class Game
          has_many :players
        end

        class User
          has_many :players
        end

        class Player
          belongs_to :game
          belongs_to :user
        end

    ...where naturally the players table has game_id and user_id columns, but games and users have no foreign keys. I would also like to represent the fact that each Game has many Users playing in it, and each User has many Games in which they are playing. How do I do this? Is it enough to add:

        class Game
          has_many :users, :through => :players
        end

        class User
          has_many :games, :through => :players
        end

  • MS Access PIVOT with User Defined Field

    - by user2535359
    Any of you good souls please help! I need to query the source table shown below (NULLs are blank fields):

        UNUM  Ticket  Overflow
        1     135     NULL
        1     136     NULL
        1     137     NULL
        1     138     NULL
        1     NULL    2b
        2     135     NULL
        2     136     NULL
        2     137     NULL
        3     135     NULL
        3     136     NULL
        3     137     NULL
        3     138     NULL
        3     139     NULL
        3     140     NULL
        3     NULL    66a
        4     NULL    12a
        5     NULL    14a

    I need to generate the output shown below:

        UserNum  Ticket1  Ticket2  Ticket3  Ticket4  Ticket5  Ticket6  Ticket7  Ticket8  Ticket9  Overflow
        1        135      136      137      138      Null     Null     Null     Null     Null     2b
        2        135      136      137      Null     Null     Null     Null     Null     Null     Null
        3        135      136      137      138      139      140      Null     Null     Null     66a
        4        Null     Null     Null     Null     Null     Null     Null     Null     Null     12a
        5        Null     Null     Null     Null     Null     Null     Null     Null     Null     14a

    The source table has multiple tickets assigned to each user, with a maximum of 9 tickets. A user either has tickets or an overflow, and there can be only one overflow per user. I am having trouble pivoting the data in the Ticket column into pre-defined field names like Ticket1, Ticket2, ...
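
    For reference, a sketch of one Access approach (the table name Source is assumed): first number each user's tickets with a correlated subquery, save that as qryRanked, then crosstab it with TRANSFORM ... PIVOT using the fixed column list:

        -- saved as qryRanked: Seq numbers each user's tickets 1..9
        SELECT t.UNUM, t.Ticket, t.Overflow,
               (SELECT COUNT(*) FROM Source AS s
                WHERE s.UNUM = t.UNUM AND s.Ticket <= t.Ticket) AS Seq
        FROM Source AS t;

        -- the crosstab over the ranked rows
        TRANSFORM First(q.Ticket)
        SELECT q.UNUM AS UserNum, Max(q.Overflow) AS Overflow
        FROM qryRanked AS q
        GROUP BY q.UNUM
        PIVOT "Ticket" & q.Seq
          IN ("Ticket1","Ticket2","Ticket3","Ticket4","Ticket5",
              "Ticket6","Ticket7","Ticket8","Ticket9");

    Overflow-only rows get Seq 0 and fall outside the IN list, so they contribute nothing to the ticket columns but still surface the user's Overflow value via Max().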

  • Designing for varying mobile device resolutions, i.e. iPhone 4 & iPhone 3G

    - by Josh
    As the design community moves to designing applications and interfaces for mobile devices, a new problem has arisen: varying screen DPIs. Here's the situation:

    Touch:
    - iPhone 3G/S ~ 160 dpi
    - iPhone 4 ~ 300 dpi
    - iPad ~ 126 dpi
    - Android device @ 480p ~ 200 dpi

    Point / click:
    - Laptop @ 720p ~ 96 dpi
    - Desktop @ 720p ~ 72 dpi

    There is certainly a clear distinction between desktop and mobile, so having two separate front-ends to the same app is logical, especially considering that one is "touch"-based and the other is "point/click"-based. The challenge lies in designing static graphical elements that will scale between, say, 160 dpi and 300+ dpi and still give consistent, clean design across zoom levels. Any thoughts on how to approach this? Here are some scenarios, but each has drawbacks:

    - Design a single set of assets (high resolution), then adjust zoom levels based on detected resolution / device. Drawbacks: the performance cost of the extra code layering, and varying device support for zoom.
    - Develop and optimize multiple variations of image and CSS assets, then hide / show each based on device. Drawbacks: extra work in design and QA.

    Anyone have thoughts or experience on how to deal with this? We should certainly be looking at methods that use / support HTML5 and CSS3.
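
    One CSS3 technique that fits the second scenario, sketched with hypothetical asset names: keep a base image for ~160 dpi screens and swap in a double-resolution version via a resolution media query, so each device downloads only the set it uses:

        .logo {
          background-image: url(logo.png);        /* 100x40 asset */
          background-size: 100px 40px;            /* CSS-pixel size stays fixed */
        }

        @media only screen and (-webkit-min-device-pixel-ratio: 2),
               only screen and (min-resolution: 192dpi) {
          .logo {
            background-image: url(logo@2x.png);   /* 200x80 asset, same box */
          }
        }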

  • What is my error in a map in Java?

    - by amveg
    Hello everyone, I am trying to solve this problem: http://www.cstutoringcenter.com/problems/problems.php?id=4, but I can't figure out why my code doesn't solve it. In the for loop, how can I multiply the values of the letters? What is my error? It always prints just 7, but I want to multiply the values of all the letters. I hope you can help me.

        public class ejercicio3 {
            public static void main(String args[]) {
                Map<Character, Integer> telefono = new HashMap<Character, Integer>();
                telefono.put('A', 2);
                telefono.put('B', 2);
                telefono.put('C', 2);
                telefono.put('D', 3);
                telefono.put('E', 3);
                telefono.put('F', 3);
                telefono.put('G', 4);
                telefono.put('H', 4);
                telefono.put('I', 4);
                telefono.put('J', 5);
                telefono.put('K', 5);
                telefono.put('L', 5);
                telefono.put('M', 6);
                telefono.put('N', 6);
                telefono.put('O', 6);
                telefono.put('P', 7);
                telefono.put('R', 7);
                telefono.put('S', 7);
                telefono.put('T', 8);
                telefono.put('U', 8);
                telefono.put('V', 8);
                telefono.put('W', 9);
                telefono.put('X', 9);
                telefono.put('Y', 9);

                String mensaje = "Practice";
                int producto = 1;
                for (char c : mensaje.toCharArray()) {
                    if (telefono.containsKey(c)) {
                        producto = telefono.get(c) * producto;
                        System.out.println(producto);
                    }
                }
            }
        }
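
    A likely explanation, for reference: the map keys are all upper-case, but "Practice" is mostly lower-case, so only 'P' (value 7) is ever found. Upper-casing the message first lets every letter hit the map; a sketch of the fixed loop:

        for (char c : mensaje.toUpperCase().toCharArray()) {
            Integer digito = telefono.get(c);
            if (digito != null) {
                producto *= digito;
            }
        }
        System.out.println(producto);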

  • Migrate from Oracle to MySQL

    - by Cassy
    Hi all. We ran into serious performance problems with our Oracle database, and we would like to try to migrate to a MySQL-based database (either MySQL directly or, preferably, Infobright). The thing is, we need to let the old and the new system overlap for at least some weeks, if not months, before we actually know whether all features of the new database match our needs. So here is our situation:

    The Oracle database consists of multiple tables with millions of rows each. During the day there are literally thousands of statements, which we cannot stop for a migration. Every morning new data is imported into the Oracle database, replacing some thousands of rows. Copying this process is not a problem, so in theory we could import into both databases in parallel.

    But, and here lies the challenge, for this to work we need an export from the Oracle database with a consistent state from a single day. (We cannot export some tables on Monday and some others on Tuesday, etc.) This means that at least the export should finish in less than one day.

    Our first thought was to dump the schema, but I wasn't able to find a tool to import an Oracle dump file into MySQL. Exporting tables to CSV files might work, but I'm afraid it could take too long. So my question is: what should I do? Is there any tool to import Oracle dump files into MySQL? Does anybody have experience with such a large-scale migration?

    Thanks in advance, Cassy

    PS: Please don't suggest performance optimization techniques for Oracle, we already tried a lot :-)
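
    One note on the consistency requirement, offered as a sketch: Oracle's Data Pump export can pin every table to the same moment with the FLASHBACK_TIME (or FLASHBACK_SCN) parameter, so the dump is transactionally consistent even while the daytime load keeps running; the credentials, schema, and directory object here are illustrative:

        expdp system/secret schemas=APP directory=DP_DIR \
              dumpfile=app.dmp logfile=app.log \
              flashback_time=systimestamp

    The dump format itself is Oracle-proprietary, so the MySQL side would still be fed from per-table extracts (e.g., CSV via SQL*Plus spool or an ETL tool), but the flashback export at least guarantees the single-day snapshot.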

  • Create table class as a singleton

    - by Mark
    I have a class that I use as a table. This class holds an array of 16 row objects, each with six double fields. The values of these rows are set once and never change. Would it be good practice to make this table a singleton? The advantage is that it costs less memory, but the table will be read from multiple threads, so I would have to synchronize my code, which may make the application a bit slower. However, lookups in this table are probably a very small portion of the total code that is executed.

    EDIT: This is my code; are there better ways to do this, or is this good practice? (Removed the synchronized keyword according to recommendations in this question.)

        final class HalfTimeTable {
            private HalfTimeRow[] table = new HalfTimeRow[16];
            private static final HalfTimeTable instance = new HalfTimeTable();

            private HalfTimeTable() {
                if (instance != null) {
                    throw new IllegalStateException("Already instantiated");
                }
                table[0] = new HalfTimeRow(4.0, 1.2599, 0.5050, 1.5, 1.7435, 0.1911);
                table[1] = new HalfTimeRow(8.0, 1.0000, 0.6514, 3.0, 1.3838, 0.4295);
                //etc
            }

            @Override
            @Deprecated
            public Object clone() throws CloneNotSupportedException {
                throw new CloneNotSupportedException();
            }

            public static HalfTimeTable getInstance() {
                return instance;
            }

            public HalfTimeRow getRow(int rownumber) {
                return table[rownumber];
            }
        }
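
    For reference, a sketch of the usual reasoning: if HalfTimeRow itself is immutable (all fields final) and the table is fully populated before the HalfTimeTable reference is published, reads need no synchronization at all, because final-field semantics guarantee visibility after construction. The field names here are illustrative:

        // Immutable row: safe to share across threads with no locking.
        final class HalfTimeRow {
            final double depth, a, b, time, c, d;   // illustrative names

            HalfTimeRow(double depth, double a, double b,
                        double time, double c, double d) {
                this.depth = depth; this.a = a; this.b = b;
                this.time = time;   this.c = c; this.d = d;
            }
        }

    Marking the table field final as well would make the safe-publication guarantee explicit.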

  • c# How to implement a collection of generics

    - by Amy
    I have a worker class that does stuff with a collection of objects. I need each of those objects to have two properties: one of an unknown type and one that is a number. I wanted to use an interface so that I could have multiple item classes that allow other properties but are forced to have the PropA and PropB that the worker class requires. This is the code I have so far, which seemed OK until I tried to use it. A list of MyItem is not allowed to be passed as a list of IItem, even though MyItem implements IItem. This is where I got confused. Also, if possible, it would be great if, when instantiating the worker class, I didn't need to pass in the T; instead it would know what T is based on the type of PropA. Can someone help get me sorted out? Thanks!

        public interface IItem<T>
        {
            T PropA { get; set; }
            decimal PropB { get; set; }
        }

        public class MyItem : IItem<string>
        {
            public string PropA { get; set; }
            public decimal PropB { get; set; }
        }

        public class WorkerClass<T>
        {
            private List<T> _list;

            public WorkerClass(IEnumerable<IItem<T>> items)
            {
                doStuff(items);
            }

            public T ReturnAnItem()
            {
                return _list[0];
            }

            private void doStuff(IEnumerable<IItem<T>> items)
            {
                foreach (IItem<T> item in items)
                {
                    _list.Add(item.PropA);
                }
            }
        }

        public void usage()
        {
            IEnumerable<MyItem> list = GetItems();
            var worker = new WorkerClass<string>(list); //Not Allowed
        }
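
    Two observations that may help, offered as a sketch: on C# 4 and later, IEnumerable<T> is covariant (declared IEnumerable<out T>), so an IEnumerable<MyItem> converts to IEnumerable<IItem<string>> implicitly and the call compiles as written; on C# 3, an explicit cast bridges the gap (and _list needs initializing before use either way):

        using System.Linq;

        IEnumerable<MyItem> list = GetItems();
        // C# 3: widen each element to the interface type explicitly.
        var worker = new WorkerClass<string>(list.Cast<IItem<string>>());

    Inferring T at the constructor is not possible directly, because constructors do not participate in type inference; the usual workaround is a static factory such as Create<T>(IEnumerable<IItem<T>> items), where the compiler deduces T from the argument.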

  • jQuery toggle() with unknown initial state

    - by Jason Morhardt
    I have a project that uses a little image to mark a record as a favorite, on multiple rows in a table. The data gets pulled from a DB, and the image shown depends on whether or not the item is a favorite: one image for a favorite, a different image if not. I want the user to be able to toggle the image and make the record a favorite or not. Here's my code:

        $(function () {
            $('.FavoriteToggle').toggle(
                function () {
                    $(this).find("img").attr({src: "../../images/icons/favorite.png"});
                    var ListText = $(this).find('.FavoriteToggleIcon').attr("title");
                    var ListID = ListText.match(/\d+/);
                    $.ajax({
                        url: "include/AJAX.inc.php",
                        type: "GET",
                        data: "action=favorite&ItemType=0&ItemID=" + ListID,
                        success: function () {}
                    });
                },
                function () {
                    $(this).find("img").attr({src: "../../images/icons/favorite_not.png"});
                    var ListText = $(this).find('.FavoriteToggleIcon').attr("title");
                    var ListID = ListText.match(/\d+/);
                    $.ajax({
                        url: "include/AJAX.inc.php",
                        type: "GET",
                        data: "action=favorite&ItemType=0&ItemID=" + ListID,
                        success: function () {}
                    });
                }
            );
        });

    This works great if the initial state is not a favorite, but you have to double-click to get the image to change if it IS a favorite initially. That causes the AJAX to fire twice, essentially making it a favorite and then not a favorite before the image responds. The user thinks he has made it a favorite because the image changed, but in fact it's not. Help, anybody?
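
    A sketch of one fix: replace the two-function toggle() (which always starts at the first function, regardless of the row's real state) with a single click handler that branches on the image currently shown, so the first click always matches the database state:

        $(function () {
            $('.FavoriteToggle').click(function () {
                var $img = $(this).find('img');
                var isFav = /favorite\.png$/.test($img.attr('src'));
                $img.attr('src', isFav
                    ? '../../images/icons/favorite_not.png'
                    : '../../images/icons/favorite.png');
                var ListID = $(this).find('.FavoriteToggleIcon')
                                    .attr('title').match(/\d+/)[0];
                $.get('include/AJAX.inc.php',
                      {action: 'favorite', ItemType: 0, ItemID: ListID});
            });
        });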

  • Testing Hibernate DAO, without building the universe around it.

    - by Varun Mehta
    We have an application built using Spring/Hibernate/MySQL; now we want to test the DAO layer, but here are a few shortcomings we face. Consider the use case of multiple objects connected to one another, e.g. Book has Pages. The Page object cannot exist without the Book, as book_id is a mandatory FK in Page. So to test a Page, I have to create a Book. This simple use case is easy to manage, but if you start building a Library, then until you create the whole universe surrounding the Book and Page, you cannot test them! So to test Page:

        Create Library
        Create Section
        Create Genre
        Create Author
        Create Book
        Create Page
        Now test Page.

    Is there an easy way to bypass this "universe creation" and just test the Page object in isolation? I also want to be able to test HQL related to Page, e.g.:

        SELECT new com.test.BookPage(book.id, page.name) FROM Book book, Page page

    JUnit is supposed to run in isolation, so I have to write the whole test case to create the Page. Any tips will be useful.
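
    One pattern that keeps this manageable, sketched with illustrative names and assumed constructors: a small test-data builder (sometimes called an "object mother") that creates the minimal valid graph once, so each test states only what it cares about:

        import org.hibernate.Session;

        // Minimal-valid-graph builder for DAO tests.
        final class TestData {
            static Page newPersistedPage(Session session) {
                Book book = new Book("any title");   // satisfies the mandatory FK
                session.save(book);
                Page page = new Page(book, "any content");
                session.save(page);
                return page;
            }
        }

    Combined with a rollback-after-each-test transaction (Spring's AbstractTransactionalJUnit4SpringContextTests does this out of the box), the "universe" shrinks to the mandatory FK chain and nothing else.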

  • Python: Parsing a colon delimited file with various counts of fields

    - by Mark
    I'm trying to parse a few files with the following format in 'clientname'.txt:

        hostname:comp1
        time: Fri Jan 28 20:00:02 GMT 2011
        ip:xxx.xxx.xx.xx
        fs:good:45
        memory:bad:78
        swap:good:34
        Mail:good

    Each line is delimited by a ':', but where lines 0, 2, and 6 have 2 fields, lines 1 and 3-5 have 3 or more. (A big issue I've had trouble with is the time: line, since 20:00:02 is really a time and not 3 separate fields.) I have several files like this to parse, and some have many more lines with multiple fields.

        ...
        for i in clients:
            if os.path.isfile(rpt_path + i + rpt_ext):
                # if the rpt exists then do this
                rpt = rpt_path + i + rpt_ext
                l_count = 0
                for line in open(rpt, "r"):
                    s_line = line.rstrip()
                    part = s_line.split(':')
                    print part
                    l_count = l_count + 1
            else:
                # else break
                break

    First I check whether the file exists; if it does, I open it and parse it (eventually). As of now I'm just printing the output (print part) to make sure it's parsing right. Honestly, the only trouble I'm having at this point is the time: field. How can I treat that line differently from all the others? The time field is ALWAYS the 2nd line in all of my report files.
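
    A sketch of one way to special-case it: split off only the first field with partition(), then decide how to split the rest, so the timestamp's own colons survive intact:

        # assumes s_line is one stripped line from the report
        key, _, value = s_line.partition(':')
        if key == 'time':
            part = [key, value.strip()]       # keep 'Fri Jan 28 20:00:02 GMT 2011' whole
        else:
            part = [key] + value.split(':')   # e.g. ['fs', 'good', '45']
        print part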

  • Java: Efficient Equivalent to Removing while Iterating a Collection

    - by Claudiu
    Hello everyone. We all know you can't do this:

        for (Object i : l)
            if (condition(i))
                l.remove(i);

    ConcurrentModificationException, etc... This apparently works sometimes, but not always. Here's some specific code:

        public static void main(String[] args) {
            Collection<Integer> l = new ArrayList<Integer>();

            for (int i = 0; i < 10; ++i) {
                l.add(new Integer(4));
                l.add(new Integer(5));
                l.add(new Integer(6));
            }

            for (Integer i : l) {
                if (i.intValue() == 5)
                    l.remove(i);
            }

            System.out.println(l);
        }

    This, of course, results in:

        Exception in thread "main" java.util.ConcurrentModificationException

    ...even though multiple threads aren't doing it. Anyway, what's the best solution to this problem? "Best" here means most time- and space-efficient (I realize you can't always have both!). I'm also using an arbitrary Collection here, not necessarily an ArrayList, so you can't rely on get.
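
    For reference, the standard idiom: iterate with an explicit java.util.Iterator and remove through it, which is safe for any Collection, allocates nothing extra, and makes a single pass:

        for (Iterator<Integer> it = l.iterator(); it.hasNext(); ) {
            if (it.next().intValue() == 5) {
                it.remove();   // removal goes through the iterator: no CME
            }
        }

    (One caveat: on an ArrayList each remove() is O(n), so for removal-heavy workloads copying the survivors into a new list can be faster.)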

  • counter_cache not updating on the model after save

    - by sehnsucht
    I am using a counter_cache to let MySQL do some of the bookkeeping for me:

        class Container
          has_many :items
        end

        class Item
          belongs_to :container, :counter_cache => true
        end

    Now, if I do this:

        container = Container.find(57)
        item = Item.new
        item.container = container
        item.save

    in the SQL log there will be an INSERT followed by something like:

        UPDATE `containers`
        SET `items_count` = COALESCE(`items_count`, 0) + 1
        WHERE `containers`.`id` = 57

    which is what I expected it to do. However, container[:items_count] will be stale unless I container.reload to pick up the updated value! Which, to my mind, sort of defeats part of the purpose of using the :counter_cache in favor of a custom-built one, especially since I may not actually want a reload before I try to access the items_count attribute. (My models are pretty code-heavy because of the nature of the domain logic, so I sometimes have to save and create multiple things in one controller call.)

    I understand I can tinker with callbacks myself, but this seems to me a fairly basic expectation of the simple feature. Again, if I have to write additional code to make it fully work, it might as well be easier to implement a custom counter. What am I doing/assuming wrong?

  • How do you deal with breaking changes in a Rails migration?

    - by Adam Lassek
    Let's say I'm starting out with this model:

        class Location < ActiveRecord::Base
          attr_accessible :company_name, :location_name
        end

    Now I want to refactor one of the values into an associated model:

        class CreateCompanies < ActiveRecord::Migration
          def self.up
            create_table :companies do |t|
              t.string :name, :null => false
              t.timestamps
            end
            add_column :locations, :company_id, :integer, :null => false
          end

          def self.down
            drop_table :companies
            remove_column :locations, :company_id
          end
        end

        class Location < ActiveRecord::Base
          attr_accessible :location_name
          belongs_to :company
        end

        class Company < ActiveRecord::Base
          has_many :locations
        end

    This all works fine during development, since I'm doing everything a step at a time; but if I try deploying this to my staging environment, I run into trouble. The problem is that since my code has already changed to reflect the migration, it causes the environment to crash when it attempts to run the migration. Has anyone else dealt with this problem? Am I resigned to splitting my deployment up into multiple steps?
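
    A sketch of one single-deploy approach, with the data backfill folded into the migration itself (the dynamic finder assumes a Rails 2/3-era app): create the new table, populate it from the old column, and only then tighten the constraint:

        def self.up
          create_table :companies do |t|
            t.string :name, :null => false
            t.timestamps
          end
          add_column :locations, :company_id, :integer   # nullable at first
          Location.reset_column_information
          Location.all.each do |loc|
            company = Company.find_or_create_by_name(loc.company_name)
            loc.update_attribute(:company_id, company.id)
          end
          remove_column :locations, :company_name
          change_column :locations, :company_id, :integer, :null => false
        end

    Referencing models inside migrations has its own pitfalls (the class can drift ahead of the schema), so redefining minimal stub models inside the migration file is the usual defensive move.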

  • State in OpenGL

    - by newprogrammer
    This is some simple code that draws to the screen:

        GLuint vbo;
        glGenBuffers(1, &vbo);
        glUseProgram(myProgram);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);

        // Fill up my VBO with vertex data
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertexes), &vertexes, GL_STATIC_DRAW);

        /* Draw to the screen */

    This works fine. However, I tried changing the order of some GL calls like so:

        GLuint vbo;
        glGenBuffers(1, &vbo);
        glUseProgram(myProgram);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);

        // Now comes after the setting of the vertex attributes.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);

        // Fill up my VBO with vertex data
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertexes), &vertexes, GL_STATIC_DRAW);

        /* Draw to the screen */

    This crashes my program. Why does there need to be a VBO bound to GL_ARRAY_BUFFER while I'm just setting up vertex attributes? To me, what glVertexAttribPointer does is just set up the format of vertices that OpenGL will eventually use to draw things. It is not specific to any VBO. Thus, if multiple VBOs wanted to use the same vertex format, you would not need to specify the format again for each VBO.
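
    For reference, a short sketch of why the order matters: glVertexAttribPointer() captures whatever buffer is bound to GL_ARRAY_BUFFER at call time into the attribute's binding; with no buffer bound, the last argument is treated as a pointer into client memory, so address 0 gets dereferenced at draw time:

        /* 1. bind first, so the attribute records this VBO...             */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        /* 2. ...then the final 0 is an offset into vbo, not a raw pointer */
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);

    The "format separate from buffer" model the question asks for arrived later as glVertexAttribFormat/glBindVertexBuffer (ARB_vertex_attrib_binding, core in OpenGL 4.3).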

  • Android - How to decide wether to run a Service in a separate Process?

    - by pableu
    I am working on an Android application that collects sensor data over the course of multiple hours. For that, we have a Service that collects the sensor data (e.g. acceleration, GPS, ...), does some processing, and stores it remotely on a server. Currently, this Service runs in a separate process (using android:process=":background" in the manifest). This complicates the communication between the Activities and the Service, but my predecessors created the application this way because they thought that separating the Service from the Activities would make it more stable. I would like some more factual reasons for the effort of running a separate process. What are the advantages? Does it really run more stably? Is the Service less likely to be killed by the OS (to free up resources) if it's in a separate process? Our application uses startForeground() and friends to minimize the chance of being killed by the OS. The Android docs are not very specific about this; they mostly state that it depends on the application's purpose ;-)

    TL;DR: What are objective reasons to put a long-running Service in a separate process (in Android)?
