Search Results

Search found 9182 results on 368 pages for 'import hooks'.

Page 325/368

  • How to assign a table view controller to a table view on iPhone?

    - by Tattat
    I have a CharTableController, a View and a TableView in my IB file. The CharTableController uses "CharTableController" as its class identity. This is how I implemented the .h; I want the "CharTableController" to control the table only: @interface CharTableController : UITableViewController <UITableViewDelegate, UITableViewDataSource>{ // IBOutlet UILabel *debugLabel; NSArray *listData; } //@property (nonatomic, retain) IBOutlet UILabel *debugLabel; @property (nonatomic, retain) NSArray *listData; @end This is the .m: #import "CharTableController.h" @implementation CharTableController @synthesize listData; - (void)viewDidLoad { //NSLog(@"bulll"); // [debugLabel setText:@"success"]; NSArray *array = [[NSArray alloc] initWithObjects:@"A", @"B", @"C", @"D", @"E", nil]; self.listData = array; [array release]; [super viewDidLoad]; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return [self.listData count]; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *SimpleTableIdentifier = @"SimpleTableIdentifier"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier: SimpleTableIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:SimpleTableIdentifier] autorelease]; NSUInteger row = [indexPath row]; cell.textLabel.text = [listData objectAtIndex:row]; } return cell; } @end In IB, I have already assigned the CharTableController's view to the Table View, and the Table View is under the View. After I run the program I can see the table view, but I can't see any characters in it. Why? What's wrong with my code? Thanks.

    Read the article

  • Loading a native library for an external package in Eclipse not working. Is it a bug?

    - by TacB0sS
    I was about to report a bug to Eclipse, but I wanted to give this a chance here first. If I add an external package, the application cannot find the referenced native library, except in the case described below. If my workspace consists of a single project and I import an external package 'EX_package.jar' from a folder outside the project folder, I can assign a folder as the native library location via: mouse over the package - right click - Properties - Native Library - enter your folder. This does not work: at runtime the application does not load the library, and System.mapLibraryName(Path) also does not work. Furthermore, if I create a User Library, add the package to it and define a folder for the native library, it still does not work. If it works for you then I have a major bug, since it does not work on my computer. I tested this in every combination I could think of, including adding the path to the Windows PATH variable, and so many other ways I can't even start to remember; nothing worked. I played with this for hours and had a colleague try to assist me, but we both came up empty. Furthermore, if I have a main project that depends on a few other projects in my workspace, and they all need to use the same 'EX_package.jar', I MUST supply a hard copy to each of them: it will ONLY work (I can't stress this enough, it freaked me out) if I have a hard copy of the package in ALL of the project folders that the main project depends on, and ONLY if I configure the native path in each of them! Even this didn't do the trick. Please tell me there is a solution to this; it drives me nuts... Thanks, Adam Zehavi.

    Read the article

  • Strange chi-square result using scikit-learn with a feature matrix

    - by user963386
    I am using scikit-learn to calculate the basic chi-square statistic (sklearn.feature_selection.chi2(X, y)): def chi_square(feat,target): """ """ from sklearn.feature_selection import chi2 ch,pval = chi2(feat,target) return ch,pval chisq,p = chi_square(feat_mat,target_sc) print(chisq) print("**********************") print(p) I have 1500 samples, 45 features and 4 classes. The input is a 1500x45 feature matrix and a target array with 1500 components. The feature matrix is not sparse. When I run the program and print the array "chisq" with 45 components, I can see that component 13 has a negative value and p = 1. How is that possible? What does it mean, or what big mistake am I making? I am attaching the printouts of chisq and p: [ 9.17099260e-01 3.77439701e+00 5.35004211e+01 2.17843312e+03 4.27047184e+04 2.23204883e+01 6.49985540e-01 2.02132664e-01 1.57324454e-03 2.16322638e-01 1.85592258e+00 5.70455805e+00 1.34911126e-02 -1.71834753e+01 1.05112366e+00 3.07383691e-01 5.55694752e-02 7.52801686e-01 9.74807972e-01 9.30619466e-02 4.52669897e-02 1.08348058e-01 9.88146259e-03 2.26292358e-01 5.08579194e-02 4.46232554e-02 1.22740419e-02 6.84545170e-02 6.71339545e-03 1.33252061e-02 1.69296016e-02 3.81318236e-02 4.74945604e-02 1.59313146e-01 9.73037448e-03 9.95771327e-03 6.93777954e-02 3.87738690e-02 1.53693158e-01 9.24603716e-04 1.22473138e-01 2.73347277e-01 1.69060817e-02 1.10868365e-02 8.62029628e+00] ********************** [ 8.21299526e-01 2.86878266e-01 1.43400668e-11 0.00000000e+00 0.00000000e+00 5.59436980e-05 8.84899894e-01 9.77244281e-01 9.99983411e-01 9.74912223e-01 6.02841813e-01 1.26903019e-01 9.99584918e-01 1.00000000e+00 7.88884155e-01 9.58633878e-01 9.96573548e-01 8.60719653e-01 8.07347364e-01 9.92656816e-01 9.97473024e-01 9.90817144e-01 9.99739526e-01 9.73237195e-01 9.96995722e-01 9.97526259e-01 9.99639669e-01 9.95333185e-01 9.99853998e-01 9.99592531e-01 9.99417113e-01 9.98042114e-01 9.97286030e-01 9.83873717e-01 9.99745466e-01 9.99736512e-01 9.95239765e-01 9.97992843e-01 9.84693908e-01 9.99992525e-01 9.89010468e-01 9.64960636e-01 9.99418323e-01 9.99690553e-01 3.47893682e-02]
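
    A likely culprit (not stated in the question, just the usual chi2 gotcha) is that sklearn.feature_selection.chi2 expects non-negative feature values such as counts or frequencies; a column containing negative entries can yield a negative statistic and a meaningless p-value. A minimal sketch of a check, reusing the feat_mat/target_sc names from the question and using MinMaxScaler rescaling as one possible workaround:

        import numpy as np
        from sklearn.feature_selection import chi2
        from sklearn.preprocessing import MinMaxScaler

        def safe_chi_square(feat, target):
            # chi2 assumes non-negative feature values (counts/frequencies),
            # so flag any column that violates that assumption.
            feat = np.asarray(feat, dtype=float)
            negative_cols = np.where((feat < 0).any(axis=0))[0]
            if negative_cols.size:
                print("Columns with negative values:", negative_cols)
                # One possible workaround: rescale every feature into [0, 1].
                feat = MinMaxScaler().fit_transform(feat)
            return chi2(feat, target)

        # usage with the question's data: chisq, p = safe_chi_square(feat_mat, target_sc)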

    Read the article

  • TreeMap sort by value

    - by vito huang
    I'm new to Java. I want to write a comparator that will let me sort a TreeMap by value instead of by the default natural ordering. I tried something like this, but can't figure out what went wrong: import java.util.*; class treeMap { public static void main(String[] args) { System.out.println("the main"); byValue cmp = new byValue(); Map<String, Integer> map = new TreeMap<String, Integer>(cmp); map.put("de",10); map.put("ab", 20); map.put("a",5); for (Map.Entry<String,Integer> pair: map.entrySet()) { System.out.println(pair.getKey()+":"+pair.getValue()); } } } class byValue implements Comparator<Map.Entry<String,Integer>> { public int compare(Map.Entry<String,Integer> e1, Map.Entry<String,Integer> e2) { if (e1.getValue() < e2.getValue()){ return 1; } else if (e1.getValue() == e2.getValue()) { return 0; } else { return -1; } } } I guess what I am asking is: what controls what gets passed to the comparator function? Can I get a Map.Entry passed to the comparator?

    Read the article

  • Orphan IBM JVM process

    - by Nicholas Key
    Hi people, I have an issue with an orphan IBM JVM process being created in the process tree. For example: C:\Program Files\IBM\WebSphere\AppServer\bin>wsadmin -lang jython -f "C:\Hello.py" Hello.py has a simple implementation: import time i = 0 while (1): i = i + 1 print "Hello World " + str(i) time.sleep(3.0) My machine has the following JVM information: C:\Program Files\WebSphere\java\bin>java -verbose:sizes -version -Xmca32K RAM class segment increment -Xmco128K ROM class segment increment -Xmns0K initial new space size -Xmnx0K maximum new space size -Xms4M initial memory size -Xmos4M initial old space size -Xmox1624995K maximum old space size -Xmx1624995K memory maximum -Xmr16K remembered set size -Xlp4K large page size available large page sizes: 4K 4M -Xmso256K operating system thread stack size -Xiss2K java thread stack initial size -Xssi16K java thread stack increment -Xss256K java thread stack maximum size java version "1.6.0" Java(TM) SE Runtime Environment (build pwi3260sr6ifix-20091015_01(SR6+152211+155930+156106)) IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Windows Server 2003 x86-32 jvmwi3260sr6-20091001_43491 (JIT enabled, AOT enabled) J9VM - 20091001_043491 JIT - r9_20090902_1330ifx1 GC - 20090817_AA) JCL - 20091006_01 While the program was running, I tried to kill it, and subsequently I found an orphan IBM JVM process in the process tree. Is there a way to fix this issue? Why is there an orphan process in the first place? Is there something wrong with my code? I really don't believe that my simplistic code is implemented incorrectly. Any suggestions?

    Read the article

  • Why can't I include these data files in a Python distribution using distutils?

    - by froadie
    I'm writing a setup.py file for a Python project so that I can distribute it. The aim is to eventually create a .egg file, but I'm trying to get it to work first with distutils and a regular .zip. This is an Eclipse PyDev project and my file structure is something like this: ProjectName src somePackage module1.py module2.py ... config propsFile1.ini propsFile2.ini propsFile3.ini setup.py Here's my setup.py code so far: from distutils.core import setup setup(name='ProjectName', version='1.0', packages=['somePackage'], data_files = [('config', ['..\config\propsFile1.ini', '..\config\propsFile2.ini', '..\config\propsFile3.ini'])] ) When I run this (with sdist as a command line parameter), a .zip file gets generated with all the Python files - but the config files are not included. I thought that this code: data_files = [('config', ['..\config\propsFile1.ini', '..\config\propsFile2.ini', '..\config\propsFile3.ini'])] indicates that those 3 specified config files should be copied to a "config" directory in the zip distribution. Why is this code not accomplishing anything? What am I doing wrong? (I have also tried playing around with the paths of the config files... But nothing seems to help. Would Python throw an error or warning if the path was incorrect / file was not found?)
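
    Two things commonly cause exactly this symptom: data_files paths are interpreted relative to setup.py (so '..\config\...' points outside the tree that sdist packs up), and older distutils releases only pull non-Python files into a source distribution when a MANIFEST.in names them. A hedged sketch, assuming setup.py sits in the project root next to the config folder and the packages live under src:

        # setup.py -- minimal sketch; the layout assumptions are noted above
        from distutils.core import setup

        setup(
            name='ProjectName',
            version='1.0',
            package_dir={'': 'src'},   # assumed: packages live under src/
            packages=['somePackage'],
            # forward slashes, paths relative to setup.py
            data_files=[('config', ['config/propsFile1.ini',
                                    'config/propsFile2.ini',
                                    'config/propsFile3.ini'])],
        )

        # MANIFEST.in (a separate file beside setup.py) tells sdist to ship the .ini files:
        #   include config/*.ini

    If that is what is happening here, it would also explain why no error or warning appeared.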

    Read the article

  • Reading same file from multiple threads in C#

    - by Gustavo Rubio
    Hi. I was googling for some advice about this and found some links. The most obvious was this one, but in the end what I'm wondering is how well my code is implemented. I basically have two classes: one is the Converter and the other is ConverterThread. I create an instance of the Converter class, which has a property ThreadNumber that tells me how many threads should run at the same time (this is read from the user), since this application will be used on multi-CPU systems (physically, like 8 CPUs), so the idea is that this will speed up the import. The Converter instance reads a file that can range from 100 MB to 800 MB, and each line of this file is a tab-delimited record that is imported into another destination such as a database. The ConverterThread class simply runs inside the thread (new Thread(ConverterThread.StartThread)) and has event notification, so when its work is done it can notify the Converter class; I can then sum up the progress for all these threads and notify the user (in the GUI, for example) about how many of the records have been imported and how many bytes have been read. It seems, however, that I'm having some trouble: I get random errors about the file not being readable, or the sum of the progress (percentage) goes above 100%, which is not possible. I think that happens because the threads are not being managed well and the information returned by the event is probably malformed (since it "travels" from one thread to another). Do you have any advice on better thread-implementation practices so I can accomplish this? Thanks in advance.

    Read the article

  • Aspect-Oriented Programming in the OOP world - breaking the rules?

    - by Maksim Kondratyuk
    Hi all! When I worked on an ASP.NET MVC web site project, I investigated different approaches to validation. Some of them were DataAnnotation validation and the Validation Block. They use attributes for setting up validation rules, like this: [Required] public string Name {get;set;} I was confused about how this approach fits with the SRP (single responsibility principle) from the OOP world. I also don't like any business logic in business objects; I prefer a "poor business objects" model, but when I decorate my business objects with validation attributes for real requirements, they become ugly (lots of attributes, localization logic, and so on). The idea behind attributes is really simple, but in my opinion the validation decoration should be separated from the object. I'm not sure whether separating validation rules into XML files or into other objects is the solution. Another downside of AOP is the problem of unit testing such code: when I decorated some controller actions with custom attributes, for example to import/export TempData between actions or to initialize some required services, I couldn't write proper unit tests for those actions. Do you think attributes don't break the SRP, or do you just disregard this and consider it the simplest, not the worst, way? P.S. I have read some related articles and discussions and I just want to put things in proper order. P.P.S. Sorry for my "fluent" English. :=)

    Read the article

  • Difference in regex between Python and Rubular?

    - by Rosarch
    In Rubular, I have created a regular expression: (Prerequisite|Recommended): (\w|-| )* It matches the bolded text: Recommended: good comfort level with computers and some of the arts. Summer. 2 credits. Prerequisite: pre-freshman standing or permission of instructor. Credit may not be applied toward engineering degree. S-U grades only. Here is a use of the regex in Python: note_re = re.compile(r'(Prerequisite|Recommended): (\w|-| )*', re.IGNORECASE) def prereqs_of_note(note): match = note_re.match(note) if not match: return None return match.group(0) Unfortunately, the code returns None instead of a match: >>> import prereqs >>> result = prereqs.prereqs_of_note("Summer. 2 credits. Prerequisite: pre-freshman standing or permission of instructor. Credit may not be applied toward engineering degree. S-U grades only.") >>> print result None What am I doing wrong here?
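
    The likely difference is not the pattern but the call: Python's re.match only matches at the start of the string, while Rubular scans the whole input the way re.search does. A minimal sketch of that one change, reusing the names from the question:

        import re

        note_re = re.compile(r'(Prerequisite|Recommended): (\w|-| )*', re.IGNORECASE)

        def prereqs_of_note(note):
            # re.search scans the whole string; re.match is anchored at position 0.
            match = note_re.search(note)
            if not match:
                return None
            return match.group(0)

        print(prereqs_of_note("Summer. 2 credits. Prerequisite: pre-freshman standing "
                              "or permission of instructor. S-U grades only."))
        # -> Prerequisite: pre-freshman standing or permission of instructor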

    Read the article

  • Confused as to use a class or a function: Writing XML files using lxml and Python

    - by PulpFiction
    Hi. I need to write XML files using lxml and Python. However, I can't figure out whether to use a class or a function to do this. The point is, this is the first time I am developing proper software, and deciding where and why to use a class still seems mysterious. I will illustrate my point. For example, consider the following function-based code I wrote for adding a subelement to an etree root: from lxml import etree root = etree.Element('document') def createSubElement(text, tagText = ""): etree.SubElement(root, text) # How do I do this: element.text = tagText createSubElement('firstChild') createSubElement('SecondChild') As expected, the output of this is: <document> <firstChild/> <SecondChild/> </document> However, as you can see from the comment, I have no idea how to set the text variable using this approach. Is using a class the only way to solve this? And if so, can you give me some pointers on how to achieve it?
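
    For what it's worth, a class isn't required for the commented-out step: etree.SubElement returns the element it creates, so the function can keep that return value and set .text on it. A minimal sketch using the same names as the question:

        from lxml import etree

        root = etree.Element('document')

        def createSubElement(tag, tagText=""):
            # SubElement returns the newly created child element.
            element = etree.SubElement(root, tag)
            if tagText:
                element.text = tagText
            return element

        createSubElement('firstChild', 'some text')
        createSubElement('SecondChild')
        print(etree.tostring(root, pretty_print=True).decode())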

    Read the article

  • RSA_sign and RSACryptoProvider.VerifySignature

    - by Miky D
    I'm trying to get up to speed on how to get some code that uses OpenSSL for cryptography, to play nice with another program that I'm writing in C#, using the Microsoft cryptography providers available in .NET. More to the point, I'm trying to have the C# program verify an RSA message signature generated by the OpenSSL code. The code that generates the signature looks something like this: // Code in C, using the OpenSSL RSA implementation char msgToSign[] = "Hello World"; // the message to be signed char signature[RSA_size(rsa)]; // buffer that will hold signature int slen = 0; // will contain signature size // rsa is an OpenSSL RSA context, that's loaded with the public/private key pair memset(signature, 0, sizeof(signature)); RSA_sign(NID_sha1 , (unsigned char*)msgToSign , strlen(msgToSign) , signature , &slen , rsa); // now signature contains the message signature // and can be verified using the RSA_verify counterpart // .. I would like to verify the signature in C# In C#, I would do the following: import the other side's public key into an RSACryptoServiceProvider object receive the message and it's signature try to verify the signature I've got the first two parts working (I've verified that the public key is loading properly because I managed to send an RSA encrypted text from the C# code to the OpenSSL code in C and successfully have it decrypted) In order to verify the signature in C#, I've tried using the: VerifySignature method of the RSACryptoServiceProvider but that didn't work. And digging around the internet I was only able to find some vague information pointing out that .NET uses a different method for generating the signature than OpenSSL does. So, does anybody know how to accomplish this?

    Read the article

  • Why is the Stream/lazy val implementation faster than the ListBuffer one?

    - by anrizal
    I coded the following implementation of a lazy sieve algorithm using Stream and a lazy val: def primes(): Stream[Int] = { lazy val ps = 2 #:: sieve(3) def sieve(p: Int): Stream[Int] = { p #:: sieve( Stream.from(p + 2, 2). find(i=> ps.takeWhile(j => j * j <= i). forall(i % _ > 0)).get) } ps } and the following implementation using a (mutable) ListBuffer: import scala.collection.mutable.ListBuffer def primes(): Stream[Int] = { def sieve(p: Int, ps: ListBuffer[Int]): Stream[Int] = { p #:: { val nextprime = Stream.from(p + 2, 2). find(i=> ps.takeWhile(j => j * j <= i). forall(i % _ > 0)).get sieve(nextprime, ps += nextprime) } } sieve(3, ListBuffer(3))} When I ran primes().takeWhile(_ < 1000000).size, the first implementation was 3 times faster than the second one. What's the explanation for this? I edited the second version: it should have been sieve(3, ListBuffer(3)) instead of sieve(3, ListBuffer()).

    Read the article

  • iPhone SDK help with minigame

    - by Harry
    Basically, what I've got is a ball-and-bat app where the goal is to achieve as many bounces as you can. It works all right, but there is one problem: when the ball hits the side of the bat it gets thrown off course, as if the frame of the ball is bouncing around inside the frame of the bat. Here is the code in my mainview.m: #import "MainView.h" #define kGameStateRunning 1 @implementation MainView @synthesize paddle, ball; - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [[event allTouches] anyObject]; CGPoint location = [touch locationInView:touch.view]; CGPoint xLocation = CGPointMake(location.x,paddle.center.y); paddle.center = xLocation; } -(IBAction) play { pos = CGPointMake(14.0,7.0); [NSTimer scheduledTimerWithTimeInterval:0.05 target:self selector:@selector(onTimer) userInfo:nil repeats:YES]; } -(void) onTimer { ball.center = CGPointMake(ball.center.x+pos.x,ball.center.y+pos.y); if(ball.center.x > 320 || ball.center.x < 0) pos.x = -pos.x; if(ball.center.y > 460 || ball.center.y < 0) pos.y = -pos.y; [self checkCollision]; } -(void) checkCollision { if(CGRectIntersectsRect(ball.frame,paddle.frame)) { pos.y = -pos.y; } } @end Can anyone work out the problem here? Thanks, Harry

    Read the article

  • Save memory in Python. How to iterate over the lines and save them efficiently with a 2 million line

    - by skyl
    I have a tab-separated data file with a little over 2 million lines and 19 columns. You can find it in US.zip at: http://download.geonames.org/export/dump/. I originally ran the following with for l in f.readlines(), but I understand that just iterating over the file is supposed to be more efficient, so I'm posting that version below. Still, even with this small optimization, I'm using 10% of my memory on the process and have only done about 3% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop? def run(): from geonames.models import POI f = file('data/US.txt') for l in f: li = l.split('\t') try: p = POI() p.geonameid = li[0] p.name = li[1] p.asciiname = li[2] p.alternatenames = li[3] p.point = "POINT(%s %s)" % (li[5], li[4]) p.feature_class = li[6] p.feature_code = li[7] p.country_code = li[8] p.ccs2 = li[9] p.admin1_code = li[10] p.admin2_code = li[11] p.admin3_code = li[12] p.admin4_code = li[13] p.population = li[14] p.elevation = li[15] p.gtopo30 = li[16] p.timezone = li[17] p.modification_date = li[18] p.save() except IndexError: pass if __name__ == "__main__": run()
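
    Two hedged guesses that often explain this exact pattern with a Django-style ORM: running with settings.DEBUG = True makes the connection remember every executed query (so memory grows with each save()), and committing one row per statement keeps the loop slow. A sketch of the memory side, assuming django.db.reset_queries is available as in ordinary Django projects; the column assignments are abbreviated:

        from django.db import reset_queries

        def run():
            from geonames.models import POI
            with open('data/US.txt') as f:
                for n, l in enumerate(f):
                    li = l.rstrip('\n').split('\t')
                    try:
                        p = POI()
                        p.geonameid, p.name, p.asciiname = li[0], li[1], li[2]
                        # ... assign the remaining columns exactly as in the original ...
                        p.save()
                    except IndexError:
                        continue
                    if n % 10000 == 0:
                        # With DEBUG = True, Django stores every query on the
                        # connection; clearing that list keeps memory flat.
                        reset_queries()

    Batching the inserts inside fewer transactions (or handing the file to the database's own bulk loader) is the usual next step for the speed side.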

    Read the article

  • EXPORT AS INSERT STATEMENTS: but in SQL*Plus the line exceeds 2500 characters!

    - by The chicken in the kitchen
    Hello, I have to export an Oracle table as INSERT statements, but some of the generated INSERT statements exceed 2,500 characters. Since I am obliged to execute them in SQL*Plus, I receive an error message. This is my Oracle table: CREATE TABLE SAMPLE_TABLE ( C01 VARCHAR2 (5 BYTE) NOT NULL, C02 NUMBER (10) NOT NULL, C03 NUMBER (5) NOT NULL, C04 NUMBER (5) NOT NULL, C05 VARCHAR2 (20 BYTE) NOT NULL, c06 VARCHAR2 (200 BYTE) NOT NULL, c07 VARCHAR2 (200 BYTE) NOT NULL, c08 NUMBER (5) NOT NULL, c09 NUMBER (10) NOT NULL, c10 VARCHAR2 (80 BYTE), c11 VARCHAR2 (200 BYTE), c12 VARCHAR2 (200 BYTE), c13 VARCHAR2 (4000 BYTE), c14 VARCHAR2 (1 BYTE) DEFAULT 'N' NOT NULL, c15 CHAR (1 BYTE), c16 CHAR (1 BYTE) ); ASSUMPTIONS: a) I am OBLIGED to export the table data as INSERT statements; I am allowed to use UPDATE statements in order to avoid the SQL*Plus error "sp2-0027 input is too long (2499 characters)"; b) I am OBLIGED to use SQL*Plus to execute the generated script; c) please assume that every record can contain special characters: CHR(10), CHR(13), and so on; d) I CAN'T use SQL*Loader; e) I CAN'T export and then import the table: I can only add the "delta" using INSERT / UPDATE statements through SQL*Plus.

    Read the article

  • Is there a way to customize how the value for a custom Model Field is displayed in a template?

    - by Jordan Reiter
    I am storing dates as an integer field in the format YYYYMMDD, where month or day is optional. I have the following function for formatting the number: def flexibledateformat(value): import datetime, re try: value = str(int(value)) except: return None match = re.match(r'(\d{4})(\d\d)(\d\d)$',str(value)) if match: year_val, month_val, day_val = [int(v) for v in match.groups()] if day_val: return datetime.datetime.strftime(datetime.date(year_val,month_val,day_val),'%b %e, %Y') elif month_val: return datetime.datetime.strftime(datetime.date(year_val,month_val,1),'%B %Y') else: return str(year_val) Which results in the following outputs: >>> flexibledateformat(20100415) 'Apr 15, 2010' >>> flexibledateformat(20100400) 'April 2010' >>> flexibledateformat(20100000) '2010' So I'm wondering if there's a function I can add under the model field class that would automatically call flexibledateformat. So if there's a record r = DataRecord(name='foo',date=20100400) when processed in the form the value would be 20100400 but when output in a template using {{ r.date }} it shows up as "April 2010". Further clarification I do normally use datetime for storing date/time values. In this specific case, I need to record non-specific dates: "x happened in 2009", "y happened sometime in June 1996". The easiest way to do this while still preserving most of the functionality of a date field, including sorting and filtering, is by using an integer in the format of yyyymmdd. That is why I am using an IntegerField instead of a DateTimeField. This is what I would like to happen: I store what I call a "Flexible Date" in a FlexibleDateField as an integer with the format yyyymmdd. I render a form that includes a FlexibleDateField, and the value remains an integer so that functions necessary for validating it and rendering it in widgets work correctly. I call it in a template, as in {{ object.flexibledate }} and it is formatted according to the flexibledateformat rules: 20100416 - April 16, 2010; 20100400 - April 2010; 20100000 - 2010. This also applies when I'm not calling it directly, such as when it's used as a header in admin (http://example.org/admin/app_name/model_name/). I'm not aware if these specific things are possible.
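
    One common way to get this formatting in templates without touching the field class is a custom template filter that wraps the existing flexibledateformat function; {{ r.date }} keeps returning the raw integer, and the template writes {{ r.date|flexibledate }} instead. A minimal sketch, with the module paths being assumptions rather than anything from the question:

        # myapp/templatetags/flexdate_tags.py  (hypothetical location)
        from django import template
        from myapp.utils import flexibledateformat  # the function shown above

        register = template.Library()

        @register.filter
        def flexibledate(value):
            """Render 20100415 as 'Apr 15, 2010', 20100400 as 'April 2010', etc."""
            formatted = flexibledateformat(value)
            return formatted if formatted is not None else value

    In a template this becomes {% load flexdate_tags %} followed by {{ r.date|flexibledate }}, which would render 20100400 as "April 2010".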

    Read the article

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either on EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning: typedef union { int intval; float floatval; } IntFloat; unsigned int float_as_int(float fval) { IntFloat intf; intf.floatval = fval; return intf.intval; } // Stores an int of data in whatever storage we're using void StoreInt(unsigned int data, unsigned int address); void StoreFPVal(float data, unsigned int address) { StoreInt(float_as_int(data), address); } I also include default values as an array of compile time constants. For (unsigned) integer values this is trivial, I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array: import struct hex(struct.unpack("I", struct.pack("f", float_value))[0]) ...and so my array of defaults has these indecipherable values like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 0x3C83126F, // Some default float value, 0.005 } (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like: const unsigned int DEFAULTS[] = { 0x00000001, // Some default integer value, 1 COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005 } ...but I'm completely at a loss, and I don't even know if such a thing is possible. Notes Obviously "no, it isn't possible" is an acceptable answer if true. I'm not overly concerned about portability, so implementation defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me). As far as I'm aware, this needs to be a compile time conversion, since DEFAULTS is in the global scope. Please correct me if I'm wrong about this.
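
    Whether a pure-C compile-time conversion exists is a separate question, but since the workflow already leans on a Python one-liner, one hedged alternative is to generate the whole DEFAULTS initializer (hex word plus the explanatory comment) from a small script during the build. A sketch of such a generator; the helper names are made up for illustration:

        import struct

        def float_word(value):
            # Reinterpret the IEEE-754 single-precision bits as an unsigned 32-bit int.
            return struct.unpack("I", struct.pack("f", value))[0]

        def defaults_initializer(entries):
            """entries: iterable of (value, is_float, comment) -> C array definition."""
            lines = []
            for value, is_float, comment in entries:
                word = float_word(value) if is_float else int(value)
                lines.append("    0x%08X, // %s" % (word, comment))
            return "const unsigned int DEFAULTS[] = {\n%s\n};" % "\n".join(lines)

        print(defaults_initializer([
            (1,   False, "Some default integer value, 1"),
            (0.5, True,  "Some default float value, 0.5"),
        ]))
        # the float entry prints as: 0x3F000000, // Some default float value, 0.5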

    Read the article

  • Binary files printing and desired precision

    - by yCalleecharan
    Hi, I'm printing a variable say z1 which is a 1-D array containing floating point numbers to a text file so that I can import into Matlab or GNUPlot for plotting. I've heard that binary files (.dat) are smaller than .txt files. The definition that I currently use for printing to a .txt file is: void create_out_file(const char *file_name, const long double *z1, size_t z_size){ FILE *out; size_t i; if((out = _fsopen(file_name, "w+", _SH_DENYWR)) == NULL){ fprintf(stderr, "***> Open error on output file %s", file_name); exit(-1); } for(i = 0; i < z_size; i++) fprintf(out, "%.16Le\n", z1[i]); fclose(out); } I have three questions: Are binary files really more compact than text files?; If yes, I would like to know how to modify the above code so that I can print the values of the array z1 to a binary file. I've read that fprintf has to be replaced with fwrite. My output file say dodo.dat should contain the values of array z1 with one floating number per line. I have %.16Le up in my code but I think that %.15Le is right as I have 15 precision digits with long double. I have put a dot (.) in the width position as I believe that this allows expansion to an arbitrary field to hold the desired number. Am I right? As an example with %.16Le, I can have an output like 1.0047914240730432e-002 which gives me 16 precision digits and the width of the field has the right width to display the number correctly. Is placing a dot (.) in the width position instead of a width value a good practice? Thanks a lot...

    Read the article

  • emacs: Inferior-mode python-shell appears "lagged"

    - by Begbie00
    Hi all - I'm a Python(3.1.2)/emacs(23.2) newbie teaching myself tkinter using the pythonware tutorial found here. Relevant code is pasted below the question. Question: when I click the Hello button (which should call the say_hi function) why does the inferior python shell (i.e. the one I kicked off with C-c C-c) wait to execute the say_hi print function until I either a) click the Quit button or b) close the root widget down? When I try the same in IDLE, each click of the Hello button produces an immediate print in the IDLE python shell, even before I click Quit or close the root widget. Is there some quirk in the way emacs runs the Python shell (vs. IDLE) that causes this "lagged" behavior? I've noticed similar emacs lags vs. IDLE as I've worked through Project Euler problems, but this is the clearest example I've seen yet. FYI: I use python.el and have a relatively clean init.el... (setq python-python-command "d:/bin/python31/python") is the only line in my init.el. Thanks, Mike === Begin Code=== from tkinter import * class App: def __init__(self,master): frame = Frame(master) frame.pack() self.button = Button(frame, text="QUIT", fg="red", command=frame.quit) self.button.pack(side=LEFT) self.hi_there = Button(frame, text="Hello", command=self.say_hi) self.hi_there.pack(side=LEFT) def say_hi(self): print("hi there, everyone!") root = Tk() app = App(root) root.mainloop()
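
    A guess rather than a confirmed diagnosis: the classic cause of this symptom is stdout buffering in the inferior shell, where output only appears once the Tk mainloop returns, so flushing explicitly after each print usually makes the button respond immediately. A minimal sketch of that one change, condensed from the question's code:

        import sys
        from tkinter import *

        class App:
            def __init__(self, master):
                frame = Frame(master)
                frame.pack()
                Button(frame, text="QUIT", fg="red", command=frame.quit).pack(side=LEFT)
                Button(frame, text="Hello", command=self.say_hi).pack(side=LEFT)

            def say_hi(self):
                print("hi there, everyone!")
                sys.stdout.flush()  # push the text to the shell now, not at exit

        root = Tk()
        app = App(root)
        root.mainloop()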

    Read the article

  • python mongokit Connection() AssertionError

    - by zalew
    just installed mongokit and can't figure out why I get AssertionError python console: >>> from mongokit import Connection >>> c = Connection() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python2.6/dist-packages/mongokit-0.5.3-py2.6.egg/mongokit/connection.py", line 35, in __init__ super(Connection, self).__init__(*args, **kwargs) File "build/bdist.linux-i686/egg/pymongo/connection.py", line 169, in __init__ File "build/bdist.linux-i686/egg/pymongo/connection.py", line 338, in __find_master File "build/bdist.linux-i686/egg/pymongo/connection.py", line 226, in __master File "build/bdist.linux-i686/egg/pymongo/database.py", line 220, in command File "build/bdist.linux-i686/egg/pymongo/collection.py", line 356, in find_one File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 485, in next File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 461, in _refresh File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 429, in __send_message File "build/bdist.linux-i686/egg/pymongo/helpers.py", line 98, in _unpack_response AssertionError >>> mongodb console: Wed Mar 31 10:27:34 connection accepted from 127.0.0.1:60480 #30 Wed Mar 31 10:27:34 end connection 127.0.0.1:60480 db 1.5 pymongo 1.5 (tested also on 1.4.) mongokit 0.5.3 (also 0.5.2)

    Read the article

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PyML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yields the following error message: pickle.PicklingError: Can't pickle : it's not found as __builtin__.PySwigObject Does anyone know of a way around this or of an alternative library without this problem? Thanks. Edit/Update I am now training and attempting to save the classifier with the following code: mc = multi.OneAgainstRest(SVM()); mc.train(dataset_pyml,saveSpace=False); for i, classifier in enumerate(mc.classifiers): filename=os.path.join(prefix,labels[i]+".svm"); classifier.save(filename); Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still getting an error: ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False) However, I am passing saveSpace=False... so, how do I save the classifier(s)? P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"... that will get you this error. Also, a note on version information: [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. import PyML print PyML.__version__ 0.7.0

    Read the article

  • Image drawing library for Haskell?

    - by absz
    I'm working on a Haskell program for playing spatial games: I have a graph of a bunch of "individuals" playing the Prisoner's Dilemma, but only with their immediate neighbors, and copying the strategies of the people who do best. I've reached a point where I need to draw an image of the world, and this is where I've hit problems. Two of the possible geometries are easy: if people have four or eight neighbors each, then I represent each one as a filled square (with color corresponding to strategy) and tile the plane with these. However, I also have a situation where people have six neighbors (hexagons) or three neighbors (triangles). My question, then, is: what's a good Haskell library for creating images and drawing shapes on them? I'd prefer that it create PNGs, but I'm not incredibly picky. I was originally using Graphics.GD, but it only exports bindings to functions for drawing points, lines, arcs, ellipses, and non-rotated rectangles, which is not sufficient for my purposes (unless I want to draw hexagons pixel by pixel*). I looked into using foreign import, but it's proving a bit of a hassle (partly because the polygon-drawing function requires an array of gdPoint structs), and given that my requirements may grow, it would be nice to use an in-Haskell solution and not have to muck about with the FFI (though if push comes to shove, I'm willing to do that). Any suggestions? * That is also an option, actually; any tips on how to do that would also be appreciated, though I think a library would be easier.

    Read the article

  • wxpython: adding panel to wx.Frame disables/conflicts with wx.Frame's OnPaint?!

    - by sdaau
    Hi all, I just encountered this strange situation: I found an example where wx.Frame's OnPaint is overridden and a circle is drawn. Funnily enough, as soon as I add even a single panel to the frame, the circle is not drawn anymore - in fact, OnPaint is not called at all anymore! Can anyone explain to me whether this is the expected behavior, and how to correctly handle a wx.Frame's OnPaint if the wx.Frame has child panels? A small code example is below. Thanks in advance for any answers. Cheers! The code: #!/usr/bin/env python # http://www.linuxquestions.org/questions/programming-9/wxwidgets-wxpython-drawing-problems-with-onpaint-event-703946/ import wx class MainWindow(wx.Frame): def __init__(self, parent, title, size=wx.DefaultSize): wx.Frame.__init__(self, parent, wx.ID_ANY, title, wx.DefaultPosition, size) self.circles = list() self.displaceX = 30 self.displaceY = 30 circlePos = (self.displaceX, self.displaceY) self.circles.append(circlePos) ## uncommenting either only first, or both of ## the commands below, causes OnPaint *not* to be called anymore! #~ self.panel = wx.Panel(self, wx.ID_ANY) #~ self.mpanelA = wx.Panel(self.panel, -1, size=(200,50)) self.Bind(wx.EVT_PAINT, self.OnPaint) def OnPaint(self, e): print "OnPaint called" dc = wx.PaintDC(self) dc.SetPen(wx.Pen(wx.BLUE)) dc.SetBrush(wx.Brush(wx.BLUE)) # Go through the list of circles to draw all of them for circle in self.circles: dc.DrawCircle(circle[0], circle[1], 10) def main(): app = wx.App() win = MainWindow(None, "Draw delayed circles", size=(620,460)) win.Show() app.MainLoop() if __name__ == "__main__": main()
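
    The usual explanation, offered here as a hedge rather than a verdict: a wx.Panel that fills the frame covers the frame's client area, so the frame itself no longer receives paint events for the visible region. Binding EVT_PAINT to the panel and drawing on the panel is the common fix; a minimal sketch:

        import wx

        class MainWindow(wx.Frame):
            def __init__(self, parent, title, size=wx.DefaultSize):
                wx.Frame.__init__(self, parent, wx.ID_ANY, title, wx.DefaultPosition, size)
                self.circles = [(30, 30)]
                self.panel = wx.Panel(self, wx.ID_ANY)
                # Paint on the panel that now covers the frame's client area.
                self.panel.Bind(wx.EVT_PAINT, self.OnPaint)

            def OnPaint(self, event):
                dc = wx.PaintDC(self.panel)
                dc.SetPen(wx.Pen(wx.BLUE))
                dc.SetBrush(wx.Brush(wx.BLUE))
                for x, y in self.circles:
                    dc.DrawCircle(x, y, 10)

        if __name__ == "__main__":
            app = wx.App()
            MainWindow(None, "Draw circles on the panel", size=(620, 460)).Show()
            app.MainLoop()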

    Read the article

  • preparing SMS app for Android KitKat

    - by Pmsc
    Following the recent post from Android Developers http://android-developers.blogspot.pt/2013/10/getting-your-sms-apps-ready-for-kitkat.html , I was trying to prepare my app for the new Android version, but I encountered a problem with the part where they suggest creating a dialog that lets the user set the app as the default application for handling SMS messages: Android Developers post public class ComposeSmsActivity extends Activity { @Override protected void onResume() { super.onResume(); final String myPackageName = getPackageName(); if (!Telephony.Sms.getDefaultSmsPackage(this).equals(myPackageName)) { // App is not default. // Show the "not currently set as the default SMS app" interface View viewGroup = findViewById(R.id.not_default_app); viewGroup.setVisibility(View.VISIBLE); // Set up a button that allows the user to change the default SMS app Button button = (Button) findViewById(R.id.change_default_app); button.setOnClickListener(new View.OnClickListener() { public void onClick(View v) { Intent intent = new Intent(Telephony.Sms.Intents.ACTION_CHANGE_DEFAULT); intent.putExtra(Telephony.Sms.Intents.EXTRA_PACKAGE_NAME, myPackageName); startActivity(intent); } }); } else { // App is the default. // Hide the "not currently set as the default SMS app" interface View viewGroup = findViewById(R.id.not_default_app); viewGroup.setVisibility(View.GONE); } } } The code itself is pretty much straightforward, but I'm unable to access Telephony.Sms.getDefaultSmsPackage because it says that Telephony cannot be resolved, and I can't find any import or declaration that would fix that. Can anyone please help? Thanks in advance.

    Read the article

  • How to test for existence of a script-scoped variable in PowerShell?

    - by Damian Powell
    Is it possible to test for the existence of a script-scoped variable in PowerShell? I've been using the PowerShell Community Extensions (PSCX) but I've noticed that if you import the module while Set-PSDebug -Strict is set, an error is produced: The variable '$SCRIPT:helpCache' cannot be retrieved because it has not been set. At C:\Users\...\Modules\Pscx\Modules\GetHelp\Pscx.GetHelp.psm1:5 char:24 While investigating how I might fix this, I found this piece of code in Pscx.GetHelp.psm1: #requires -version 2.0 param([string[]]$PreCacheList) if ((!$SCRIPT:helpCache) -or $RefreshCache) { $SCRIPT:helpCache = @{} } This is pretty straightforward code; if the cache doesn't exist or needs to be refreshed, create a new, empty cache. The problem is that calling $SCRIPT:helpCache while Set-PSDebug -Strict is in force causes the error, because the variable hasn't been defined yet. Ideally, we could use a Test-Variable cmdlet but such a thing doesn't exist! I thought about looking in the variable: provider but I don't know how to determine the scope of a variable. So my question is: how can I test for the existence of a variable while Set-PSDebug -Strict is in force, without causing an error?

    Read the article
