Search Results

Search found 20890 results on 836 pages for 'self reference'.


  • With sqlalchemy how to dynamically bind to database engine on a per-request basis

    - by Peter Hansen
    I have a Pylons-based web application which connects via SQLAlchemy (v0.5) to a Postgres database. For security, rather than follow the typical pattern of simple web apps (as seen in just about all tutorials), I'm not using a generic Postgres user (e.g. "webapp") but am requiring that users enter their own Postgres userid and password, and am using that to establish the connection. That means we get the full benefit of Postgres security.

    Complicating things still further, there are two separate databases to connect to. Although they're currently in the same Postgres cluster, they need to be able to move to separate hosts at a later date.

    We're using SQLAlchemy's declarative package, though I can't see that this has any bearing on the matter. Most examples of SQLAlchemy show trivial approaches such as setting up the Metadata once, at application startup, with a generic database userid and password, which is used throughout the web application. This is usually done with Metadata.bind = create_engine(), sometimes even at module level in the database model files.

    My question is: how can we defer establishing the connections until the user has logged in, and then (of course) re-use those connections, or re-establish them using the same credentials, for each subsequent request?

    We have this working -- we think -- but I'm not only uncertain of its safety, I also think it looks incredibly heavyweight for the situation. Inside the __call__ method of the BaseController we retrieve the userid and password from the web session, call create_engine() once for each database, then call a routine which calls Session.bind_mapper() repeatedly, once for each table that may be referenced on each of those connections, even though any given request usually references only one or two tables. It looks something like this:

        # in lib/base.py on the BaseController class
        def __call__(self, environ, start_response):
            # note: web session contains {'username': XXX, 'password': YYY}
            url1 = 'postgres://%(username)s:%(password)s@server1/finance' % session
            url2 = 'postgres://%(username)s:%(password)s@server2/staff' % session
            finance = create_engine(url1)
            staff = create_engine(url2)
            db_configure(staff, finance)  # see below
            # ... etc

        # in another file
        Session = scoped_session(sessionmaker())

        def db_configure(staff, finance):
            s = Session()
            from db.finance import Employee, Customer, Invoice
            for c in [Employee, Customer, Invoice]:
                s.bind_mapper(c, finance)
            from db.staff import Project, Hour
            for c in [Project, Hour]:
                s.bind_mapper(c, staff)
            s.close()  # prevents leaking connections between sessions?

    So the create_engine() calls occur on every request... I can see that being needed, and the connection pool probably caches them and does things sensibly. But calling Session.bind_mapper() once for each table, on every request? Seems like there has to be a better way.

    Obviously, since a desire for strong security underlies all this, we don't want any chance that a connection established for a high-security user will inadvertently be used in a later request by a low-security user.
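    One lighter-weight shape for this -- a sketch, not verified against SQLAlchemy 0.5 specifically -- is to cache one engine per (user, database) pair and hand the whole class-to-engine map to the session in a single call via the binds parameter that Session/sessionmaker accept, instead of calling bind_mapper() per table on every request:

        # Hypothetical sketch: engine cache plus a per-request session built
        # with a 'binds' map; the names mirror the question, not a real codebase.
        from sqlalchemy import create_engine
        from sqlalchemy.orm import sessionmaker

        from db.finance import Employee, Customer, Invoice
        from db.staff import Project, Hour

        Session = sessionmaker()
        _engines = {}  # {(username, url_template): Engine}

        def engine_for(username, password, url_template):
            key = (username, url_template)
            if key not in _engines:
                _engines[key] = create_engine(
                    url_template % {'username': username, 'password': password})
            return _engines[key]

        def session_for(username, password):
            finance = engine_for(username, password,
                'postgres://%(username)s:%(password)s@server1/finance')
            staff = engine_for(username, password,
                'postgres://%(username)s:%(password)s@server2/staff')
            return Session(binds={Employee: finance, Customer: finance,
                                  Invoice: finance, Project: staff, Hour: staff})

    Because each engine is keyed by the user's credentials and each request gets its own session, a high-security user's pooled connections are never handed to a low-security user; the cost is one connection pool per user rather than one per application.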

    Read the article

  • Determining what frequencies correspond to the x axis in aurioTouch sample application

    - by eagle
    I'm looking at the aurioTouch sample application for the iPhone SDK. It has a basic spectrum analyzer implemented when you choose the "FFT" option. One of the things the app is lacking is X axis labels (i.e. the frequency labels).

    In the aurioTouchAppDelegate.mm file, in the function - (void)drawOscilloscope at line 652, it has the following code:

        if (displayMode == aurioTouchDisplayModeOscilloscopeFFT)
        {
            if (fftBufferManager->HasNewAudioData())
            {
                if (fftBufferManager->ComputeFFT(l_fftData))
                    [self setFFTData:l_fftData length:fftBufferManager->GetNumberFrames() / 2];
                else
                    hasNewFFTData = NO;
            }

            if (hasNewFFTData)
            {
                int y, maxY;
                maxY = drawBufferLen;
                for (y = 0; y < maxY; y++)
                {
                    CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1);
                    CGFloat fftIdx = yFract * ((CGFloat)fftLength);

                    double fftIdx_i, fftIdx_f;
                    fftIdx_f = modf(fftIdx, &fftIdx_i);

                    SInt8 fft_l, fft_r;
                    CGFloat fft_l_fl, fft_r_fl;
                    CGFloat interpVal;

                    fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24;
                    fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24;
                    fft_l_fl = (CGFloat)(fft_l + 80) / 64.;
                    fft_r_fl = (CGFloat)(fft_r + 80) / 64.;
                    interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f;
                    interpVal = CLAMP(0., interpVal, 1.);
                    drawBuffers[0][y] = (interpVal * 120);
                }
                cycleOscilloscopeLines();
            }
        }

    From my understanding, this part of the code is what is used to decide which magnitude to draw for each frequency in the UI. My question is how I can determine what frequency each iteration (or y value) represents inside the for loop. For example, if I want to know the magnitude for 6 kHz, I'm thinking of adding a line similar to the following:

        if (yValueRepresentskHz(y, 6))
            NSLog(@"The magnitude for 6kHz is %f", (interpVal * 120));

    Please note that although they chose to use the variable name y, from what I understand it actually represents the x-axis in the visual graph of the spectrum analyzer, and the value of drawBuffers[0][y] represents the y-axis.
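    For reference, bin k of an FFT corresponds to the frequency k * sampleRate / fftSize, with usable bins running from 0 up to fftSize / 2 (the Nyquist frequency). A sketch of the mapping -- the sample rate and FFT size here are assumptions; read the real values from the audio unit and the FFT setup at runtime:

        # Map FFT bin indices to frequencies; constants are assumed, not
        # taken from aurioTouch itself.
        SAMPLE_RATE = 44100.0  # Hz, assumed
        FFT_SIZE = 1024        # frames fed to ComputeFFT, assumed

        def bin_to_frequency(k):
            """Center frequency (Hz) of bin k, valid for 0 <= k <= FFT_SIZE / 2."""
            return k * SAMPLE_RATE / FFT_SIZE

        def pixel_to_frequency(y, max_y, fft_length):
            """Frequency drawn at pixel y, mirroring yFract * fftLength above."""
            y_fract = y / float(max_y - 1)
            return bin_to_frequency(y_fract * fft_length)

        def nearest_pixel(freq_hz, max_y, fft_length):
            """Pixel whose bin is closest to freq_hz -- e.g. 6000.0 for 6 kHz."""
            k = freq_hz * FFT_SIZE / SAMPLE_RATE
            return int(round(k / fft_length * (max_y - 1)))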

    Read the article

  • How to make and use first object in Objective C to store sender tag

    - by dbonneville
    I started with a sample program that simply sets up a UIView with some buttons on it. Works great. I want to access the sender tag from each button and save it. My first thought was to save it to a global variable, but I couldn't get that to work right. I thought the best way would be to create an object with one property and synthesize it so I can get or set it as needed.

    First, I created GlobalGem.h:

        #import <Foundation/Foundation.h>

        @interface GlobalGem : NSObject {
            int gemTag;
        }
        @property int gemTag;
        - (void) print;
        @end

    Then I created GlobalGem.m:

        #import "GlobalGem.h"

        @implementation GlobalGem
        @synthesize gemTag;

        - (void) print {
            NSLog(@"gemTag value is %i", gemTag);
        }
        @end

    The sample code I worked from doesn't do anything I can see in "main" where the program initializes. They just have their methods and are set up to handle IBOutlet actions. I want to use this object in the IBOutlet methods, but I don't know where to create it. In viewDidLoad, I tried this:

        GlobalGem *aGem = [[GlobalGem alloc] init];
        [aGem print];

    I added a reference to the .h file, of course. The print method works: in the NSLog I get "gemTag value is 0". But now I want to use the instance aGem in some of the IBOutlet actions. For instance, if I call this same method in one of the outlet actions like this:

        - (IBAction)touchButton:(id)sender {
            [aGem print];
        }

    ...I get a build error saying that "aGem is undeclared". Yes, I'm new to Objective-C and am very confused. Is the instance I created, aGem, in viewDidLoad not accessible outside of that method? How would I create an instance of a class in which I can store a variable value that any of the IBOutlet methods can access? All the buttons need to be able to store their sender tag in the instance aGem and I'm stuck. As I mentioned earlier, I was able to write the sender tag to a global variable I declared in the UIView's .h file that came with the program, but I ran into issues using it. Where am I off?
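    The error comes down to scope: a variable created inside viewDidLoad is local to that method and disappears when it returns, so the usual fix is to declare aGem as an instance variable (or property) of the view controller in its .h and assign it in viewDidLoad. The same distinction, illustrated in Python since the problem is scoping rather than anything Objective-C-specific:

        # Local vs. instance scope -- an analogy, not the Objective-C fix itself.
        class GlobalGem:
            def __init__(self):
                self.gem_tag = 0

            def print_tag(self):
                print("gemTag value is %i" % self.gem_tag)

        class Controller:
            def view_did_load_wrong(self):
                a_gem = GlobalGem()       # local: gone when the method returns

            def view_did_load_right(self):
                self.a_gem = GlobalGem()  # instance variable: shared by methods

            def touch_button(self, sender_tag):
                self.a_gem.gem_tag = sender_tag  # works with the instance form
                self.a_gem.print_tag()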

    Read the article

  • Loading .png image from array of uint8_t into OpenGL ES texture

    - by unknownthreat
    Normally, when we want to load a texture for OpenGL ES from a .png, we simply add the .png images to Xcode. The .png files will be altered for optimization by Xcode, and these altered .png files can be loaded into an OpenGL ES texture at runtime.

    However, what I am trying to do is quite different. I am trying to load a .png file that does not come from the prebuild/compile: the .png will be transmitted externally over UDP, and it will arrive in the form of an array of bytes. I am very sure that the .png is transferred correctly, but when it comes to displaying it as an OpenGL ES texture, the image somehow shows incorrectly. The colors that are being sent are present, but their positions are very wrong -- although the positions still retain some aspects of the original layout. The left image shows the original .png, while the right shows the .png displayed on the iPhone using an OpenGL ES texture. It looks as if the .png data is not being decoded, or is being incorrectly processed.

    Below is the OpenGL ES code for turning the image into a texture:

        - (void) setTextureFromImageByte: (uint8_t*)imageByte {
            if (self = [super init]) {
                NSData* imageData = [[NSData alloc] initWithBytes: imageByte length: imageLength];
                UIImage* img = [[UIImage alloc] initWithData: imageData];
                CGImageRef image = img.CGImage;
                int width = 512;
                int height = 512;

                if (image) {
                    int tempWidth = (int)width, tempHeight = (int)height;
                    if ((tempWidth & (tempWidth - 1)) != 0) {
                        NSLog(@"CAUTION! width is not power of 2. width == %d", tempWidth);
                    } else if ((tempHeight & (tempHeight - 1)) != 0) {
                        NSLog(@"CAUTION! height is not power of 2. height == %d", tempHeight);
                    } else {
                        void *spriteData = calloc(width * 4, height * 4);
                        CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8,
                            width * 4, CGImageGetColorSpace(image), kCGImageAlphaPremultipliedLast);
                        CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, width, height), image);
                        CGContextRelease(spriteContext);

                        glBindTexture(GL_TEXTURE_2D, 1);
                        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 435, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
                        free(spriteData);
                    }
                } else NSLog(@"ERROR: Image not loaded...");

                [img release];
                [imageData release];
            }
        }

    So does anyone know how to deal with this? Is it because the iPhone only accepts the altered .png from Xcode? What can we do in this case in order to make the .png image display correctly?
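    Two things worth checking here. First, the scrambled-positions symptom is characteristic of a row-stride mismatch: the bitmap context is 512 pixels wide (rows of width * 4 = 2048 bytes), but glTexSubImage2D uploads a 320x435 rectangle and reads its input as tightly packed 320-pixel rows -- and glTexSubImage2D also assumes the texture was first allocated with glTexImage2D. Second, Xcode's PNG optimization only applies to files compiled into the bundle, so the received bytes can be sanity-checked with an ordinary PNG decoder. A desktop-side sketch using Pillow (the file name is illustrative):

        # Decode the transmitted bytes with a standard PNG decoder; if this
        # image looks right, the bytes are fine and the bug is on the
        # decode/upload side in the app.
        import io
        from PIL import Image

        def decode_png_bytes(png_bytes):
            img = Image.open(io.BytesIO(png_bytes)).convert("RGBA")
            width, height = img.size
            pixels = img.tobytes()  # tightly packed, width * 4 bytes per row
            assert len(pixels) == width * height * 4
            return width, height, pixels

        with open("received.png", "rb") as fh:
            w, h, pixels = decode_png_bytes(fh.read())
        print(w, h, len(pixels))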

    Read the article

  • Suggest the best options for designing a dynamic web interface using PHP, MySQL and AJAX

    - by Krishna
    Hello, I am designing a web interface for a company. Here is the company's profile: the company currently has 5 branches and is planning to extend its branches all over the country. It is an insurance surveying company dealing with 6 categories in the insurance domain:

        Engineering
        Fire
        Marine
        Motor
        Miscellaneous
        Risk Inspection

    The branches are named b1, b2, b3, b4, b5, and extending. Finally, they have contracts with 22 companies. For each claim they assign a unique ID, like contractcompany/category/serialno. For example, take contracted companies named xxx, sss, zzz:

        xxx/Engineering/001
        sss/Engineering/001
        xxx/Engineering/002
        sss/Engineering/002
        xxx/Fire/001
        sss/Fire/001
        xxx/Fire/002
        ...

    and so on. This is the way they issue the unique ID for each claim.

    Finally, what I want is to develop the interface with PHP, MySQL and AJAX:

    - auto-generate the unique ID for each claim;
    - store full details of the claims with reference to the unique ID;
    - show all claims on one page, viewable by branch and by category;
    - send a monthly report (all claims they have been given and the status of the claims) to contracted companies;
    - give access to contracted companies, but so they can view only their respective claims;
    - each claim has its own documents, which can be uploaded by the company's own users or the administrator; the files are associated with the unique ID, and contracted companies can view the files;
    - give access to branches to enter new claims and update old claims;
    - the administrator can create, update and delete all the claims and their details;
    - only the administrator can grant new users (own-company branches / contracted companies).

    Finally, the panel is completely database-driven. Could anybody help? Thanks in advance. Kindly do the needful and oblige. Thanks and Regards, Krishna. P, [email protected]

    Read the article

  • AVFoundation buffer comparison to a saved image

    - by user577552
    Hi, I am a long-time reader, first-time poster on StackOverflow, and must say it has been a great source of knowledge for me. I am trying to get to know the AVFoundation framework. What I want to do is save what the camera sees and then detect when something changes.

    Here is the part where I save the image to a UIImage:

        if (shouldSetBackgroundImage) {
            CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

            // Create a bitmap graphics context with the sample buffer data
            CGContextRef context = CGBitmapContextCreate(rowBase, bufferWidth,
                bufferHeight, 8, bytesPerRow,
                colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

            // Create a Quartz image from the pixel data in the bitmap graphics context
            CGImageRef quartzImage = CGBitmapContextCreateImage(context);

            // Free up the context and color space
            CGContextRelease(context);
            CGColorSpaceRelease(colorSpace);

            // Create an image object from the Quartz image
            UIImage * image = [UIImage imageWithCGImage:quartzImage];
            [self setBackgroundImage:image];
            NSLog(@"reference image actually set");

            // Release the Quartz image
            CGImageRelease(quartzImage);

            // Signal that the image has been saved
            shouldSetBackgroundImage = NO;
        }

    and here is the part where I check if there is any change in the image seen by the camera:

        else {
            CGImageRef cgImage = [backgroundImage CGImage];
            CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
            CFDataRef bitmapData = CGDataProviderCopyData(provider);
            char* data = CFDataGetBytePtr(bitmapData);

            if (data != NULL) {
                int64_t numDiffer = 0, pixelCount = 0;
                NSMutableArray * pointsMutable = [NSMutableArray array];

                for( int row = 0; row < bufferHeight; row += 8 ) {
                    for( int column = 0; column < bufferWidth; column += 8 ) {
                        // we get one pixel from each source (buffer and saved image)
                        unsigned char *pixel = rowBase + (row * bytesPerRow) + (column * BYTES_PER_PIXEL);
                        unsigned char *referencePixel = data + (row * bytesPerRow) + (column * BYTES_PER_PIXEL);

                        pixelCount++;

                        if ( !match(pixel, referencePixel, matchThreshold) ) {
                            numDiffer++;
                            [pointsMutable addObject:[NSValue valueWithCGPoint:
                                CGPointMake(SCREEN_WIDTH - (column/ (float) bufferHeight)* SCREEN_WIDTH - 4.0,
                                            (row/ (float) bufferWidth)* SCREEN_HEIGHT - 4.0)]];
                        }
                    }
                }

                numberOfPixelsThatDiffer = numDiffer;
                points = [pointsMutable copy];
            }
        }

    For some reason, this doesn't work, meaning that the iPhone detects almost everything as being different from the saved image, even though I set a very low threshold for detection in the match function... Do you have any idea of what I am doing wrong?
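    One thing to verify before the matching logic itself: the saved image's rows may not have the same stride as the live buffer. CGImage row data is frequently padded, so indexing the copied bitmap with the camera buffer's bytesPerRow instead of CGImageGetBytesPerRow(cgImage) would make nearly every sampled pixel "differ". As a cross-check on the comparison logic, the same sampled frame-difference is a few lines of NumPy (a sketch; the threshold and the 8-pixel stride mirror the code above):

        # Sampled frame differencing, assuming both frames are (height, width, 4)
        # uint8 arrays that were read out with the same row stride.
        import numpy as np

        def differing_points(frame, reference, step=8, threshold=10):
            a = frame[::step, ::step, :3].astype(np.int16)
            b = reference[::step, ::step, :3].astype(np.int16)
            differs = np.abs(a - b).max(axis=2) > threshold  # per-pixel test
            points = np.argwhere(differs) * step             # (row, column) coords
            return points, int(differs.sum())

        # points, count = differing_points(live_frame, saved_frame)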

    Read the article

  • How to make efficient code emerge through unit testing

    - by Jean
    Hi, I participate in a TDD Coding Dojo, where we try to practice pure TDD on simple problems. It occurred to me, however, that the code which emerges from the unit tests isn't the most efficient. Now this is fine most of the time, but what if the code's usage grows so that efficiency becomes a problem?

    I love the way the code emerges from unit testing, but is it possible to make the efficiency property emerge through further tests?

    Here is a trivial example in Ruby: prime factorization. I followed a pure TDD approach, making the tests pass one after the other, validating my original acceptance test (commented at the bottom). What further steps could I take if I wanted to make one of the generic prime factorization algorithms emerge? To reduce the problem domain, let's say I want to get a quadratic sieve implementation... Now in this precise case I know the "optimal" algorithm, but in most cases the client will simply add a requirement that the feature run in less than x time for a given environment.

        require 'shoulda'
        require 'lib/prime'

        class MathTest < Test::Unit::TestCase
          context "The math module" do
            should "have a method to get primes" do
              assert Math.respond_to? 'primes'
            end
          end

          context "The primes method of Math" do
            should "return [] for 0" do
              assert_equal [], Math.primes(0)
            end
            should "return [1] for 1 " do
              assert_equal [1], Math.primes(1)
            end
            should "return [1,2] for 2" do
              assert_equal [1,2], Math.primes(2)
            end
            should "return [1,3] for 3" do
              assert_equal [1,3], Math.primes(3)
            end
            should "return [1,2] for 4" do
              assert_equal [1,2,2], Math.primes(4)
            end
            should "return [1,5] for 5" do
              assert_equal [1,5], Math.primes(5)
            end
            should "return [1,2,3] for 6" do
              assert_equal [1,2,3], Math.primes(6)
            end
            should "return [1,3] for 9" do
              assert_equal [1,3,3], Math.primes(9)
            end
            should "return [1,2,5] for 10" do
              assert_equal [1,2,5], Math.primes(10)
            end
          end

          # context "Functionnal Acceptance test 1" do
          #   context "the prime factors of 14101980 are 1,2,2,3,5,61,3853" do
          #     should "return [1,2,3,5,61,3853] for ${14101980*14101980}" do
          #       assert_equal [1,2,2,3,5,61,3853], Math.primes(14101980*14101980)
          #     end
          #   end
          # end
        end

    and the naive algorithm I created by this approach:

        module Math
          def self.primes(n)
            if n == 0
              return []
            else
              primes = [1]
              for i in 2..n do
                if n % i == 0
                  while (n % i == 0)
                    primes << i
                    n = n / i
                  end
                end
              end
              primes
            end
          end
        end
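    One way to make efficiency emerge through tests is to encode the client's requirement directly as a timing assertion: the naive scan of every i up to n fails it for large inputs, and the smallest refactoring that passes is usually trial division that stops at sqrt(n). A sketch of that step in Python (not a quadratic sieve -- just the first algorithm such a test tends to force out):

        import time

        def prime_factors(n):
            """Trial division up to sqrt(n); keeps the leading 1 that the
            Ruby tests above expect."""
            factors = [1]
            d = 2
            while d * d <= n:
                while n % d == 0:
                    factors.append(d)
                    n //= d
                d += 1
            if n > 1:
                factors.append(n)  # whatever remains is prime
            return factors

        # Timing-style acceptance test, mirroring the commented-out Ruby one:
        start = time.time()
        assert prime_factors(14101980 * 14101980) == \
            [1, 2, 2, 2, 2, 3, 3, 5, 5, 61, 61, 3853, 3853]
        assert time.time() - start < 1.0  # "runs in less than x seconds"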

    Read the article

  • Using perl to parse a file and insert specific values into a database

    - by Sean
    Disclaimer: I'm a newbie at scripting in Perl; this is partially a learning exercise (but still a project for work). Also, I have a much stronger grasp on shell scripting, so my examples will likely be formatted in that mindset (but I would like to create them in Perl). Sorry in advance for my verbosity; I want to make sure I am at least marginally clear in getting my point across.

    I have a text file (a reference guide) that is a Word document converted to text, then swapped from Windows to UNIX format in Notepad++. The file is uniform in that each section of the file has the same fields/formatting/tables. What I plan to do, in a basic way, is grab each section, keyed by unique batch job names, and place all of the values into a database (or maybe just an Excel file) so all the fields can be searched/edited for each job much more easily than in the Word file, and possibly create a web interface later on.

    So what I want to do is grab each section by doing something like:

        sed -n '/job_name_1_regex/,/job_name_2_regex/' file.txt

    How would this be formatted within a Perl script? (Grab the section in total, then break it down further from there.) To read the file in the script I have

        open FORMAT_FILE, 'test_format.txt';

    and then use

        foreach $line (<FORMAT_FILE>)

    to parse the file line by line. Is there a better way?

    My next problem: since I converted from a Word doc with tables, a table that looks like

        Table Heading 1      Table Heading 2
        Heading 1/Value 1    Heading 2/Value 1
        Heading 1/Value 2    Heading 2/Value 2

    looks like this in the text file:

        Table Heading 1 Table Heading 2Heading 1/Value 1Heading 1/Value 2Heading 2/Value 1Heading 2/Value 2

    So I want to have "Heading 1" and "Heading 2" as column names and then put the respective values there. I just am not sure how to get the values in relation to the heading from the text file. The values of Heading 1 will always be the line number of Heading 1 plus 2 (Heading 1, Heading 2, values for Heading 1). I know this can be done in awk/sed pretty easily; I'm just not sure how to address it inside a Perl script. After I have all the right values and such, linking it up to a database may be an issue as well; I haven't started looking at the way Perl interacts with DBs yet.

    Sorry if this is a bit scatterbrained... it's still not fully formed in my head.
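    Perl's direct equivalent of the sed range is the flip-flop operator (if (/start/ .. /stop/) inside a read loop). The logic itself, sketched here in Python since the pattern is language-neutral (the job-name patterns are placeholders):

        import re

        def grab_section(path, start_pattern, stop_pattern):
            """Collect lines from the first start_pattern match up to, but not
            including, the next stop_pattern match -- the sed -n '/a/,/b/p' idea."""
            section, inside = [], False
            with open(path) as fh:
                for number, line in enumerate(fh, start=1):
                    if not inside and re.search(start_pattern, line):
                        inside = True
                    elif inside and re.search(stop_pattern, line):
                        break
                    if inside:
                        section.append((number, line.rstrip("\n")))
            return section

        lines = grab_section("test_format.txt", r"JOB_ONE", r"JOB_TWO")
        # "value is the heading's line number plus 2" then becomes list indexing:
        # for i, (_, text) in enumerate(lines):
        #     if text.startswith("Heading 1"): value = lines[i + 2][1]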

    Read the article

  • Adding unique objects to Core Data

    - by absolut
    I'm working on an iPhone app that gets a number of objects from a database. I'd like to store these using Core Data, but I'm having problems with my relationships.

    A Detail contains any number of POIs (points of interest). When I fetch a set of POIs from the server, they contain a detail ID. In order to associate the POI with the Detail (by ID), my process is as follows: query the ManagedObjectContext for the detailID. If that detail exists, add the POI to it. If it doesn't, create the detail (it has other properties that will be populated lazily).

    The problem with this is performance. Performing constant queries to Core Data is slow, to the point where adding a list of 150 POIs takes a minute, thanks to the multiple relationships involved. In my old model, before Core Data (various NSDictionary cache objects), this process was super fast (look up a key in a dictionary, then create it if it doesn't exist). I have more relationships than just this one, but pretty much every one has to do this check (some are many-to-many, and they have a real problem).

    Does anyone have any suggestions for how I can help this? I could perform fewer queries (by searching for a number of different IDs), but I'm not sure how much this will help.

    Some code:

        POI *poi = [NSEntityDescription insertNewObjectForEntityForName:@"POI"
            inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
        poi.POIid = [attributeDict objectForKey:kAttributeID];
        poi.detailId = [attributeDict objectForKey:kAttributeDetailID];

        Detail *detail = [self findDetailForID:poi.POIid];
        if (detail == nil) {
            detail = [NSEntityDescription insertNewObjectForEntityForName:@"Detail"
                inManagedObjectContext:[(AppDelegate*)[UIApplication sharedApplication].delegate managedObjectContext]];
            detail.title = poi.POIid;
            detail.subtitle = @"";
            detail.detailType = [attributeDict objectForKey:kAttributeType];
        }

        - (Detail*)findDetailForID:(NSString*)detailID {
            NSManagedObjectContext *moc = [[UIApplication sharedApplication].delegate managedObjectContext];
            NSEntityDescription *entityDescription = [NSEntityDescription entityForName:@"Detail"
                inManagedObjectContext:moc];
            NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
            [request setEntity:entityDescription];

            NSPredicate *predicate = [NSPredicate predicateWithFormat: @"detailid == %@", detailID];
            [request setPredicate:predicate];
            NSLog(@"%@", [predicate description]);

            NSError *error;
            NSArray *array = [moc executeFetchRequest:request error:&error];
            if (array == nil || [array count] != 1) {
                // Deal with error...
                return nil;
            }

            return [array objectAtIndex:0];
        }
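    The usual fix is to batch the lookups: gather all the detail IDs from the incoming set, run a single fetch with an IN predicate, and build an in-memory dictionary from ID to managed object, so each POI costs a dictionary lookup rather than a fetch (Apple's Core Data guide describes this as efficient find-or-create). The shape of it, sketched in Python since only the batching matters:

        # Batched find-or-create; in Core Data the one query becomes a single
        # NSFetchRequest with a predicate like "detailid IN %@".
        def attach_pois(pois, fetch_details_with_ids, create_detail):
            wanted = {poi["detail_id"] for poi in pois}
            cache = {d.detail_id: d for d in fetch_details_with_ids(wanted)}  # 1 query
            for poi in pois:
                detail = cache.get(poi["detail_id"])
                if detail is None:
                    detail = create_detail(poi["detail_id"])  # lazily populated
                    cache[poi["detail_id"]] = detail
                detail.pois.append(poi)  # associate the POI with its Detail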

    Read the article

  • iPhone: didSelectRowAtIndexPath not invoked

    - by soletan
    Hi, I know this issue has been mentioned before, but the resolutions there didn't apply. I have a UINavigationController with an embedded UITableViewController, set up using IB. In IB, the UITableView's delegate and dataSource are both set to my derivation of UITableViewController. This class has been added using Xcode's templates for UITableViewController classes. There is no custom UITableViewCell, and the table view is using the default plain style with a single title only.

    In the simulator the list is rendered properly, with two elements provided by the dataSource, so the dataSource is linked properly. If I remove the outlet link for dataSource in IB, an empty table is rendered instead. As soon as I tap on one of these two items, it flashes blue and GDB encounters an interruption in __forwarding__ in the scope of UITableView::_selectRowAtIndexPath. It never reaches the breakpoint set in my non-empty method didSelectRowAtIndexPath. I checked the arguments and method name to exclude typos resulting in a different selector. I haven't yet verified whether the delegate is set properly, but as it is set the same way as the dataSource, which is feeding two elements from the same class, I expect it to be set properly.

    So, what's wrong? I'm running the iPhone/iPad SDK 3.1.2... but tried with iPhone SDK 3.1 in the simulator as well.

    EDIT: This is the code of my UITableViewController derivation:

        #import "LocalBrowserListController.h"
        #import "InstrumentDescriptor.h"

        @implementation LocalBrowserListController

        - (void)viewDidLoad {
            [super viewDidLoad];
            [self listLocalInstruments];
        }

        - (void)viewDidAppear:(BOOL)animated {
            [super viewDidAppear:animated];
        }

        - (void)didReceiveMemoryWarning {
            [super didReceiveMemoryWarning];
        }

        - (void)viewDidUnload {
            [super viewDidUnload];
        }

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return 1;
        }

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return [entries count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";

            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil)
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                    reuseIdentifier:CellIdentifier] autorelease];

            if ( ( [entries count] > 0 ) && ( [indexPath length] > 0 ) )
                cell.textLabel.text = [[[entries objectAtIndex:[indexPath indexAtPosition:[indexPath length] - 1]] label] retain];

            return cell;
        }

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            if ( ( [entries count] > 0 ) && ( [indexPath length] > 0 ) ) {
                ...
            }
        }

        - (void)dealloc {
            [super dealloc];
        }

        - (void) listLocalInstruments {
            NSMutableArray *result = [NSMutableArray arrayWithCapacity:10];

            [result addObject:[InstrumentDescriptor descriptorOn:[[NSBundle mainBundle] pathForResource:@"example" ofType:@"idl"] withLabel:@"Default 1"]];
            [result addObject:[InstrumentDescriptor descriptorOn:[[NSBundle mainBundle] pathForResource:@"example" ofType:@"xml"] withLabel:@"Default 2"]];

            [entries release];
            entries = [[NSArray alloc] initWithArray:result];
        }

        @end

    Read the article

  • How to store wiki sites (vcs)

    - by Eugen
    Hello, as a personal project I am trying to write a wiki with the help of Django. I'm a beginner when it comes to web development. I am at the (early) point where I need to decide how to store the wiki pages. I have three approaches in mind and would like to know your suggestion.

    Flat files

    I considered a flat-file approach with a version control system like git or mercurial. Firstly, I would have some example wikis to look at, like http://hatta.sheep.art.pl/. Secondly, the VCS would probably deal with editing conflicts and keeping the edit history, so I would not have to reinvent the wheel. And thirdly, I could probably easily clone the wiki repository, so I (or, for that matter, others) could have an offline copy of the wiki.

    On the other hand, as far as I know, I cannot use Django models with flat files. Then, if I wanted to add fields to a wiki page, like a category, I would need to somehow keep a reference to that flat file in order to associate the fields in the database with the flat file. Besides, I don't know if it is a good idea to have all the wiki pages in one repository; I imagine it is more natural to have something like a repository per wiki page resp. file. Last but not least, I'm not sure, but I think using flat files would limit my deployment options, because web hosts may not allow creating files (I'm thinking, for example, of Google App Engine).

    Storing in a database

    By storing the wiki pages in the database I can utilize Django models and associate arbitrary fields with the wiki page. I would probably also have an easier life deploying the wiki. But I would not get VCS features like history and conflict resolution per se. I searched for Django extensions to help me and found django-reversion. However, I do not fully understand whether it fits my needs: does it track model changes, as in changes to the Django model file, or does it track the content of the models (which would fit my need)? Plus, I do not see whether django-reversion would help me with edit conflicts.

    Storing a VCS repository in a database field

    This would be my ideal solution. It would combine the advantages of both previous approaches without the disadvantages: I would have VCS features, but I would save the wiki pages in a database. The problem is, I have no idea how feasible that is. I just imagine saving a wiki page/source together with a git/mercurial repository in a database field. Yet I somehow doubt database fields work like that.

    So, I'm open to any other approaches, but this is what I came up with. Also, if you're interested, you can find the crappy early test I'm working on here: http://github.com/eugenkiss/instantwiki-test
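    For the database approach, history and conflict detection don't strictly need an extension: a page-plus-revisions schema keeps the full edit history, and comparing the revision an edit was based on against the current head catches conflicting saves. A minimal sketch, assuming Django's ORM (model and field names are illustrative):

        from django.db import models

        class Page(models.Model):
            title = models.CharField(max_length=200, unique=True)
            category = models.CharField(max_length=100, blank=True)  # extra field

        class Revision(models.Model):
            page = models.ForeignKey(Page, related_name="revisions")
            parent = models.ForeignKey("self", null=True, blank=True)
            content = models.TextField()
            created = models.DateTimeField(auto_now_add=True)

        def save_edit(page, based_on, new_content):
            """Reject the save if someone else edited since `based_on`."""
            head = page.revisions.latest("created")
            if based_on is not None and head.pk != based_on.pk:
                raise ValueError("edit conflict: page changed underneath you")
            return Revision.objects.create(page=page, parent=head, content=new_content)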

    Read the article

  • SSIS DTS Package flat file error - "The file name specified in the connection was not valid"

    - by MisterZimbu
    I have a pretty basic SSIS package that is attempting to read a file hosted on a share and import its contents into a database table. The package runs fine when I run it manually within SSIS. However, when I set up a SQL Agent job and attempt to execute it, I get the following error:

        Executed as user: DOMAIN\UserName. Microsoft (R) SQL Server Execute Package Utility
        Version 9.00.3042.00 for 64-bit Copyright (C) Microsoft Corp 1984-2005.
        All rights reserved. Started: 10:14:17 AM
        Error: 2010-05-03 10:14:17.75 Code: 0xC001401E
        Source: DataImport Connection manager "Data File Local"
        Description: The file name "\\10.1.1.159\llpf\datafile.dat" specified in the
        connection was not valid. End Error
        Error: 2010-05-03 10:14:17.75 Code: 0xC001401D Source: DataAnimalImport
        Description: Connection "Data File Local" failed validation. End Error
        DTExec: The package execution returned DTSER_FAILURE (1).
        Started: 10:14:17 AM Finished: 10:14:17 AM Elapsed: 0.594 seconds.
        The package execution failed. The step failed.

    This leads me to believe it's a permissions issue, but every attempt I've made to fix it has failed. What I've tried so far:

    - Run as the SQL Agent account (DOMAIN\SqlAgent) - yields the same error. DOMAIN\SqlAgent has "Full Control" permissions on both the share and the uploaded file.
    - Set up a proxy account with a different account's credentials (DOMAIN\Account) - yields the same error. Like above, "Full Control" permissions were given over the share to that account.
    - Gave "Everyone" full control permissions over the share (temporarily!). Yielded the same error.
    - Manually copied the file to a local path and tested with the SQL Agent account. Worked properly.
    - Added an ActiveX script task that would first copy the remotely hosted file to a local path and then have the DTS package reference the local file. Gave a completely nondescriptive (even by SSIS standards) error when trying to run the script.
    - Set up a proxy account using my own personal account's credentials - worked correctly. However, this is not an acceptable solution, as there are password policies in place on my account, as well as it being bad practice to set things up this way in general.

    Any ideas? I'm still convinced it's a permissions issue. However, what I've read from various searches more or less says giving the executing account permissions on the share should work, and that is not the case here (unless I'm missing something obscure when I'm setting up permissions on the share).

    Read the article

  • How do I construct a Django reverse/url using query args?

    - by Andrew Dalke
    I have URLs like http://example.com/depict?smiles=CO&width=200&height=200 (and with several other optional arguments). My urls.py contains:

        urlpatterns = patterns('',
            (r'^$', 'cansmi.index'),
            (r'^cansmi$', 'cansmi.cansmi'),
            url(r'^depict$', cyclops.django.depict, name="cyclops-depict"),

    I can go to that URL and get the 200x200 PNG that was constructed, so I know that part works.

    In my template from the "cansmi.cansmi" response I want to construct a URL for the named template "cyclops-depict" given some query parameters. I thought I could do

        {% url cyclops-depict smiles=input_smiles width=200 height=200 %}

    where "input_smiles" is an input to the template via a form submission. In this case it's the string "CO", and I thought it would create a URL like the one at top. This template fails with a TemplateSyntaxError:

        Caught an exception while rendering: Reverse for 'cyclops-depict' with arguments '()'
        and keyword arguments '{'smiles': u'CO', 'height': 200, 'width': 200}' not found.

    This is a rather common error message both here on StackOverflow and elsewhere. In every case I found, people were using parameters in the URL path regexp, which is not the case I have, where the parameters go into the query. That means I'm doing it wrong. How do I do it right? That is, I want to construct the full URL, including path and query parameters, using something in the template.

    For reference:

        % python manage.py shell
        Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29)
        [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        (InteractiveConsole)
        >>> from django.core.urlresolvers import reverse
        >>> reverse("cyclops-depict", kwargs=dict())
        '/depict'
        >>> reverse("cyclops-depict", kwargs=dict(smiles="CO"))
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/Library/Python/2.6/site-packages/django/core/urlresolvers.py", line 356, in reverse
            *args, **kwargs)))
          File "/Library/Python/2.6/site-packages/django/core/urlresolvers.py", line 302, in reverse
            "arguments '%s' not found." % (lookup_view_s, args, kwargs))
        NoReverseMatch: Reverse for 'cyclops-depict' with arguments '()' and keyword arguments '{'smiles': 'CO'}' not found.
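    The underlying rule is that reverse() (and the {% url %} tag) only resolve the path portion of a URL; query strings are not part of the URL pattern, so keyword arguments can't supply them. The usual workaround is to append an urlencoded query string to the reversed path yourself -- in the view, or wrapped in a small custom template tag or filter. A sketch matching this document's Python 2 era:

        import urllib
        from django.core.urlresolvers import reverse

        def depict_url(smiles, width=200, height=200):
            path = reverse("cyclops-depict")  # resolves to '/depict'
            query = urllib.urlencode({"smiles": smiles,
                                      "width": width,
                                      "height": height})
            return "%s?%s" % (path, query)

        # depict_url("CO") -> '/depict?smiles=CO&width=200&height=200'
        # (parameter order may vary with dict ordering)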

    Read the article

  • Sql Server Compact 2005 on Visual Studio 2008

    - by Tim
    I'm working on a Windows Forms application that interacts with a SQL Compact database file created by SQL Server 2005. This application was originally developed in Visual Studio 2005 but was recently converted to a Visual Studio 2008 solution. In regards to SQL Compact, we made sure the references were all still set to the assemblies that handle the 2005 version of SQL Compact rather than SQL Compact 3.5. Having done this, the application still runs just as it should - it will still interact with the Compact database, perform synchronization operations, etc.

    However, I just discovered today that Visual Studio tools such as the DataSet Designer do not play well with a SQL Compact database file of an older version than 3.5. If I go to the New Connection... wizard, the only SQL Compact Data Source / Data Provider entries are for SQL Compact 3.5. I assume that Visual Studio 2008 just doesn't include the data provider for the older version of SQL Compact by default.

    Is there a way to add the old version of SQL Compact to the list of "Data Sources" for the connection wizard? To see exactly what I'm referring to, click on the Tools menu of Visual Studio 2008 and click Connect to Database... In the window that comes up, click Change... next to the Data source setting. From this dialog there is no way I can select the earlier version of SQL Compact - only 3.5 is available. Maybe I need to add an assembly reference somewhere? Or copy some file(s) from my Visual Studio 2005 directory over to 2008? I would think there would have to be a way for Visual Studio 2008 to interact with a SQL Compact database from SQL Server 2005.

    To provide one more bit of detail: I discovered this problem when I went to my DataSet, right-clicked and tried to add a TableAdapter. The first screen that comes up says, "Choose Your Data Connection". If I leave it set to the SQL Compact connection that we've always used, I now get the following error when clicking the Next button:

        Failed to open a connection to the database "The selected database was created
        with an earlier version of SQL Server Compact and needs to be upgraded to SQL
        Server Compact 3.5 before the connection can be opened or tested. Upgrade the
        database by creating a new data connection and completing the Add Connection
        dialog box." Check the connection and try again.

    The only problem here is that we still use SQL Server 2005, and if my understanding is correct, it does not produce subscription files that are compatible with SQL Compact 3.5. If I am wrong in this assumption, please correct me. Any help you can provide is greatly appreciated. Thank you.

    Read the article

  • addSubview and autosizing

    - by neoneye
    How does one add views to a window so that the views are resized to fit within the window frame?

    The problem: I'm making a sheet window containing 2 views, where only one of them is visible at a time, so it's important that the views have the same size as the window. My problem is that either view0 fits correctly and view1 doesn't, or the other way around. I can't figure out how to give them the same size as the window.

    Possible solution: I could just make sure that both views have precisely the same size within Interface Builder; then it would work. However, I'm looking for a way to do this programmatically.

    Screenshot of view0: below you can see the autoresizing problem at the top and the right side, where the view is somehow clipped. Screenshot of view1: this view is resized correctly.

    Here is my code. Can the views be resized before adding them to the window? Or is it better to do as I do now, where the views are added one by one while changing the window frame? How do you do it?

        NSView* view0 = /* a view made with IB */;
        NSView* view1 = /* another view made with IB */;

        NSWindow* window = [self window];
        NSRect window_frame = [window frame];
        NSView* cv = [[[NSView alloc] initWithFrame:window_frame] autorelease];
        [window setContentView:cv];
        [cv setAutoresizesSubviews:YES];

        // add subview so it fits within the contentview frame
        {
            NSView* v = view0;
            [v setHidden:YES];
            [v setAutoresizesSubviews:NO];
            [cv addSubview:v];
            [v setFrameOrigin:NSZeroPoint];
            [window setFrame:[v frame] display:NO];
            [v setAutoresizesSubviews:YES];
        }

        // add subview so it fits within the contentview frame
        {
            NSView* v = view1;
            [v setHidden:YES];
            [v setAutoresizesSubviews:NO];
            [cv addSubview:v];
            [v setFrameOrigin:NSZeroPoint];
            [window setFrame:[v frame] display:NO];
            [v setAutoresizesSubviews:YES];
        }

        // restore original window frame
        [window setFrame:window_frame display:YES];

        [view0 setHidden:NO];
        [view1 setHidden:YES];

    Read the article

  • Is Zend Framework a total waste of my time?

    - by Citizen
    Ok, I'm about 50% done with the "30 minute" quickstart guide from Zend. I must be missing something, because this seems like a total waste of time. The point of this quick guide is to create a guestbook, something I could do in 5 minutes with regular naked non-framework PHP.

    Here's my path to the Zend framework: c:/program files/wamp/www/_zend/
    Here's my path to my quickstart project: c:/program files/wamp/www/_zend/bin/quickstart/

    I have a number of questions at this point (see http://framework.zend.com/docs/quickstart/create-a-model-and-database-table):

    1: I'm running the command line to run my database loading script. I get an error stating that it can't find Zend/AutoLoader.php, because my path to the Zend library is wrong. I followed all of the steps. I defined the path to my Zend library in the main config file, but for some reason it's defined again in my db loader. In all of these scripts that they have me load, the relative path to the Zend library is given as /../library. Problem is, there's nothing in that folder. To get to my actual Zend folder, you'd need to be (relatively) at /../../../../library. Which brings me to my 2nd question:

    2: Where the #$#$ are the main Zend files supposed to be? The install directions were basically "put it wherever you want", when the real answer (after a bunch of errors and wasted time) was "put it somewhere so that it's really easy to type the full path a thousand times in the command line" and "it also better be in a runnable place on your webserver, since it's going to create your quickstart application in a subdirectory within Zend". Which brings us to the third question:

    3: Am I supposed to have this library in both the parent core Zend (wamp/_zend/library) AND my application (quickstart/library)?

    4: If that is the case, it seems like a ton of wasted files to be uploading. I'd like to use Zend to create products that my customers will download; 5 megs of overhead seems like a bit much. Zend claims you can use these library components separately, but it looks to me like I'm going to have to upload them every time. Which leads to the next question:

    5: It appears that perhaps Zend is more for a single application that is not meant to be distributed. Is this not the case?

    6: According to their default file structure, everything but my /public folder would be above public_html on my server if I wanted this to rest on my TLD. I would need to rename every reference of /public/ to /public_html/, or am I missing something else?

    Read the article

  • should I ever put a major version number into a C#/Java namespace?

    - by Andrew Patterson
    I am designing a set of 'service' layer objects (data objects and interface definitions) for a WCF web service (that will be consumed by third-party clients, i.e. not in-house, so outside my direct control). I know that I am not going to get the interface definition exactly right - and am wanting to prepare for the time when I know that I will have to introduce a breaking set of new data objects. However, the reality of the world I am in is that I will also need to run my first version simultaneously for quite a while.

    The first version of my service will have a URL of http://host/app/v1service.svc, and when the time comes my new version will live at http://host/app/v2service.svc. However, when it comes to the data objects and interfaces, I am toying with putting the 'major' version of the interface number into the actual namespace of the classes:

        namespace Company.Product.V1
        {
            [DataContract(Namespace = "company-product-v1")]
            public class Widget
            {
                [DataMember]
                string widgetName;
            }

            public interface IFunction
            {
                Widget GetWidgetData(int code);
            }
        }

    When the time comes for a fundamental change to the service, I will introduce some classes like:

        namespace Company.Product.V2
        {
            [DataContract(Namespace = "company-product-v2")]
            public class Widget
            {
                [DataMember]
                int widgetCode;

                [DataMember]
                int widgetExpiry;
            }

            public interface IFunction
            {
                Widget GetWidgetData(int code);
            }
        }

    The advantages as I see them are that I will be able to have a single set of code serving both interface versions, sharing functionality where possible. This is because I will be able to reference both interface versions as a distinct set of C# objects. Similarly, clients may use both interface versions simultaneously, perhaps using V1.Widget in some legacy code whilst new bits move on to V2.Widget.

    Can anyone tell me why this is a stupid idea? I have a nagging feeling that this is a bit smelly.

    Notes: I am obviously not proposing that every single new version of the service would be in a new namespace. Presumably I will do as many non-breaking interface changes as possible, but I know that I will hit a point where all the data modelling will probably need a significant rewrite. I understand assembly versioning etc., but I think this question is tangential to that type of versioning. But I could be wrong.

    Read the article

  • Delphi interface cast using TValue

    - by conciliator
    I've recently experimented extensively with interfaces and D2010 RTTI. I don't know the actual type of the interface at runtime, although I will have access to its qualified name as a string. Consider the following:

        program rtti_sb_1;

        {$APPTYPE CONSOLE}

        uses
          SysUtils, Rtti, TypInfo,
          mynamespace in 'mynamespace.pas';

        var
          ctx: TRttiContext;
          InterfaceType: TRttiType;
          Method: TRttiMethod;
          ActualParentInstance: IParent;
          ChildInterfaceValue: TValue;
          ParentInterfaceValue: TValue;

        begin
          ctx := TRttiContext.Create;

          // Instantiation
          ActualParentInstance := TChild.Create as IParent;

          {$define WORKAROUND}
          {$ifdef WORKAROUND}
          InterfaceType := ctx.GetType(TypeInfo(IParent));
          InterfaceType := ctx.GetType(TypeInfo(IChild));
          {$endif}

          // Fetch interface type
          InterfaceType := ctx.FindType('mynamespace.IParent');

          // This cast is OK and ChildMethod is executed
          (ActualParentInstance as IChild).ChildMethod(100);

          // Create a TValue holding the interface
          TValue.Make(@ActualParentInstance, InterfaceType.Handle, ParentInterfaceValue);

          InterfaceType := ctx.FindType('mynamespace.IChild');

          // This cast doesn't work
          if ParentInterfaceValue.TryCast(InterfaceType.Handle, ChildInterfaceValue) then
          begin
            Method := InterfaceType.GetMethod('ChildMethod');
            if (Method <> nil) then
            begin
              Method.Invoke(ChildInterfaceValue, [100]);
            end;
          end;

          ReadLn;
        end.

    The contents of mynamespace.pas is as follows:

        {$M+}
        IParent = interface
          ['{2375F59E-D432-4D7D-8D62-768F4225FFD1}']
          procedure ParentMethod(const Id: integer);
        end;
        {$M-}

        IChild = interface(IParent)
          ['{6F89487E-5BB7-42FC-A760-38DA2329E0C5}']
          procedure ChildMethod(const Id: integer);
        end;

        TParent = class(TInterfacedObject, IParent)
        public
          procedure ParentMethod(const Id: integer);
        end;

        TChild = class(TParent, IChild)
        public
          procedure ChildMethod(const Id: integer);
        end;

    For completeness, the implementation goes as:

        procedure TParent.ParentMethod(const Id: integer);
        begin
          WriteLn('ParentMethod executed. Id is ' + IntToStr(Id));
        end;

        procedure TChild.ChildMethod(const Id: integer);
        begin
          WriteLn('ChildMethod executed. Id is ' + IntToStr(Id));
        end;

    The reason for {$define WORKAROUND} may be found in this post. Question: is there any way for me to make the desired type cast using RTTI? In other words, is there a way for me to invoke IChild.ChildMethod knowing (1) the qualified name of IChild as a string, and (2) a reference to the TChild instance as an IParent interface? (After all, the hard-coded cast works fine. Is this even possible?) Thanks!

    Read the article

  • Absolute reRendering using RichFaces

    - by wheelie
    Hey there, I am implementing copy/paste functionality for a complex object tree; this means you can copy an object and paste it where the object type is the same. Therefore I need to reRender the <a4j:commandLink>s which perform the paste action (so they show on the GUI or not). The problem is that the copy links are deep in the tree. How is it possible to reRender on a higher level in the component tree?

    (Very) simplified example:

        ...
        <h:form id="form1">
          ...
          <a4j:commandLink value="Copy" reRender=":paste1, :paste2, :paste3" />
          <a4j:commandLink id="paste1" value="Paste" rendered="#{myBean.myHashMap.key}" />

          <a4j:outputPanel>
            <a4j:region renderRegionOnly="true">
              <a4j:commandLink value="Copy" reRender=":paste1, :paste2, :paste3" />
              <a4j:commandLink id="paste2" value="Paste" rendered="#{myBean.myHashMap.key}" />
            </a4j:region>

            <a4j:outputPanel>
              <a4j:region renderRegionOnly="true">
                <a4j:commandLink value="Copy" reRender=":paste1, :paste2, :paste3" />
                <a4j:commandLink id="paste3" value="Paste" rendered="#{myBean.myHashMap.key}" />
              </a4j:region>
            </a4j:outputPanel>
          </a4j:outputPanel>
          ...
        </h:form>

    Something like that. In practice this differs in that a rich:tree is displayed. Also, there can be multiple instances of the same paste link: object:0::paste3, object:1::paste3.

        private final String pasteIDs = ":xxPaste, ... , :xyPaste";

    According to the RichFaces reference, putting the separator at the beginning of the ID means it is an "absolute" search expression; however, this way I get the same result: only the 'local' paste link gets rerendered, the others do not. Every copy-paste link pair is encapsulated in <a4j:region renderRegionOnly="true">, because that is necessary for other components to restrict the reRender to that region. Could this be blocking the reRender I want to make? Also, I want to rerender exactly those paste links, so that no other rerender action is triggered. I hope it is clear what I want to achieve. Any help would be appreciated! Daniel

    Read the article

  • Save File to Sharepoint Server using JAX-WS

    - by Evan Porter
    I'm trying to save a file to a SharePoint server using JAX-WS. The web service call reports success, but the file doesn't show up. I used this command (from WinXP) to generate the Java code that makes the JAX-WS call:

        wsimport -keep -extension -Xnocompile http://hostname/sites/teamname/_vti_bin/Copy.asmx?WSDL

    I get a handle on the web service, which I called port, using the following:

        CopySoap port = null;
        if (userName != null && password != null) {
            Copy service = new Copy();
            port = service.getCopySoap();
            ((BindingProvider) port).getRequestContext().put(BindingProvider.USERNAME_PROPERTY, userName);
            ((BindingProvider) port).getRequestContext().put(BindingProvider.PASSWORD_PROPERTY, password);
        } else {
            throw new Exception("Holy Frijolé! Null userName and/or password!");
        }

    I called the web service using the following:

        port.copyIntoItems(sourceUrl, destUrlCollection, fields,
            "Contents of the file".getBytes(), copyIntoItemsResult, copyResultCollection);

    The sourceUrl and the only URL in destUrlCollection equal "hostname/sites/teamname/Tech Docs/Sub Folder". The FieldInformationCollection object named fields contains only one FieldInformation. The FieldInformation object has "HelloWorld.txt" as the value for displayName, internalName and value. The type property is set to FieldType.FILE. The id property is set to (java.util.UUID.randomUUID()).toString().

    The call to copyIntoItems returns successfully; copyIntoItemsResult contains a value of 0, and the only CopyResult object set in copyResultCollection has an error code of "SUCCESS" with a null error message. When I look in the "Tech Docs" library on SharePoint, in the "Sub Folder" there's no file there.

    Why wouldn't it tell me what I did wrong? Did I just miss a step?

    Update (Feb 26th, 2011): I've changed my FieldInformation object's displayName and internalName properties to be "Title" as suggested. Still no joy, but a step in the right direction. After playing around with the URLs for a bit, I got these results:

    - With both the sourceUrl and the only destination URL equivalent, with no protocol, I get the SUCCESS response but no actual document appears in the document library.
    - With both of the URLs equivalent but with an "http://" protocol specified, I get an UNKNOWN error with "Object reference not set to an instance of an object." as the message.
    - With the source URL an empty string or null, I get an UNKNOWN error with "Value does not fall within the expected range." as the error message.

    Read the article

  • How to optimize this mysql query - explain output included

    - by Sandeepan Nath
    This is the query (a search query, basically, based on tags):

        select SUM(DISTINCT(ttagrels.id_tag in (2105,2120,2151,2026,2046))) as key_1_total_matches,
               td.*, u.*
        from Tutors_Tag_Relations AS ttagrels
        Join Tutor_Details AS td ON td.id_tutor = ttagrels.id_tutor
        JOIN Users as u on u.id_user = td.id_user
        where (ttagrels.id_tag in (2105,2120,2151,2026,2046))
        group by td.id_tutor
        HAVING key_1_total_matches = 1

    And following is the database dump needed to execute this query:

        CREATE TABLE IF NOT EXISTS `Users` (
          `id_user` int(10) unsigned NOT NULL auto_increment,
          `id_group` int(11) NOT NULL default '0',
          PRIMARY KEY (`id_user`),
          KEY `Users_FKIndex1` (`id_group`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=730 ;

        INSERT INTO `Users` (`id_user`, `id_group`) VALUES (303, 1);

        CREATE TABLE IF NOT EXISTS `Tutor_Details` (
          `id_tutor` int(10) unsigned NOT NULL auto_increment,
          `id_user` int(10) NOT NULL default '0',
          PRIMARY KEY (`id_tutor`),
          KEY `Users_FKIndex1` (`id_user`),
          KEY `id_user` (`id_user`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=58 ;

        INSERT INTO `Tutor_Details` (`id_tutor`, `id_user`) VALUES (26, 303);

        CREATE TABLE IF NOT EXISTS `Tags` (
          `id_tag` int(10) unsigned NOT NULL auto_increment,
          `tag` varchar(255) default NULL,
          PRIMARY KEY (`id_tag`),
          UNIQUE KEY `tag` (`tag`),
          KEY `id_tag` (`id_tag`),
          KEY `tag_2` (`tag`),
          KEY `tag_3` (`tag`),
          KEY `tag_4` (`tag`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2957 ;

        INSERT INTO `Tags` (`id_tag`, `tag`) VALUES
        (2026, 'Brendan.\nIn'),
        (2046, 'Brendan.'),
        (2105, 'Brendan'),
        (2120, 'Brendan''s'),
        (2151, 'Brendan)');

        CREATE TABLE IF NOT EXISTS `Tutors_Tag_Relations` (
          `id_tag` int(10) unsigned NOT NULL default '0',
          `id_tutor` int(10) unsigned default NULL,
          `tutor_field` varchar(255) default NULL,
          `cdate` timestamp NOT NULL default CURRENT_TIMESTAMP,
          `udate` timestamp NULL default NULL,
          KEY `Tutors_Tag_Relations` (`id_tag`),
          KEY `id_tutor` (`id_tutor`),
          KEY `id_tag` (`id_tag`),
          KEY `id_tutor_2` (`id_tutor`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        INSERT INTO `Tutors_Tag_Relations` (`id_tag`, `id_tutor`, `tutor_field`, `cdate`, `udate`)
        VALUES (2105, 26, 'firstname', '2010-06-17 17:08:45', NULL);

        ALTER TABLE `Tutors_Tag_Relations`
          ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_2` FOREIGN KEY (`id_tutor`)
            REFERENCES `Tutor_Details` (`id_tutor`) ON DELETE NO ACTION ON UPDATE NO ACTION,
          ADD CONSTRAINT `Tutors_Tag_Relations_ibfk_1` FOREIGN KEY (`id_tag`)
            REFERENCES `Tags` (`id_tag`) ON DELETE NO ACTION ON UPDATE NO ACTION;

    What does the query do? It searches for tutors which contain "Brendan" (as their name or biography or something). The id_tags 2105, 2120, 2151, 2026 and 2046 are nothing but the tags which are LIKE "%Brendan%".

    My questions are:

    1. In the explain of this query, the reference column shows NULL for ttagrels, but there are possible keys (Tutors_Tag_Relations, id_tutor, id_tag, id_tutor_2). So why is no key being taken? How do I make the query use references - is it possible at all?
    2. The other two tables, td and u, are using references. Is any indexing needed in those? I think not.

    Check the explain query output here: http://www.test.examvillage.com/explain.png

    Read the article

  • Posting comments to a wordpress-blog in Android

    - by Samuh
    I am working on a module that allows users to post comments on a blog published on WordPress. I looked at the HTML source of the Post-Comment form displayed at the bottom of a blog entry (the "Leave a Reply" section). Using that as a reference, I translated it to Java using DefaultHttpClient and BasicNameValuePairs, and my code looks like:

        DefaultHttpClient httpclient = new DefaultHttpClient();
        HttpPost httppost = new HttpPost("http://xycabz.wordpress.com/wp-comments-post.php");
        httppost.setHeader("Content-type","application/x-www-form-urlencoded;charset=UTF-8");

        List<NameValuePair> nvps = new ArrayList<NameValuePair>();
        nvps.add(new BasicNameValuePair("author","abc"));
        nvps.add(new BasicNameValuePair("email","[email protected]"));
        nvps.add(new BasicNameValuePair("url",""));
        nvps.add(new BasicNameValuePair("comment","entiendamonos?"));
        nvps.add(new BasicNameValuePair("comment_post_ID","123"));
        //this was a hidden field and always set to 0
        nvps.add(new BasicNameValuePair("comment_parent","0"));

        try {
            httppost.setEntity(new UrlEncodedFormEntity(nvps));
        } catch (UnsupportedEncodingException e1) {
            e1.printStackTrace();
        }

        BasicResponseHandler handler = new BasicResponseHandler();

        try {
            Log.e("OUTPUT",httpclient.execute(httppost,handler));
        } catch (ClientProtocolException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }

    The above code works fine when I try it out on my own blog. But when I try it on the actual blog, I get HTTP 302 Found (redirect to temporary location) exceptions in the logs, and the comments never make it to the blog page. Usually, when you post a comment (on the web page) you are taken back to the blog page that lists all the comments, and the URL I am getting in the redirects is the same.

    Questions:
    1. Could this be a post-a-comment settings problem (perhaps something the original blog owner might have set)?
    2. How should my HttpClient handle the 302 status code? Eventually, I just have to notify the user of success or failure, not actually take him to the comments page.

    Read the article

  • compiling numpy with sunperf atlas libraries

    - by user288558
    I would like to use the sunperf libraries when compiling scipy and numpy. I tried using setupscons.py, which seems to check for SUNPERF libraries, but it didn't recognize where mine are. Here is a listing of /pkg/linux/SS12/sunstudio12.1 (that's where the sunperf library lives):

        wkerzend@mosura:/home/wkerzend>ls /pkg/linux/SS12/sunstudio12.1/lib/
        CCios/                  libdbx_agent.so@        libsunperf.so.3@        amd64/
        libfcollector.so@       libtha.so@              collector.jar@          libfsu.so@
        libtha.so.1@            dbxrc@                  libfsu.so.1@            locale/
        debugging.so@           libfui.so@              make.rules@             er.rc@
        libfui.so.1@            rw7/                    libblacs_openmpi.so@    librtc.so@
        sse2/                   libblacs_openmpi.so.1@  libscalapack.so@        stlport4/
        libcollectorAPI.so@     libscalapack.so.1@      svr4.make.rules@        libcollectorAPI.so.1@
        libsunperf.so@          tools_svc_mgr@

    I tried to specify this directory in site.cfg, but I still get the following errors:

        Checking if g77 needs dummy main - MAIN__.
        Checking g77 name mangling - '_', '', lower-case.
        Checking g77 C compatibility runtime ...-L/usr/lib/gcc/x86_64-redhat-linux/3.4.6
        -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6
        -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../../../lib64
        -L/usr/lib/gcc/x86_64-redhat-linux/3.4.6/../../.. -L/lib/../lib64
        -L/usr/lib/../lib64 -lfrtbegin -lg2c -lm
        Checking MKL ... Failed (could not check header(s) : check config.log in
        build/scons/scipy/integrate for more details)
        Checking ATLAS ... Failed (could not check header(s) : check config.log in
        build/scons/scipy/integrate for more details)
        Checking SUNPERF ... Failed (could not check symbol cblas_sgemm : check
        config.log in build/scons/scipy/integrate for more details))
        Checking Generic BLAS ... yes
        Checking for BLAS (Generic BLAS) ... Failed: BLAS (Generic BLAS) test could
        not be linked and run
        Exception: Could not find F77 BLAS, needed for integrate package:
          File "/priv/manana1/wkerzend/install_dir/scipy-0.7.1/scipy/integrate/SConstruct", line 2:
            GetInitEnvironment(ARGUMENTS).DistutilsSConscript('SConscript')
          File "/home/wkerzend/python_coala/numscons-0.10.1-py2.6.egg/numscons/core/numpyenv.py", line 108:
            build_dir = '$build_dir', src_dir = '$src_dir')
          File "/priv/manana1/wkerzend/python_coala/numscons-0.10.1-py2.6.egg/numscons/scons-local/scons-local-1.2.0/SCons/Script/SConscript.py", line 549:
            return apply(_SConscript, [self.fs,] + files, subst_kw)
          File "/priv/manana1/wkerzend/python_coala/numscons-0.10.1-py2.6.egg/numscons/scons-local/scons-local-1.2.0/SCons/Script/SConscript.py", line 259:
            exec _file_ in call_stack[-1].globals
          File "/priv/manana1/wkerzend/install_dir/scipy-0.7.1/build/scons/scipy/integrate/SConscript", line 15:
            raise Exception("Could not find F77 BLAS, needed for integrate package")
        error: Error while executing scons command. See above for more information.
        If you think it is a problem in numscons, you can also try executing the scons
        command with the --log-level option for more detailed output of what numscons
        is doing; for example --log-level=0 (the lower the level, the more detailed
        the output).

    Any help is appreciated.
    Wolfgang

    Read the article

  • Can't create file in Ada 95

    - by duder
    Hello, I'm trying to follow a standard reference for opening files, but I'm running into a constraint_error at the line where I call Ada.Text_IO.Create(). It says "range check failed". Any help appreciated; here's the code:

        WITH Ada.Text_IO;
        WITH Ada.Integer_Text_IO;
        USE Ada.Text_IO;
        USE Ada.Integer_Text_IO;

        PROCEDURE FileManip IS

           --Variables
           Start_Int  : Integer;
           Stop_Int   : Integer;
           Max_Length : Integer;

           --Output File
           MaxName : CONSTANT Positive := 80;
           SUBTYPE NameRange IS Positive RANGE 1..MaxName;

           OutFileName   : String(NameRange) := (OTHERS => '#');
           OutNameLength : NameRange;

           OutData : File_Type;

           --Array
           TYPE Chain_Array IS ARRAY(1..500) OF Integer;

           Sum : Integer := 1;

        BEGIN

           --Inputs
           Ada.Text_IO.Put(Item => "Enter a starting Integer: ");
           Ada.Integer_Text_IO.Get(Item => Start_Int);
           Ada.Text_IO.New_Line;

           Ada.Text_IO.Put(Item => "Enter a stopping Integer: ");
           Ada.Integer_Text_IO.Get(Item => Stop_Int);
           Ada.Text_IO.New_Line;

           Ada.Text_IO.Put(Item => "Enter a Maximum Length to search: ");
           Ada.Integer_Text_IO.Get(Item => Max_Length);
           Ada.Text_IO.New_Line;

           Ada.Text_IO.Put(Item => "Enter a output file name > ");
           Ada.Text_IO.Get_Line( Item => OutFileName, Last => OutNameLength);

           Ada.Text_IO.Create( File => OutData,
                               Mode => Ada.Text_IO.Out_File,
                               Name => OutFileName(1..OutNameLength));
           Ada.Text_IO.New_Line;

    Read the article

  • hand coding a parser

    - by John Leidegren
    For all you compiler gurus, I wanna write a recursive descent parser and I wanna do it with just code. No generating lexers and parsers from some other grammar, and don't tell me to read the dragon book; I'll come around to that eventually.

    I wanna get into the gritty details about implementing a lexer and parser for a reasonably simple language, say CSS. And I wanna do this right. This will probably end up being a series of questions, but right now I'm starting with a lexer. Tokenization rules for CSS can be found here.

    I find myself writing code like this (hopefully you can infer the rest from this snippet):

        public CssToken ReadNext()
        {
            int val;
            while ((val = _reader.Read()) != -1)
            {
                var c = (char)val;
                switch (_stack.Top)
                {
                    case ParserState.Init:
                        if (c == ' ')
                        {
                            continue; // ignore
                        }
                        else if (c == '.')
                        {
                            _stack.Transition(ParserState.SubIdent, ParserState.Init);
                        }
                        break;

                    case ParserState.SubIdent:
                        if (c == '-')
                        {
                            _token.Append(c);
                        }
                        _stack.Transition(ParserState.SubNMBegin);
                        break;

    What is this called? And how far off am I from something reasonably well understood? I'm trying to balance something which is fair in terms of efficiency and easy to work with; using a stack to implement some kind of state machine is working quite well, but I'm unsure how to continue like this.

    What I have is an input stream, from which I can read 1 character at a time. I don't do any lookahead right now; I just read the character and then, depending on the current state, try to do something with it.

    I'd really like to get into the mindset of writing reusable snippets of code. This Transition method is currently the means to do that: it will pop the current state off the stack and then push the arguments in reverse order. That way, when I write Transition(ParserState.SubIdent, ParserState.Init), it will "call" a subroutine SubIdent which will, when complete, return to the Init state.

    The parser will be implemented in much the same way. Currently, having everything in a single big method like this allows me to easily return a token when I find one, but it also forces me to keep everything in one single big method. Is there a nice way to split these tokenization rules into separate methods?

    Any input/advice on the matter would be greatly appreciated!
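    What the snippet implements is usually called a hand-written state-machine lexer; the stack of return states makes it a pushdown automaton, and the "recursive descent" formulation of the same idea replaces the explicit stack with one function per state, using the call stack instead. A compact sketch of that function-per-state style in Python (the token rules here are illustrative, far simpler than real CSS):

        def lex(text):
            """One function per state; each returns (lexeme, next_position)."""
            tokens, i = [], 0
            while i < len(text):
                c = text[i]
                if c.isspace():
                    i += 1                            # Init state: skip whitespace
                elif c == '.':
                    name, i = lex_ident(text, i + 1)  # "call" the SubIdent state
                    tokens.append(('CLASS', name))
                else:
                    raise SyntaxError("unexpected %r at %d" % (c, i))
            return tokens

        def lex_ident(text, i):
            start = i
            while i < len(text) and (text[i].isalnum() or text[i] in '-_'):
                i += 1
            return text[start:i], i                   # "return" to the caller state

        print(lex(".foo .bar-baz"))  # [('CLASS', 'foo'), ('CLASS', 'bar-baz')]

    Splitting the switch into such per-state methods gives exactly the "reusable snippets" structure: each method owns one state's rules, and the call/return discipline replaces the Transition stack.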

    Read the article
