Search Results

Search found 22641 results on 906 pages for 'use case'.


  • Loading .png image from array of uint8_t into OpenGL ES texture

    - by unknownthreat
    Normally, when we want to load a texture for OpenGL ES with .png, we simply add the .png images into XCode. The .png files will be altered for optimization by XCode and these altered png files can loaded into OpenGL ES texture during the runtime. However, what I am trying to do is quite different. I am trying to load a .png file that is not from the prebuilt/compile. The png file will be transmitted externally from UDP, and it will be in the form of array of bytes. I am very sure that the png is transferred correctly, but when it comes to displaying the png image in the form of the OpenGL ES texture, the image somehow shows incorrectly. The colors that are being sent are presented but the positions are somehow very incorrect. However, the position of the colors still retain some aspects of the original position. Here: The left image shows the original .png, while the right shows the png being displayed on iPhone using OpenGL ES Texture. It looks more like the png data is not being decoded or incorrectly processed. Below is OpenGL ES code for turning the image into texture: - (void) setTextureFromImageByte: (uint8_t*)imageByte{ if (self = [super init]){ NSData* imageData = [[NSData alloc] initWithBytes: imageByte length: imageLength]; UIImage* img = [[UIImage alloc] initWithData: imageData]; CGImageRef image = img.CGImage; int width = 512; int height = 512; if (image){ int tempWidth = (int)width, tempHeight = (int)height; if ((tempWidth & (tempWidth - 1)) != 0 ){ NSLog(@"CAUTION! width is not power of 2. width == %d", tempWidth); }else if ((tempHeight & (tempHeight - 1)) != 0 ){ NSLog(@"CAUTION! height is not power of 2. height == %d", tempHeight); }else{ void *spriteData = calloc(width * 4, height * 4); CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(image), kCGImageAlphaPremultipliedLast); CGContextDrawImage(spriteContext, CGRectMake(0.0, 0.0, width, height), image); CGContextRelease(spriteContext); glBindTexture(GL_TEXTURE_2D, 1); glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 435, GL_RGBA, GL_UNSIGNED_BYTE, spriteData); free(spriteData); } }else NSLog(@"ERROR: Image not loaded..."); [img release]; [imageData release]; } } So does anyone knows how to deal with this? Is it because of iPhone only accepts altered png from XCode? What can we do in this case in order to make the png image be able to display correctly?
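
    One thing worth checking in the snippet above: the texture storage is never allocated. glTexSubImage2D only updates an existing texture, but no glTexImage2D call ever defined one, and the 320x435 sub-rectangle does not match the 512x512 row stride of the bitmap context, which by itself produces exactly this kind of "right colors, wrong positions" output. (The calloc(width * 4, height * 4) also over-allocates by a factor of 16; width * height * 4 bytes is enough.) UIImage has no trouble decoding ordinary, non-Xcode-processed PNG data, so the UDP transport is probably not the culprit. A minimal sketch of the upload step, reusing the question's width, height and spriteData:

        GLuint textureName;                          // let GL choose the name instead of hard-coding 1
        glGenTextures(1, &textureName);
        glBindTexture(GL_TEXTURE_2D, textureName);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Allocate and fill the full 512x512 texture in one step. The dimensions here must
        // match the ones used to create the CGBitmapContext, because the row stride of
        // spriteData is width * 4 bytes.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, spriteData);

        // If only a 320x435 region should be shown, crop at draw time (texture coordinates
        // or GL_OES_draw_texture) rather than uploading a mismatched rectangle.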


  • SVN directories not showing up in localhost when using WAMP

    - by JsusSalv
    Hi: I recently installed WAMP for actual local use. I've worked on live development servers but now am working on localhost. I've managed to get multiple virtual hosts setup on my WAMP/Vista 64-bit box but am having difficulty with directories pulled from SVN. I have four vhosts setup. Two work well and they are not tied to any SVN just yet. I'm also using TortoiseSVN in case it makes any difference. However, the other projects are coming from SVN repositories. When I view these two projects I get the following error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, admin@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. The way I setup the vhosts is as follows: httpd.conf # Multiple Virtual Hosts <VirtualHost 127.0.0.1> ServerName localhost DocumentRoot "C:/wamp/www/" </VirtualHost> <VirtualHost 127.0.1.0> ServerName testone.local DocumentRoot "C:/wamp/www/root/projectone/" </VirtualHost> <VirtualHost 127.0.2.0> ServerName testtwo.local DocumentRoot "C:/wamp/www/root/projecttwo/" </VirtualHost> <VirtualHost 127.0.3.0> ServerName testthree.local DocumentRoot "C:/wamp/www/root/projectthree/" </VirtualHost> <VirtualHost 127.0.3.1> ServerName testfour.local DocumentRoot "C:/wamp/www/root/projectfour/" </VirtualHost> And here's the 'hosts' file: # Localhost 127.0.0.1 localhost ::1 localhost # Project One 127.0.1.0 testone.local # Project Two 127.0.2.0 testtwo.local # Project Three 127.0.3.0 testthree.local # Project Four 127.0.3.1 testfour.local Everything works just fine. So if you want to tell me I'm doing something wrong then by all means point out a few things. But as it stands, it works and I'm content using different IPs and/or named-based vhosts. The problem comes in not being able to see the directories and files in the projects that are tied to an SVN. Whenever I visit http://testxxxx.local I get the error message at the top of this post. Please provide some suggestions. Thank you!
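
    Apache's error.log usually names the exact cause of a 500 like this; with freshly checked-out projects it is often an .htaccess in the working copy that references modules WAMP has not enabled, or a missing <Directory> grant for the new DocumentRoot. A sketch of one vhost with an explicit Directory block (Apache 2.2 syntax, paths taken from the question; AllowOverride None temporarily rules out a broken .htaccess from the checkout):

        <VirtualHost 127.0.1.0>
            ServerName testone.local
            DocumentRoot "C:/wamp/www/root/projectone/"
            <Directory "C:/wamp/www/root/projectone/">
                Options Indexes FollowSymLinks
                # Switch back to AllowOverride All once the checkout's .htaccess is known to be good
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>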


  • Intermittent fillMode=kCAFillModeForwards bug using CAKeyframeAnimation with path

    - by Mark24x7
    I'm having an intermittent problem when I move a UIImageView around the screen using CAKeyframeAnimation. I want the position of the UIImageView to remain where the animation ends when it is done. This bug only happens for certain start and end points. When I use random points it works correctly most of the time, but about 5-15% of the time it fails and snaps back to the pre-animation position. The problem only appears when using CAKeyframeAnimation using the path property. If I use the values property the bug does not appear. I am setting removedOnCompletion = NO, and fillMode = kCAFillModeForwards. I have posted a link to a test Xcode below. Here is my code for setting up the animation. I have a property usePath. When this is YES, the bug appears. When I set usePath to NO, the snap back bug does not happen. In this case I am using a path that is a simple line, but once I resolve this bug with a simple path, I will use a more complex path with curves in it. // create the point CAKeyframeAnimation *moveAnimation = [CAKeyframeAnimation animationWithKeyPath:@"position"]; if (self.usePath) { CGMutablePathRef path = CGPathCreateMutable(); CGPathMoveToPoint(path, NULL, startPt.x, startPt.y); CGPathAddLineToPoint(path, NULL, endPt.x, endPt.y); moveAnimation.path = path; CGPathRelease(path); } else { moveAnimation.values = [NSArray arrayWithObjects: [NSValue valueWithCGPoint:startPt], [NSValue valueWithCGPoint:endPt], nil]; } moveAnimation.calculationMode = kCAAnimationPaced; moveAnimation.duration = 0.5f; moveAnimation.removedOnCompletion = NO; // leaves presentation layer in final state; preventing snap-back to original state moveAnimation.fillMode = kCAFillModeForwards; moveAnimation.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseOut]; // moveAnimation.delegate = self; // start the animation [ball.layer addAnimation:moveAnimation forKey:@"moveAnimation"]; To dl and view my test project goto test project (http://www.24x7digital.com/downloads/PathFillModeBug.zip) Tap the 'Move Ball' button to start the animation of the ball. I have hard coded a start and end point which causes the bug to happen every time. Use the switch to change usePath to YES or NO. When usePath is YES, you will see the snap back bug. When usePath is NO, you will not see the snap back bug. I'm using SDK 3.1.3, but I have seen this bug using SDK 3.0 as well, and I have seen the bug on the Sim and on my iPhone. Any idea on how to fix this or if I am doing something wrong are appreciated. Thanks, Mark.
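
    Instead of freezing the presentation layer with removedOnCompletion/fillMode, the more robust pattern is to set the model layer's position to the end point and let the animation play on top of it, so there is nothing to snap back to when the animation is removed. A sketch using the question's ball, endPt and moveAnimation:

        // Commit the final position to the model layer without an implicit animation...
        [CATransaction begin];
        [CATransaction setDisableActions:YES];
        ball.layer.position = endPt;
        [CATransaction commit];

        // ...then run the keyframe animation over it; no fill-mode tricks required.
        moveAnimation.removedOnCompletion = YES;
        moveAnimation.fillMode = kCAFillModeRemoved;
        [ball.layer addAnimation:moveAnimation forKey:@"moveAnimation"];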


  • Rails scalar query

    - by Craig
    I need to display a UI element (e.g. a star or checkmark) for employees that are 'favorites' of the current user (another employee). The Employee model has the following relationship defined to support this: has_and_belongs_to_many :favorites, :class_name => "Employee", :join_table => "favorites", :association_foreign_key => "favorite_id", :foreign_key => "employee_id" The favorites has two fields: employee_id, favorite_id. If I were to write SQL, the following query would give me the results that I want: SELECT id, account, IF( ( SELECT favorite_id FROM favorites WHERE favorite_id=p.id AND employee_id = ? ) IS NULL, FALSE, TRUE) isFavorite FROM employees Where the '?' would be replaced by the session[:user_id]. How do I represent the isFavorite scalar query in Rails? Another approach would use a query like this: SELECT id, account, IF(favorite_id IS NULL, FALSE, TRUE) isFavorite FROM employees e LEFT OUTER JOIN favorites f ON e.id=f.favorite_id AND employee_id = ? Again, the '?' is replaced by the session[:user_id] value. I've had some success writing this in Rails: ee=Employee.find(:all, :joins=>"LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=1", :select=>"employees.*,favorites.favorite_id") Unfortunately, when I try to make this query 'dynamic' by replacing the '1' with a '?', I get errors. ee=Employee.find(:all, :joins=>["LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?",1], :select=>"employees.*,favorites.favorite_id") Obviously, I have the syntax wrong, but can :joins expressions be 'dynamic'? Is this a case for a Lambda expression? I do hope to add other filters to this query and use it with will_paginate and acts_as_taggable_on, if that makes a difference. edit errors from trying to make :joins dynamic: ActiveRecord::ConfigurationError: Association named 'LEFT OUTER JOIN favorites ON employees.id=favorites.favorite_id AND favorites.employee_id=?' was not found; perhaps you misspelled it? from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1906:in `build' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1911:in `build' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `each' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1910:in `build' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/associations.rb:1830:in `initialize' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `new' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1789:in `add_joins!' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1686:in `construct_finder_sql' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:1548:in `find_every' from /Users/craibuc/.gem/ruby/1.8/gems/activerecord-2.3.5/lib/active_record/base.rb:615:in `find'
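
    In Rails 2.3 the :joins option accepts a SQL string or association names, not a conditions-style array, which is why the array form is interpreted as an association name and raises the ConfigurationError above. One workaround is to sanitize the fragment first and pass the resulting string; a sketch, assuming the current user's id is in session[:user_id]:

        user_id  = session[:user_id]
        join_sql = Employee.send(:sanitize_sql_array,
          ["LEFT OUTER JOIN favorites ON employees.id = favorites.favorite_id AND favorites.employee_id = ?",
           user_id])

        ee = Employee.find(:all,
               :joins  => join_sql,
               :select => "employees.*, favorites.favorite_id AS favorite_id")

        # in the view: an employee is a favorite when employee.favorite_id is not nil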


  • How to make efficient code emerge through unit testing

    - by Jean
    Hi, I participate in a TDD Coding Dojo, where we try to practice pure TDD on simple problems. It occured to me however that the code which emerges from the unit tests isn't the most efficient. Now this is fine most of the time, but what if the code usage grows so that efficiency becomes a problem. I love the way the code emerges from unit testing, but is it possible to make the efficiency property emerge through further tests ? Here is a trivial example in ruby: prime factorization. I followed a pure TDD approach making the tests pass one after the other validating my original acceptance test (commented at the bottom). What further steps could I take, if I wanted to make one of the generic prime factorization algorithms emerge ? To reduce the problem domain, let's say I want to get a quadratic sieve implementation ... Now in this precise case I know the "optimal algorithm, but in most cases, the client will simply add a requirement that the feature runs in less than "x" time for a given environment. require 'shoulda' require 'lib/prime' class MathTest < Test::Unit::TestCase context "The math module" do should "have a method to get primes" do assert Math.respond_to? 'primes' end end context "The primes method of Math" do should "return [] for 0" do assert_equal [], Math.primes(0) end should "return [1] for 1 " do assert_equal [1], Math.primes(1) end should "return [1,2] for 2" do assert_equal [1,2], Math.primes(2) end should "return [1,3] for 3" do assert_equal [1,3], Math.primes(3) end should "return [1,2] for 4" do assert_equal [1,2,2], Math.primes(4) end should "return [1,5] for 5" do assert_equal [1,5], Math.primes(5) end should "return [1,2,3] for 6" do assert_equal [1,2,3], Math.primes(6) end should "return [1,3] for 9" do assert_equal [1,3,3], Math.primes(9) end should "return [1,2,5] for 10" do assert_equal [1,2,5], Math.primes(10) end end # context "Functionnal Acceptance test 1" do # context "the prime factors of 14101980 are 1,2,2,3,5,61,3853"do # should "return [1,2,3,5,61,3853] for ${14101980*14101980}" do # assert_equal [1,2,2,3,5,61,3853], Math.primes(14101980*14101980) # end # end # end end and the naive algorithm I created by this approach module Math def self.primes(n) if n==0 return [] else primes=[1] for i in 2..n do if n%i==0 while(n%i==0) primes<<i n=n/i end end end primes end end end
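
    One way to let efficiency "emerge" the same way the behaviour did is to state the performance requirement as another acceptance test, so the naive algorithm fails it and drives the next design step. The 0.1-second budget below is an arbitrary illustration, not a real requirement; the sketch would sit inside the same test class:

        require 'benchmark'

        context "Performance acceptance test" do
          should "factor a large number within the agreed time budget" do
            elapsed = Benchmark.realtime { Math.primes(14_101_980 * 14_101_980) }
            assert elapsed < 0.1, "took #{elapsed}s, expected < 0.1s"
          end
        end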


  • Handling Exceptions for ThreadPoolExecutor

    - by HonorGod
    I have the following code snippet that basically scans through the list of task that needs to be executed and each task is then given to the executor for execution. The JobExecutor intern creates another executor (for doing db stuff...reading and writing data to queue) and completes the task. JobExecutor returns a Future for the tasks submitted. When one of the task fails, I want to gracefully interrupt all the threads and shutdown the executor by catching all the exceptions. What changes do I need to do? public class DataMovingClass { private static final AtomicInteger uniqueId = new AtomicInteger(0); private static final ThreadLocal<Integer> uniqueNumber = new IDGenerator(); ThreadPoolExecutor threadPoolExecutor = null ; private List<Source> sources = new ArrayList<Source>(); private static class IDGenerator extends ThreadLocal<Integer> { @Override public Integer get() { return uniqueId.incrementAndGet(); } } public void init(){ // load sources list } public boolean execute() { boolean succcess = true ; threadPoolExecutor = new ThreadPoolExecutor(10,10, 10, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1024), new ThreadFactory() { public Thread newThread(Runnable r) { Thread t = new Thread(r); t.setName("DataMigration-" + uniqueNumber.get()); return t; }// End method }, new ThreadPoolExecutor.CallerRunsPolicy()); List<Future<Boolean>> result = new ArrayList<Future<Boolean>>(); for (Source source : sources) { result.add(threadPoolExecutor.submit(new JobExecutor(source))); } for (Future<Boolean> jobDone : result) { try { if (!jobDone.get(100000, TimeUnit.SECONDS) && success) { // in case of successful DbWriterClass, we don't need to change // it. success = false; } } catch (Exception ex) { // handle exceptions } } } public class JobExecutor implements Callable<Boolean> { private ThreadPoolExecutor threadPoolExecutor ; Source jobSource ; public SourceJobExecutor(Source source) { this.jobSource = source; threadPoolExecutor = new ThreadPoolExecutor(10,10,10, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1024), new ThreadFactory() { public Thread newThread(Runnable r) { Thread t = new Thread(r); t.setName("Job Executor-" + uniqueNumber.get()); return t; }// End method }, new ThreadPoolExecutor.CallerRunsPolicy()); } public Boolean call() throws Exception { boolean status = true ; System.out.println("Starting Job = " + jobSource.getName()); try { // do the specified task ; }catch (InterruptedException intrEx) { logger.warn("InterruptedException", intrEx); status = false ; } catch(Exception e) { logger.fatal("Exception occurred while executing task "+jobSource.getName(),e); status = false ; } System.out.println("Ending Job = " + jobSource.getName()); return status ; } } }
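
    One approach to "stop everything on the first failure" is to fail fast when a Future reports an error or a false result, cancel the remaining futures with interruption, and shut the pool down; an ExecutorCompletionService would additionally let failures be noticed as they happen rather than in submission order. A sketch against the question's result list and threadPoolExecutor:

        boolean success = true;
        try {
            for (Future<Boolean> jobDone : result) {
                if (!jobDone.get()) {        // blocks; throws ExecutionException if the task threw
                    success = false;
                    break;
                }
            }
        } catch (Exception ex) {             // ExecutionException, InterruptedException, ...
            success = false;
        } finally {
            if (!success) {
                for (Future<Boolean> f : result) {
                    f.cancel(true);                  // interrupt tasks that are still running
                }
                threadPoolExecutor.shutdownNow();    // interrupt workers and drop queued tasks
            } else {
                threadPoolExecutor.shutdown();
            }
        }

    The pool created inside each JobExecutor needs the same shutdown treatment, otherwise its worker threads keep running after a failure.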


  • SSIS DTS Package flat file error - "The file name specified in the connection was not valid"

    - by MisterZimbu
    I have a pretty basic SSIS package that is attempting to read a file hosted on a share, and import its contents to a database table. The package runs fine when I run it manually within SSIS. However, when I set up a SQL Agent job and attempt to execute it, I get the following error: Executed as user: DOMAIN\UserName. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 64-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 10:14:17 AM Error: 2010-05-03 10:14:17.75 Code: 0xC001401E Source: DataImport Connection manager "Data File Local" Description: The file name "\10.1.1.159\llpf\datafile.dat" specified in the connection was not valid. End Error Error: 2010-05-03 10:14:17.75 Code: 0xC001401D Source: DataAnimalImport Description: Connection "Data File Local" failed validation. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 10:14:17 AM Finished: 10:14:17 AM Elapsed: 0.594 seconds. The package execution failed. The step failed. This leads me to believe it's a permissions issue, but every attempt I've made to fix it has failed. What I've tried so far: Run as the SQL Agent account (DOMAIN\SqlAgent) - yields same error. DOMAIN\SqlAgent has "Full Control" permissions on both the share and the uploaded file. Set up a proxy account with a different account's credentials (DOMAIN\Account) - yields same error. Like above, "Full Control" permissions were given over the share to that account. Gave "Everyone" full control permissions over the share (temporarily!). Yielded same error. Manually copied the file to a local path and tested with the SQL Agent account. Worked properly. Added an ActiveX script task that would first copy the remotely hosted file to a local path, and then have the DTS package reference the local file. Gave a completely nondescriptive (even by SSIS standards) error when trying to run the script. Set up a proxy account, using my own personal account's credentials - worked correctly. However, this is not an acceptable solution as there are password policies in place on my account, as well as being a bad practice to set things up this way in general. Any ideas? I'm still convinced it's a permissions issue. However, what I've read from various searches more or less says giving the executing account permissions on the share should work. However, this is not the case here (unless I'm missing something obscure when I'm setting up permissions on the share).


  • Unique ID Defined by Most-Derived Class accessible through Base Class

    - by Narfanator
    Okay, so, the idea is that I have a map of "components", which inherit from componentBase, and are keyed on an ID unique to the most-derived*. Only, I can't think of a good way to get this to work. I tried it with the constructor, but that doesn't work (Maybe I did it wrong). The problem with any virtual, etc, inheritance tricks are that the user has to impliment them at the bottom, which can be forgotten and makes it less... clean. *Right phrase? If - is inheritance; foo is most-derived: foo-foo1-foo2-componentBase Here's some code showing the problem, and why CRTP can't cut it: (No, it's not legit code, but I'm trying to get my thoughts down) #include<map> class componentBase { public: virtual static char idFunction() = 0; }; template <class T> class component : public virtual componentBase { public: static char idFunction(){ return reinterpret_cast<char>(&idFunction); } }; class intermediateDerivations1 : public virtual component<intermediateDerivations1> { }; class intermediateDerivations2 : public virtual component<intermediateDerivations2> { }; class derived1 : public intermediateDerivations1 { }; class derived2 : public intermediateDerivations1 { }; //How the unique ID gets used (more or less) std::map<char, componentBase*> TheMap; template<class T> void addToMap(componentBase * c) { TheMap[T::idFunction()] = c; } template<class T> T * getFromMap() { return TheMap[T::idFunction()]; } int main() { //In each case, the key needs to be different. //For these, the CRTP should do it: getFromMap<intermediateDerivations1>(); getFromMap<intermediateDerivations2>(); //But not for these. getFromMap<derived1>(); getFromMap<derived2>(); return 0; } More or less, I need something that is always there, no matter what the user does, and has a sortable value that's unique to the most-derived class. Also, I realize this isn't the best-asked question, I'm actually having some unexpected difficultly wrapping my head around it in words, so ask questions if/when you need clarification.
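
    Since the language already gives every dynamic type a distinct identity, one option is to drop idFunction() entirely and key the map on std::type_index of the most-derived object; nothing has to be implemented or remembered at the bottom of the hierarchy. A sketch (C++11, reusing the question's componentBase/TheMap names):

        #include <map>
        #include <typeindex>

        class componentBase {
        public:
            virtual ~componentBase() {}     // polymorphic, so typeid(*c) yields the most-derived type
        };

        std::map<std::type_index, componentBase*> TheMap;

        void addToMap(componentBase* c)
        {
            TheMap[std::type_index(typeid(*c))] = c;    // keyed on the dynamic type of *c
        }

        template <class T>
        T* getFromMap()
        {
            auto it = TheMap.find(std::type_index(typeid(T)));
            return it == TheMap.end() ? nullptr : static_cast<T*>(it->second);
        }

    Before C++11, std::type_info::before() can supply the same ordering for a map comparator.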


  • Django : presenting a form very different from the model and with multiple field values in a Django-

    - by sebpiq
    Hi ! I'm currently doing a firewall management application for Django, here's the (simplified) model : class Port(models.Model): number = models.PositiveIntegerField(primary_key=True) application = models.CharField(max_length=16, blank=True) class Rule(models.Model): port = models.ForeignKey(Port) ip_source = models.IPAddressField() ip_mask = models.IntegerField(validators=[MaxValueValidator(32)]) machine = models.ForeignKey("vmm.machine") What I would like to do, however, is to display to the user a form for entering rules, but with a very different organization than the model : Port 80 O Not open O Everywhere O Specific addresses : --------- delete field --------- delete field + add address field Port 443 ... etc Where Not open means that there is no rule for the given port, Everywhere means that there is only ONE rule (0.0.0.0/0) for the given port, and with specific addresses, you can add as many addresses as you want (I did this with JQuery), which will make as many rules. Now I did a version completely "handmade", meaning that I create the forms entirely in my templates, set input names with a prefix, and parse all the POSTed stuff in my view (which is quite painful, and means that there's no point in using a web framework). I also have a class which aggregates the rules together to easily pre-fill the forms with the informations "not open, everywhere, ...". I'm passing a list of those to the template, therefore it acts as an interface between my model and my "handmade" form : class MachinePort(object): def __init__(self, machine, port): self.machine = machine self.port = port @property def fully_open(self): for rule in self.port.rule_set.filter(machine=self.machine): if ipaddr.IPv4Network("%s/%s" % (rule.ip_source, rule.ip_mask)) == ipaddr.IPv4Network("0.0.0.0/0"): return True else : return False @property def partly_open(self): return bool(self.port.rule_set.filter(machine=self.machine)) and not self.fully_open @property def not_open(self): return not self.partly_open and not self.fully_open But all this is rather ugly ! Do anyone of you know if there is a classy way to do this ? In particular with the form... I don't know how to have a form that can have an undefined number of fields, neither how to transform these fields into Rule objects (because all the rule fields would have to be gathered from the form), neither how to save multiple objects... Well I could try to hack into the Form class, but seems like too much work for such a special case. Is there any nice feature I'm missing ?
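
    Django's formsets cover the "undefined number of fields" part: a plain (non-model) form describes one address row, formset_factory adds the add/delete behaviour, and the view translates the cleaned data into Rule objects by hand. A sketch with assumed class names (PortOpennessForm, AddressForm); the field names come from the question's models:

        from django import forms
        from django.forms.formsets import formset_factory

        OPENNESS_CHOICES = (
            ('closed', 'Not open'),
            ('everywhere', 'Everywhere'),
            ('specific', 'Specific addresses'),
        )

        class PortOpennessForm(forms.Form):
            openness = forms.ChoiceField(choices=OPENNESS_CHOICES, widget=forms.RadioSelect)

        class AddressForm(forms.Form):
            ip_source = forms.IPAddressField()
            ip_mask = forms.IntegerField(min_value=0, max_value=32)

        AddressFormSet = formset_factory(AddressForm, extra=1, can_delete=True)

        # In the view, roughly:
        #   formset = AddressFormSet(request.POST, prefix='port80')
        #   if formset.is_valid():
        #       port.rule_set.filter(machine=machine).delete()
        #       for row in formset.cleaned_data:
        #           if row and not row.get('DELETE'):
        #               Rule.objects.create(port=port, machine=machine,
        #                                   ip_source=row['ip_source'], ip_mask=row['ip_mask'])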


  • Why is numpy's einsum faster than numpy's built in functions?

    - by Ophion
    Lets start with three arrays of dtype=np.double. Timings are performed on a intel CPU using numpy 1.7.1 compiled with icc and linked to intel's mkl. A AMD cpu with numpy 1.6.1 compiled with gcc without mkl was also used to verify the timings. Please note the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions if statements these difference will show up in microseconds not milliseconds: arr_1D=np.arange(500,dtype=np.double) large_arr_1D=np.arange(100000,dtype=np.double) arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500) arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500) First lets look at the np.sum function: np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D)) True %timeit np.sum(arr_3D) 10 loops, best of 3: 142 ms per loop %timeit np.einsum('ijk->', arr_3D) 10 loops, best of 3: 70.2 ms per loop Powers: np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D)) True %timeit arr_3D*arr_3D*arr_3D 1 loops, best of 3: 1.32 s per loop %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D) 1 loops, best of 3: 694 ms per loop Outer product: np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D)) True %timeit np.outer(arr_1D, arr_1D) 1000 loops, best of 3: 411 us per loop %timeit np.einsum('i,k->ik', arr_1D, arr_1D) 1000 loops, best of 3: 245 us per loop All of the above are twice as fast with np.einsum. These should be apples to apples comparisons as everything is specifically of dtype=np.double. I would expect the speed up in an operation like this: np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D)) True %timeit np.sum(arr_2D*arr_3D) 1 loops, best of 3: 813 ms per loop %timeit np.einsum('ij,oij->', arr_2D, arr_3D) 10 loops, best of 3: 85.1 ms per loop Einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception being np.dot as it calls DGEMM from a BLAS library. So why is np.einsum faster that other numpy functions that are equivalent? The DGEMM case for completeness: np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D)) True %timeit np.einsum('ij,jk',arr_2D,arr_2D) 10 loops, best of 3: 56.1 ms per loop %timeit np.dot(arr_2D,arr_2D) 100 loops, best of 3: 5.17 ms per loop The leading theory is from @sebergs comment that np.einsum can make use of SSE2, but numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of input array and observing speed difference and the fact that not everyone observes the same trends in timings.


  • How do gitignore exclusion rules actually work?

    - by meowsqueak
    I'm trying to solve a gitignore problem on a large directory structure, but to simplify my question I have reduced it to the following. I have the following directory structure of two files (foo, bar) in a brand new git repository (no commits so far): a/b/c/foo a/b/c/bar Obviously, a 'git status -u' shows: # Untracked files: ... # a/b/c/bar # a/b/c/foo What I want to do is create a .gitignore file that ignores everything inside a/b/c but does not ignore the file 'foo'. If I create a .gitignore thus: c/ Then a 'git status -u' shows both foo and bar as ignored: # Untracked files: ... # .gitignore Which is as I expect. Now if I add an exclusion rule for foo, thus: c/ !foo According to the gitignore manpage, I'd expect this to to work. But it doesn't - it still ignores foo: # Untracked files: ... # .gitignore This doesn't work either: c/ !a/b/c/foo Neither does this: c/* !foo Gives: # Untracked files: ... # .gitignore # a/b/c/bar # a/b/c/foo In that case, although foo is no longer ignored, bar is also not ignored. The order of the rules in .gitignore doesn't seem to matter either. This also doesn't do what I'd expect: a/b/c/ !a/b/c/foo That one ignores both foo and bar. One situation that does work is if I create the file a/b/c/.gitignore and put in there: * !foo But the problem with this is that eventually there will be other subdirectories under a/b/c and I don't want to have to put a separate .gitignore into every single one - I was hoping to create 'project-based' .gitignore files that can sit in the top directory of each project, and cover all the 'standard' subdirectory structure. This also seems to be equivalent: a/b/c/* !a/b/c/foo This might be the closest thing to "working" that I can achieve, but the full relative paths and explicit exceptions need to be stated, which is going to be a pain if I have a lot of files of name 'foo' in different levels of the subdirectory tree. Anyway, either I don't quite understand how exclusion rules work, or they don't work at all when directories (rather than wildcards) are ignored - by a rule ending in a / Can anyone please shed some light on this? Is there a way to make gitignore use something sensible like regular expressions instead of this clumsy shell-based syntax? I'm using and observe this with git-1.6.6.1 on Cygwin/bash3 and git-1.7.1 on Ubuntu/bash3.
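
    The behaviour described is by design: git never re-includes a file when one of its parent directories is excluded, so a rule ending in c/ can never be punched through with !foo. Only "ignore the directory's contents" patterns (ending in /*) leave the directory itself un-ignored, which is why the a/b/c/* form is the one that works. A top-level .gitignore sketch using the question's paths:

        # Ignore everything inside a/b/c, but not the directory itself,
        # so that the exception below can still take effect.
        a/b/c/*
        !a/b/c/foo

    Each additional directory level needs its own pair (for example a/b/*/* with !a/b/*/foo); the ** wildcard that would generalise this arrived in git releases later than the 1.6/1.7 versions mentioned in the question.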


  • Understanding PTS and DTS in video frames

    - by theateist
    I had fps issues when transcoding from avi to mp4(x264). Eventually the problem was in PTS and DTS values, so lines 12-15 where added before av_interleaved_write_frame function: 1. AVFormatContext* outContainer = NULL; 2. avformat_alloc_output_context2(&outContainer, NULL, "mp4", "c:\\test.mp4"; 3. AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264); 4. AVStream *outStream = avformat_new_stream(outContainer, encoder); 5. // outStream->codec initiation 6. // ... 7. avformat_write_header(outContainer, NULL); 8. // reading and decoding packet 9. // ... 10. avcodec_encode_video2(outStream->codec, &encodedPacket, decodedFrame, &got_frame) 11. 12. if (encodedPacket.pts != AV_NOPTS_VALUE) 13. encodedPacket.pts = av_rescale_q(encodedPacket.pts, outStream->codec->time_base, outStream->time_base); 14. if (encodedPacket.dts != AV_NOPTS_VALUE) 15. encodedPacket.dts = av_rescale_q(encodedPacket.dts, outStream->codec->time_base, outStream->time_base); 16. 17. av_interleaved_write_frame(outContainer, &encodedPacket) After reading many posts I still do not understand: outStream->codec->time_base = 1/25 and outStream->time_base = 1/12800. The 1st one was set by me but I cannot figure out why and who set 12800? I noticed that before line (7) outStream->time_base = 1/90000 and right after it it changes to 1/12800, why? When I transcode from avi to avi, meaning changing the line (2) to avformat_alloc_output_context2(&outContainer, NULL, "avi", "c:\\test.avi"; , so before and after line (7) outStream->time_base remains always 1/25 and not like in mp4 case, why? What is the difference between time_base of outStream->codec and outStream? To calc the pts av_rescale_q does: takes 2 time_base, multiplies their fractions in cross and then compute the pts. Why it does this in this way? As I debugged, the encodedPacket.pts has value incremental by 1, so why changing it if it does has value? At the beginning the dts value is -2 and after each rescaling it still has negative number, but despite this the video played correctly! Shouldn't it be positive?


  • Problem getting Java Streams in HP Tandem (Non-Stop)

    - by AndreaG
    Hi. We are porting a simple Java application between Tandem NonStop systems, from G-Series to H-Series. Java version is 1.5.0_02. When performing basic I/O tasks like getting output stream from or opening a client socket, we receive exceptions like java.io.IOException: Value out of range or java.net.SocketException: Value out of range ("value out of range" is Tandem native jargon for, well, quite everything I suppose). Has anybody got similar issues? i.e. I/O corruption while for example messing with JNI? I suppose there is something wrong with the system, but where might it be? Thank you. EDIT: adding snippets as requested sample snippet (a) - using Runtime.exec () (adapted) Properties envVars = new Properties(); Process p = r.exec("/bin/env"); envVars.load(p.getInputStream()); Stack trace (a): java.io.IOException: Value out of range (errno:4034) at java.io.FileInputStream.readBytes(Native Method) at java.io.FileInputStream.read(FileInputStream.java:194) at java.lang.UNIXProcess$DeferredCloseInputStream.read(UNIXProcess.java:221) at java.io.BufferedInputStream.read1(BufferedInputStream.java:254) at java.io.BufferedInputStream.read(BufferedInputStream.java:313) at sun.nio.cs.StreamDecoder$CharsetSD.readBytes(StreamDecoder.java:411) at sun.nio.cs.StreamDecoder$CharsetSD.implRead(StreamDecoder.java:453) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:183) at java.io.InputStreamReader.read(InputStreamReader.java:167) at java.io.BufferedReader.fill(BufferedReader.java:136) at java.io.BufferedReader.readLine(BufferedReader.java:299) at java.io.BufferedReader.readLine(BufferedReader.java:362) at util.Environment.getVariables(Environment.java:39) Last line fails, and output gets redirected to console (!). sample snippet (b) - using HttpURLConnection: public WorkerThread (HttpURLConnection conn, String requestData, Logger logger) { this.conn = conn; ... } public void run () { OutputStream out = conn.getOutputStream (); } Stack trace (b): java.net.SocketException: Value out of range (errno:4034) at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333) at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182) at java.net.Socket.connect(Socket.java:507) at sun.net.NetworkClient.doConnect(NetworkClient.java:155) at sun.net.www.http.HttpClient.openServer(HttpClient.java:365) at sun.net.www.http.HttpClient.openServer(HttpClient.java:477) at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:280) at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:337) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:176) at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:736) at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:162) at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:828) at sun.net.www.protocol.https.HttpsURLConnectionImpl.getOutputStream(HttpsURLConnectionImpl.java:230) Case (a) can be avoided because it was a workaround for other issues with previous JRE version (!), but same behaviour with sockets is really nasty.


  • How to remove lowercase sentence fragments from text?

    - by Aaron
    Hello: I'm trying to remove lowercase sentence fragments from standard text files using regular expressions or a simple Perl one-liner. These are commonly referred to as speech or attribution tags, for example - he said, she said, etc. This example shows before and after using manual deletion: Original: "Ah, that's perfectly true!" exclaimed Alyosha. "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!" cried the girl by the window, suddenly turning to her father with a disdainful and contemptuous air. "Wait a little, Varvara!" cried her father, speaking peremptorily but looking at them quite approvingly. "That's her character," he said, addressing Alyosha again. "Where have you been?" he asked him. "I think," he said, "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him. "You sit down, too," said he. All lowercase sentence fragments manually removed: "Ah, that's perfectly true!" "Oh, do leave off playing the fool! Some idiot comes in, and you put us to shame!" "Wait a little, Varvara!" "That's her character," "Where have you been?" "I think," "I've forgotten something... my handkerchief, I think.... Well, even if I've not forgotten anything, let me stay a little." He sat down. Father stood over him. "You sit down, too," I've changed straight quotes " to balanced and tried: ” (...)+[.] Of course, this removes some fragments but deletes some text in balanced quotes and text starting with uppercase letters. [^A-Z] didn't work in the above expression. I realize that it may be impossible to achieve 100% accuracy, but any useful expression, Perl, or Python script would be deeply appreciated. Cheers, Aaron
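
    A sketch in Python of one reading of the rule: after a closing curly quote, delete any run of text that begins with a lowercase letter and ends either at the next sentence-final punctuation mark or just before the next opening quote, keeping everything else. It handles the sample above, but it is only a starting point and real prose will have edge cases it misses.

        import re

        # After a closing ”, drop a fragment that starts with a lowercase letter and runs
        # either to the next . ! ? (inclusive) or up to the next opening “.
        pattern = re.compile(r'(?<=”)\s+[a-z][^“”.!?]*(?:[.!?]\s*|(?=“))')

        def strip_attributions(text):
            return pattern.sub(' ', text)

        # 'input.txt' is a placeholder filename
        with open('input.txt', encoding='utf-8') as f:
            print(strip_attributions(f.read()))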


  • WCF Self Host Service - Endpoints in C#

    - by Kyle
    My first few attempts at creating a self hosted service. Trying to make something up which will accept a query string and return some text but have have a few issues: All the documentation talks about endpoints being created automatically for each base address if they are not found in a config file. This doesn't seem to be the case for me, I get the "Service has zero application endpoints..." exception. Manually specifying a base endpoint as below seems to resolve this: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.ServiceModel; using System.ServiceModel.Description; namespace TestService { [ServiceContract] public interface IHelloWorldService { [OperationContract] string SayHello(string name); } public class HelloWorldService : IHelloWorldService { public string SayHello(string name) { return string.Format("Hello, {0}", name); } } class Program { static void Main(string[] args) { string baseaddr = "http://localhost:8080/HelloWorldService/"; Uri baseAddress = new Uri(baseaddr); // Create the ServiceHost. using (ServiceHost host = new ServiceHost(typeof(HelloWorldService), baseAddress)) { // Enable metadata publishing. ServiceMetadataBehavior smb = new ServiceMetadataBehavior(); smb.HttpGetEnabled = true; smb.MetadataExporter.PolicyVersion = PolicyVersion.Policy15; host.Description.Behaviors.Add(smb); host.AddServiceEndpoint(typeof(IHelloWorldService), new BasicHttpBinding(), baseaddr + "SayHello"); //for some reason a default endpoint does not get created here host.Open(); Console.WriteLine("The service is ready at {0}", baseAddress); Console.WriteLine("Press to stop the service."); Console.ReadLine(); // Close the ServiceHost. host.Close(); } } } } I still think I'm doing something wrong as I don't get the normal "This is a web service...etc..." page when I load up the url How would I go about setting this up to return the value of name in SayHello(string name) when requested thusly: localhost:8080/HelloWorldService/SayHello?name=kyle Do I have to create an endpoing for the SayHello contract as well? I'm trying to walk before running, but this just seems like crawling...Service has zero application endpoints...
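
    BasicHttpBinding speaks SOAP, which is why browsing to .../SayHello?name=kyle shows nothing useful; the query-string style belongs to WCF's web programming model: WebGet with a UriTemplate, WebHttpBinding, and WebServiceHost (which wires up the required behavior automatically). A sketch of the parts that change, assuming a reference to System.ServiceModel.Web:

        [ServiceContract]
        public interface IHelloWorldService
        {
            [OperationContract]
            [WebGet(UriTemplate = "SayHello?name={name}")]   // GET .../SayHello?name=kyle
            string SayHello(string name);
        }

        // Hosting:
        Uri baseAddress = new Uri("http://localhost:8080/HelloWorldService/");
        using (WebServiceHost host = new WebServiceHost(typeof(HelloWorldService), baseAddress))
        {
            host.AddServiceEndpoint(typeof(IHelloWorldService), new WebHttpBinding(), "");
            host.Open();
            Console.WriteLine("The service is ready at {0}", baseAddress);
            Console.ReadLine();
        }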


  • How to integrate Purify into Hudson CI?

    - by Martin
    Hello everybody! I have a Hudson CI system set up and for the moment it is used for building a project and running some unit tests. My next step is to integrate the memory leak detector Purify into the build cycle. Now I want to start the unit tests also inside purify and for this I have created a new batch task which runs following command: purify.exe /SaveTextData MyExecutable.exe --test TestLibrary.dll --output xml As I read in the Purify documentation the /SaveTextData option is used in order to run purify not in GUI mode. If I run this command on my local workstation in the command line it works perfectly. But in case it is started by Hudson, nothing happens. Unfortunetly there are no logs of purify... Has someone ever tried to start purify either by Hudson or any other CI system? Thanks in advance. Best regards Martin EDIT: I forgot to tell you, that I have Hudson running as master and slave on different computers. On the master I have configured a task which should start the unit tests within purify on the slave. I am running the slave via JNLP. EDIT 18.03.2010: Ok, so finally I am a bit closer the source of the problem. I have discovered, that running my unit tests in purify locally the log file EngineCmdLine.log contains three commands. I am starting purify with following command: purify.exe /SaveTextData TestRunnerConsoleWD.exe --test TestDemoWD.dll Output of EngineCmdLine.log when starting purify manually: File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestRunnerConsoleWD.exe File: C:\WINDOWS\system32\ws2_32.dll File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestDemoWD.dll Output when starting via Hudson: File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestRunnerConsoleWD.exe File: C:\WINDOWS\system32\ws2_32.dll File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestDemoWD.dll File: D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestDemoWD.dll The error output of purify: Instrumenting: BtcTestDemoWD.dll 313856 bytes Purify: While processing file D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TESTFWWD.DLL: Error: Cannot replace file c:\Programme\IBM\RationalPurifyPlus\PurifyPlus\cache\BTCTESTFWWD$Purify_D_workspace_hudson_workspace_Purify_TestFW_CommonsCoreTest_Cpp_msvs9.DLL. Is it in use? TESTFWWD.DLL 505344 bytes Unable to instrument D:\workspace\hudson\workspace\Purify_TestFW_CommonsCoreTest_Cpp_msvs9\TestDemoWD.dll (0x1) Question is, why is purify starting twice a command with the TestDemoWD.dll library?


  • Compiling OpenCV 2.4 on a 64-bit Mac in Xcode

    - by Walt
    I have an opencv project that I've been developing under ubuntu 12.04, on a parellels VM on a mac which has an x86_64 architecture. There have been many screen switching performance issues that I believe are due to the VM, where linux video modes flip around for a couple seconds while camera access is made by the opencv application. I decided to moved the project into Xcode on the mac side of the computer to continue the opencv development. However, I'm not that familiar with xcode and am having trouble getting the project to build correctly there. I have xcode installed. I downloaded and decompressed the latest version of opencv on the mac, and ran: ~/src/opencv/build/cmake-gui -G Xcode .. per the instructions from willowgarage and various other locations. This appeared to work fine (however I'm wondering now if I'm missing an architecture setting in here, although it is 64-bit intel in Xcode). I then setup an xcode project with the source files from the linux project and changed the include directories to use /opt/local/include/... rather than the /usr/local/include/... I switched xcode to use the LLVM GCC compiler in the build settings for the project then set the Apple LLVM Dialog for C++ to Language Dialect to GNU++11 (which seems possibly inconsistant with the line above) I'm not using a makefile in xcode, (that I'm aware of - it has its own project file...) I was also running into a linker issue that looked like they may be resolved with the addition of this linker flag: -lopencv_video based on a similar posting here: other thread however in that case the person was using a Makefile in their project. I've tried adding this linker flag under "Other Linker Flags" in xcode build settings but still get build errors. I think I may have two issues here, one with the architecture settings when building the opencv libraries with Cmake, and one with the linker flag settings in my project. Currently the build error list looks like this: Undefined symbols for architecture x86_64: "cv::_InputArray::_InputArray(cv::Mat const&)", referenced from: _main in main.o "cv::_OutputArray::_OutputArray(cv::Mat&)", referenced from: _main in main.o "cv::Mat::deallocate()", referenced from: cv::Mat::release() in main.o "cv::Mat::copySize(cv::Mat const&)", referenced from: cv::Mat::Mat(cv::Mat const&)in main.o cv::Mat::operator=(cv::Mat const&)in main.o "cv::Mat::Mat(_IplImage const*, bool)", referenced from: _main in main.o "cv::imread(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, int)", referenced from: _main in main.o ---SNIP--- ld: symbol(s) not found for architecture x86_64 collect2: ld returned 1 exit status Can anyone provide some guidance on what to try next? Thanks, Walt
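
    The undefined symbols are all core OpenCV functions, which usually means the OpenCV libraries are not on the link line at all (or were built for a different architecture than the target). Assuming the libraries ended up under /usr/local after the CMake build (adjust the path to wherever the Xcode-generated build actually put them), "Other Linker Flags" would carry something like:

        -L/usr/local/lib
        -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_video

    Running lipo -info on libopencv_core.dylib (or the static .a) shows which architectures it contains, so an x86_64 mismatch from the CMake step is easy to confirm.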


  • Drawing a texture with an alpha channel doesn't work -- draws black

    - by DevDevDev
    I am modifying GLPaint to use a different background, so in this case it is white. Anyway the existing stamp they are using assumes the background is black, so I made a new background with an alpha channel. When I draw on the canvas it is still black, what gives? When I actually draw, I just bind the texture and it works. Something is wrong in this initialization. Here is the photo - (id)initWithCoder:(NSCoder*)coder { CGImageRef brushImage; CGContextRef brushContext; GLubyte *brushData; size_t width, height; if (self = [super initWithCoder:coder]) { CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer; eaglLayer.opaque = YES; // In this application, we want to retain the EAGLDrawable contents after a call to presentRenderbuffer. eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil]; context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES1]; if (!context || ![EAGLContext setCurrentContext:context]) { [self release]; return nil; } // Create a texture from an image // First create a UIImage object from the data in a image file, and then extract the Core Graphics image brushImage = [UIImage imageNamed:@"test.png"].CGImage; // Get the width and height of the image width = CGImageGetWidth(brushImage); height = CGImageGetHeight(brushImage); // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image, // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2. // Make sure the image exists if(brushImage) { brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte)); brushContext = CGBitmapContextCreate(brushData, width, width, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast); CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage); CGContextRelease(brushContext); glGenTextures(1, &brushTexture); glBindTexture(GL_TEXTURE_2D, brushTexture); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData); free(brushData); } //Set up OpenGL states glMatrixMode(GL_PROJECTION); CGRect frame = self.bounds; glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1); glViewport(0, 0, frame.size.width, frame.size.height); glMatrixMode(GL_MODELVIEW); glDisable(GL_DITHER); glEnable(GL_TEXTURE_2D); glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA); glEnable(GL_POINT_SPRITE_OES); glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE); glPointSize(width / kBrushScale); } return self; }
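
    With kCGImageAlphaPremultipliedLast the bitmap's RGB values are already multiplied by alpha, so the usual blend function for it is GL_ONE / GL_ONE_MINUS_SRC_ALPHA; the GL_SRC_ALPHA / GL_ONE_MINUS_DST_ALPHA pair in the snippet is a likely reason the stamp comes out black on a white canvas. A two-line sketch of the change:

        glEnable(GL_BLEND);
        // premultiplied-alpha source blended over the destination
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);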


  • Zend Framework 2 authentication using DbTable failure

    - by josepmra
    I have followed the zend instructions for implement my web Authentication using a database table. It's exactly the same code, but when render the page, the following exceptions appears: Zend\Authentication\Adapter\Exception\RuntimeException File: C:\xampp\htdocs\pfc\vendor\ZF2\library\Zend\Authentication\Adapter\DbTable.php Mensaje: The supplied parameters to DbTable failed to produce a valid sql statement, please check table and column names for validity. produced by this other: Zend\Db\Adapter\Exception\InvalidQueryException File: C:\xampp\htdocs\pfc\vendor\ZF2\library\Zend\Db\Adapter\Driver\Mysqli\Statement.php Mensaje: Statement couldn't be produced with sql: SELECT `users`.*, (CASE WHEN `password` = ? THEN 1 ELSE 0 END) AS `zend_auth_credential_match` FROM `users` WHERE `mail` = ? Seems to be that Statement.php can not execute the sql of above, but I send the sql by phpmyadmin replacing the ? for strings and work ok. I am sure that $dbAdapter works ok also because I have tested it and the columns name are "mail" and "password". This in my code, also I put the $dbAdapter test code. $dbAdapter = new DbAdapter(array( //This DbAdapter Work ok sure!! 'driver' => 'Mysqli', 'database' => 'securedraw', 'username' => 'root', 'password' => '' )); $fp = function($name) use ($dbAdapter) { return $dbAdapter->driver->formatParameterName($name);}; $sql = 'SELECT * FROM ' . $qi('users') . ' WHERE id = ' . $fp('id'); $statement = $dbAdapter->query($sql); $parameters = array('id' => 1); $sqlResult = $statement->execute($parameters); $row = $sqlResult->current(); $mail = $row['mail']; $password = $row['password']; //until here test $dbAdapter exitly!! //Start the auth proccess!! $authAdapter = new AuthDbTableAdapter($dbAdapter); $authAdapter->setTableName('users') ->setIdentityColumn('mail') ->setCredentialColumn('password'); $authAdapter->setIdentity('josep') ->setCredential('josep'); $authResult = $authAdapter->authenticate(); //This is the fail method!!!


  • Efficiency of data structures in C99 (possibly affected by endianness)

    - by Ninefingers
    Hi All, I have a couple of questions that are all inter-related. Basically, in the algorithm I am implementing a word w is defined as four bytes, so it can be contained whole in a uint32_t. However, during the operation of the algorithm I often need to access the various parts of the word. Now, I can do this in two ways: uint32_t w = 0x11223344; uint8_t a = (w & 0xff000000) >> 24; uint8_t b = (w & 0x00ff0000) >> 16; uint8_t b = (w & 0x0000ff00) >> 8; uint8_t d = (w & 0x000000ff); However, part of me thinks that isn't particularly efficient. I thought a better way would be to use union representation like so: typedef union { struct { uint8_t d; uint8_t c; uint8_t b; uint8_t a; }; uint32_t n; } word32; Using this method I can assign word32 w = 0x11223344; then I can access the various parts as I require (w.a=11 in little endian). However, at this stage I come up against endianness issues, namely, in big endian systems my struct is defined incorrectly so I need to re-order the word prior to it being passed in. This I can do without too much difficulty. My question is, then, is the first part (various bitwise ands and shifts) efficient compared to the implementation using a union? Is there any difference between the two generally? Which way should I go on a modern, x86_64 processor? Is endianness just a red herring here? I could inspect the assembly output of course, but my knowledge of compilers is not brilliant. I would have thought a union would be more efficient as it would essentially convert to memory offsets, like so: mov eax, [r9+8] Would a compiler realise that is what happening in the bit-shift case above? If it matters, I'm using C99, specifically my compiler is clang (llvm). Thanks in advance.
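
    The shift form is the portable one, and current compilers generally reduce it to single byte loads anyway, so the union mainly buys nicer syntax at the cost of endianness-dependent layout. Wrapping the shifts in small inline helpers keeps call sites readable; a C99 sketch:

        #include <stdint.h>

        /* Byte 0 is the most significant byte of the word, independent of host endianness. */
        static inline uint8_t word_byte(uint32_t w, unsigned i)
        {
            return (uint8_t)(w >> (24 - 8 * i));
        }

        static inline uint32_t word_from_bytes(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
        {
            return ((uint32_t)a << 24) | ((uint32_t)b << 16) |
                   ((uint32_t)c << 8)  |  (uint32_t)d;
        }

        /* word_byte(0x11223344, 0) == 0x11, word_byte(0x11223344, 3) == 0x44 */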


  • Reusing Xalan transformer causes its extension functions to break

    - by Leslie Norman
    I am using xalan 2.7.1 to validate my xml docs with xslt style sheet. It works fine for the first document and returns error message in case of error along with correct line and column number of xml source by making use of NodeInfo.lineNumber and NodeInfo.columnNumber extensions. The problem is when I try to reuse transformer to validate other xml docs, it successfully transforms the document but always returns lineNumber=columnNumber=-1 for all errors. Any idea? Here is my code: TransformerFactory tFactory = TransformerFactory.newInstance("org.apache.xalan.processor.TransformerFactoryImpl", null); tFactory.setAttribute(TransformerFactoryImpl.FEATURE_SOURCE_LOCATION, Boolean.TRUE); StreamSource xsltStreamSource = new StreamSource(new File("E:\\Temp\\Test\\myXslt.xsl")); Transformer transformer=null; try { transformer = tFactory.newTransformer(xsltStreamSource); ByteArrayOutputStream outStream = new ByteArrayOutputStream(); File srcFolder = new File("E:\\Temp\\Test"); for (File file :srcFolder.listFiles()) { if (file.getName().endsWith("xml")) { transformer.transform(new StreamSource(file), new StreamResult(outStream)); transformer.reset(); } } System.out.println(outStream.toString()); } catch (TransformerException e) { e.printStackTrace(); } Edit: New code after implementing @rsp suggestions: package mycompany; import java.io.File; import javax.xml.transform.ErrorListener; import javax.xml.transform.Source; import javax.xml.transform.Transformer; import javax.xml.transform.TransformerException; import javax.xml.transform.TransformerFactory; import javax.xml.transform.stream.StreamResult; import javax.xml.transform.stream.StreamSource; import org.apache.xalan.processor.TransformerFactoryImpl; public class XsltTransformer { public static void main(String[] args) { TransformerFactory tFactory = TransformerFactory.newInstance("org.apache.xalan.processor.TransformerFactoryImpl", null); tFactory.setAttribute(TransformerFactoryImpl.FEATURE_SOURCE_LOCATION, Boolean.TRUE); StreamSource xsltStreamSource = new StreamSource(new File("E:\\Temp\\Test\\myXslt.xsl")); try { Transformer transformer = tFactory.newTransformer(xsltStreamSource); File srcFolder = new File("E:\\Temp\\Test"); for (File file : srcFolder.listFiles()) { if (file.getName().endsWith("xml")) { Source source = new StreamSource(file); StreamResult result = new StreamResult(System.out); XsltTransformer xsltTransformer = new XsltTransformer(); ErrorListenerImpl errorHandler = xsltTransformer.new ErrorListenerImpl(); transformer.setErrorListener(errorHandler); transformer.transform(source, result); if (errorHandler.e != null) { System.out.println("Transformation Exception: " + errorHandler.e.getMessage()); } transformer.reset(); } } } catch (TransformerException e) { e.printStackTrace(); } } private class ErrorListenerImpl implements ErrorListener { public TransformerException e = null; public void error(TransformerException exception) { this.e = exception; } public void fatalError(TransformerException exception) { this.e = exception; } public void warning(TransformerException exception) { this.e = exception; } } }
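
    Rather than reset() and reuse one Transformer, the usual pattern for many inputs is to compile the stylesheet once into a Templates object and pull a fresh Transformer from it for each document (plus an import of javax.xml.transform.Templates); whether that also keeps the NodeInfo line/column extension working past the first document would need to be verified, but it is cheap to try. A sketch built on the question's code:

        TransformerFactory tFactory = TransformerFactory.newInstance(
                "org.apache.xalan.processor.TransformerFactoryImpl", null);
        tFactory.setAttribute(TransformerFactoryImpl.FEATURE_SOURCE_LOCATION, Boolean.TRUE);

        // Compile the stylesheet once...
        Templates templates = tFactory.newTemplates(
                new StreamSource(new File("E:\\Temp\\Test\\myXslt.xsl")));

        for (File file : srcFolder.listFiles()) {
            if (file.getName().endsWith("xml")) {
                // ...and use a brand-new Transformer for each input document.
                Transformer transformer = templates.newTransformer();
                transformer.transform(new StreamSource(file), new StreamResult(outStream));
            }
        }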


  • Suggestions for designing a large-scale Java webapp from the ground up

    - by Chris Thompson
    Hi all, I'm about to start developing a large-scale system and I'm struggling with which direction to proceed. I've done plenty of Java web apps before and I have plenty of experience with servlet containers and GWT and some experience with Spring. The problem is most of my webapps have been thrown together just to be a proof of concept and what I'm struggling with is what set of frameworks to use. I need to have both a browser based application as well as a web service designed to support access from mobile devices (Android and iPhone for now). Ideally, I'd like to design this system in such a way that I don't end up rewriting all of my servlets for each client (browser and phone) although I don't mind having some small checks in there to properly format the data. In addition, although I'm the only developer now, that won't necessarily be the case down the road and I'd like to design something that scales well both with regards to traffic and number of developers (isn't just a nightmare to maintain). So where I am now is planning on using GWT to design the browser-based interface but I'm struggling with how to reuse that code with to present the interface (most likely xml) for the mobile devices. Using GWT RPC would, I think, make it relatively easy to do all of the AJAX in the browser, but might make generating xml for the mobile phones difficult. In addition, I like the idea of using something like Hibernate for persistence and Spring Security to secure the whole thing. Again, I'm not sure how well those will cooperate with GWT (I think Hibernate should be fine...) There's obviously a lot more to this than I've presented here, but I've tried to give you the 5-minute overview. I'm a bit stumped and was wondering if anybody in the community had any experience starting from this place. Does what I'm trying to do make sense? Is it realistic? I have no doubt I can make all of these frameworks speak the same language, I'm just wondering if it's worth my time to fight with them. Also, am I missing a framework that would be really beneficial? Thanks in advance and sorry for the relatively broad question... Chris


  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in. These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...) I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems to not work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client. Is there a way to get this to work? I don't want to do this over plain HTTP, for several reasons: Leaving a big fat open proxy like that just stinks A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied Requirements: Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS) I have or can write a simple port forwarder client that runs under Windows (or Cygwin) Edit DAG: Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection preventing tunnelling over HTTPS, exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there's better solutions available.


  • Ultimate Get/Post with Android Thread for Dummies

    - by Jayomat
    Hi, I'm writing an app to check for the bus timetable's. Therefor I need to post some data to a html page, submit it, and parse the resulting page with htmlparser. Though it may be asked a lot, can some one help me identify if 1) this page does support post/get (I think it does) 2) which fields I need to use? 3) How to make the actual request? this is my code so far: String url = "http://busspur02.aseag.de/bs.exe?Cmd=RV&Karten=true&DatumT=30&DatumM=4&DatumJ=2010&ZeitH=&ZeitM=&Suchen=%28S%29uchen&GT0=&HT0=&GT1=&HT1="; String charset = "CP1252"; System.out.println("startFrom: "+start_from); System.out.println("goTo: "+destination); //String tag.v List<NameValuePair> params = new ArrayList<NameValuePair>(); params.add(new BasicNameValuePair("HTO", start_from)); params.add(new BasicNameValuePair("HT1", destination)); params.add(new BasicNameValuePair("GTO", "Aachen")); params.add(new BasicNameValuePair("GT1", "Aachen")); params.add(new BasicNameValuePair("DatumT", day)); params.add(new BasicNameValuePair("DatumM", month)); params.add(new BasicNameValuePair("DatumJ", year)); params.add(new BasicNameValuePair("ZeitH", hour)); params.add(new BasicNameValuePair("ZeitM", min)); UrlEncodedFormEntity query = new UrlEncodedFormEntity(params, charset); HttpPost post = new HttpPost(url); post.setEntity(query); InputStream response = new DefaultHttpClient().execute(post).getEntity().getContent(); // Now do your thing with the facebook response. String source = readText(response,"CP1252"); Log.d(TAG_AVV,response.toString()); System.out.println("STREAM "+source); One person also gave me a hint to use firebug to read what's going on at the page, but I don't really understand what to look for, or more precisely, how to use the provided information. I also find it confusing, for example, that when I enter the data by hand, the url says, for example, "....HTO=Kaiserplatz&...", but in Firebug, the same Kaiserplatz is connected to a different field, in this case: \<\td class="Start3" Kaiserplatz <\/td (I inserted \ to make it visible) The last line in my code prints the html page, but without having send a request.. it's printed as if there was no input at all... My app is almost done, I hope someone can help me out to finish it! thanks in advance


  • Evaluation of CTEs in SQL Server 2005

    - by Jammer
    I have a question about how MS SQL evaluates functions inside CTEs. A couple of searches didn't turn up any results related to this issue, but I apologize if this is common knowledge and I'm just behind the curve. It wouldn't be the first time :-) This query is a simplified (and obviously less dynamic) version of what I'm actually doing, but it does exhibit the problem I'm experiencing. It looks like this: CREATE TABLE #EmployeePool(EmployeeID int, EmployeeRank int); INSERT INTO #EmployeePool(EmployeeID, EmployeeRank) SELECT 42, 1 UNION ALL SELECT 43, 2; DECLARE @NumEmployees int; SELECT @NumEmployees = COUNT(*) FROM #EmployeePool; WITH RandomizedCustomers AS ( SELECT CAST(c.Criteria AS int) AS CustomerID, dbo.fnUtil_Random(@NumEmployees) AS RandomRank FROM dbo.fnUtil_ParseCriteria(@CustomerIDs, 'int') c) SELECT rc.CustomerID, ep.EmployeeID FROM RandomizedCustomers rc JOIN #EmployeePool ep ON ep.EmployeeRank = rc.RandomRank; DROP TABLE #EmployeePool; The following can be assumed about all executions of the above: The result of dbo.fnUtil_Random() is always an int value greater than zero and less than or equal to the argument passed in. Since it's being called above with @NumEmployees which has the value 2, this function always evaluates to 1 or 2. The result of dbo.fnUtil_ParseCriteria(@CustomerIDs, 'int') produces a one-column, one-row table that contains a sql_variant with a base type of 'int' that has the value 219935. Given the above assumptions, it makes sense (to me, anyway) that the result of the expression above should always produce a two-column table containing one record - CustomerID and an EmployeeID. The CustomerID should always be the int value 219935, and the EmployeeID should be either 42 or 43. However, this is not always the case. Sometimes I get the expected single record. Other times I get two records (one for each EmployeeID), and still others I get no records. However, if I replace the RandomizedCustomers CTE with a true temp table, the problem vanishes completely. Every time I think I have an explanation for this behavior, it turns out to not make sense or be impossible, so I literally cannot explain why this would happen. Since the problem does not happen when I replace the CTE with a temp table, I can only assume it has something to do with the functions inside CTEs are evaluated during joins to that CTE. Do any of you have any theories?
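
    The behaviour is consistent with SQL Server expanding the CTE into the outer query and evaluating the non-deterministic dbo.fnUtil_Random call more than once (or at a different point in the plan), so the RandomRank compared in the join need not be a single stable value per row. Materializing the random draw before the join pins it down, which matches the observation that a temp table fixes it; a sketch in the same style as the question:

        SELECT CAST(c.Criteria AS int) AS CustomerID,
               dbo.fnUtil_Random(@NumEmployees) AS RandomRank
        INTO   #RandomizedCustomers
        FROM   dbo.fnUtil_ParseCriteria(@CustomerIDs, 'int') c;

        SELECT rc.CustomerID, ep.EmployeeID
        FROM   #RandomizedCustomers rc
        JOIN   #EmployeePool ep ON ep.EmployeeRank = rc.RandomRank;

        DROP TABLE #RandomizedCustomers;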

