Search Results

Search found 11190 results on 448 pages for 'bug report'.


  • PyGTK: dynamic label wrapping

    - by detly
    It's a known bug/issue that a label in GTK will not dynamically resize when the parent changes. It's one of those really annoying small details, and I want to hack around it if possible. I followed the approach at 16 software, but as per the disclaimer you cannot then resize it smaller. So I attempted a trick mentioned in one of the comments (the set_size_request call in the signal callback), but this results in some sort of infinite loop (try it and see). Does anyone have any other ideas? (You can't block the signal just for the duration of the call, since, as the print statements seem to indicate, the problem starts after the function is left.) The code is below. You can see what I mean if you run it and try to resize the window larger and then smaller. (If you want to see the original problem, comment out the line after "Connect to the size-allocate signal", run it, and resize the window bigger.)

    The Glade file ("example.glade"):

    ```xml
    <?xml version="1.0"?>
    <glade-interface>
      <!-- interface-requires gtk+ 2.16 -->
      <!-- interface-naming-policy project-wide -->
      <widget class="GtkWindow" id="window1">
        <property name="visible">True</property>
        <signal name="destroy" handler="on_destroy"/>
        <child>
          <widget class="GtkLabel" id="label1">
            <property name="visible">True</property>
            <property name="label" translatable="yes">In publishing and graphic design, lorem ipsum[p][1][2] is the name given to commonly used placeholder text (filler text) to demonstrate the graphic elements of a document or visual presentation, such as font, typography, and layout. The lorem ipsum text, which is typically a nonsensical list of semi-Latin words, is a hacked version of a Latin text by Cicero, with words/letters omitted and others inserted, but not proper Latin[1][2] (see below: History and discovery). The closest English translation would be "pain itself" (dolorem = pain, grief, misery, suffering; ipsum = itself).</property>
            <property name="wrap">True</property>
          </widget>
        </child>
      </widget>
    </glade-interface>
    ```

    The Python code:

    ```python
    #!/usr/bin/python

    import pygtk
    import gobject
    import gtk.glade

    def wrapped_label_hack(gtklabel, allocation):
        print "In wrapped_label_hack"
        gtklabel.set_size_request(allocation.width, -1)
        # If you uncomment this, we get INFINITE LOOPING!
        # gtklabel.set_size_request(-1, -1)
        print "Leaving wrapped_label_hack"

    class ExampleGTK:

        def __init__(self, filename):
            self.tree = gtk.glade.XML(filename, "window1", "Example")
            self.id = "window1"
            self.tree.signal_autoconnect(self)
            # Connect to the size-allocate signal
            self.get_widget("label1").connect("size-allocate", wrapped_label_hack)

        def on_destroy(self, widget):
            self.close()

        def get_widget(self, id):
            return self.tree.get_widget(id)

        def close(self):
            window = self.get_widget(self.id)
            if window is not None:
                window.destroy()
                gtk.main_quit()

    if __name__ == "__main__":
        window = ExampleGTK("example.glade")
        gtk.main()
    ```
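    (A possible direction, not from the original question: assuming the loop comes from the callback triggering a fresh size-allocate every time it calls set_size_request, one hedged workaround is to make the callback a no-op unless the allocated width actually changed. A minimal sketch; the _last_wrap_width attribute is an ad-hoc name used only for this example:)

    ```python
    # Sketch only: guard the hack so it ignores re-allocations it caused itself.
    def wrapped_label_hack(gtklabel, allocation):
        last_width = getattr(gtklabel, "_last_wrap_width", None)
        if last_width == allocation.width:
            return  # width unchanged; don't request a new size and re-trigger the signal
        gtklabel._last_wrap_width = allocation.width
        gtklabel.set_size_request(allocation.width, -1)
    ```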

    Read the article

  • Scipy Negative Distance? What?

    - by disappearedng
    I have an input file in which every value is a floating point number with 4 decimal places, i.e.

    ```
    13359 0.0000 0.0000 0.0001 0.0001 0.0002 0.0003 0.0007 ...
    ```

    (the first field is the id). My class uses the loadVectorsFromFile method, which multiplies each value by 10000 and then int()s the numbers. On top of that, I also loop through each vector to ensure that there are no negative values inside. However, when I perform _hclustering, I am continually seeing the error "Linkage Z contains negative values". I seriously think this is a bug, because: I checked my values, and they are nowhere near small enough or big enough to approach the limits of floating point numbers, and the formula that I used to derive the values in the file uses absolute values (my input is DEFINITELY right). Can someone enlighten me as to why I am seeing this weird error? What is going on that is causing this negative distance error?

    ```python
    def loadVectorsFromFile(self, limit, loc, assertAllPositive=True, inflate=True):
        """Inflate to prevent "negative" distance; we use 4 decimal points, so *10000"""
        vectors = {}
        self.winfo("Each vector is set to have %d limit in length" % limit)
        with open(loc) as inf:
            for line in filter(None, inf.read().split('\n')):
                l = line.split('\t')
                if limit:
                    scores = map(float, l[1:limit + 1])
                else:
                    scores = map(float, l[1:])
                if inflate:
                    vectors[l[0]] = map(lambda x: int(x * 10000), scores)  # int might save space
                else:
                    vectors[l[0]] = scores

        if assertAllPositive:
            # Assert that it has no negative value
            for dirID, l in vectors.iteritems():
                if reduce(operator.or_, map(lambda x: x < 0, l)):
                    self.werror("Vector %s has negative values!" % dirID)

        return vectors

    def main(self, inputDir, outputDir, limit=0,
             inFname="data.vectors.all",
             mappingFname='all.id.features.group.intermediate'):
        """Loads vectors from a file and starts clustering

        INPUT
            vectors is { featureID: tfidfVector (list), }
        """
        IDFeatureDic = loadIdFeatureGroupDicFromIntermediate(
            pjoin(self.configDir, mappingFname))
        if not os.path.exists(outputDir):
            os.makedirs(outputDir)

        vectors = self.loadVectorsFromFile(limit, pjoin(inputDir, inFname))
        for threshold in map(lambda x: float(x) / 30, range(20, 30)):
            clusters = self._hclustering(threshold, vectors)
            if clusters:
                outputLoc = pjoin(outputDir, "threshold.%s.result" % str(threshold))
                with open(outputLoc, 'w') as outf:
                    for clusterNo, cluster in clusters.iteritems():
                        outf.write('%s\n' % str(clusterNo))
                        for featureID in cluster:
                            feature, group = IDFeatureDic[featureID]
                            outline = "%s\t%s\n" % (feature, group)
                            outf.write(outline.encode('utf-8'))
                        outf.write("\n")
            else:
                continue

    def _hclustering(self, threshold, vectors):
        """Function which you should call to vary the threshold.

        vectors: { featureID: [ tfidf score, tfidf score, ... ] }
        """
        clusters = defaultdict(list)
        if len(vectors) > 1:
            try:
                results = hierarchy.fclusterdata(vectors.values(), threshold, metric='cosine')
            except ValueError, e:
                self.werror("_hclustering: %s" % str(e))
                return False

            for i, featureID in enumerate(vectors.keys()):
    ```
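    (An aside that is not part of the original question: with metric='cosine', tiny negative distances can appear purely from floating-point rounding, which scipy's linkage check then rejects. A minimal sketch, assuming the same { featureID: scores } layout as above, that makes the distances explicit and clips rounding artifacts before clustering:)

    ```python
    import numpy as np
    from scipy.spatial import distance
    from scipy.cluster import hierarchy

    def cluster_clipped(vectors, threshold):
        # vectors: { featureID: [score, score, ...] } as built by loadVectorsFromFile
        ids = list(vectors.keys())
        X = np.asarray([vectors[i] for i in ids], dtype=float)
        d = distance.pdist(X, metric='cosine')     # condensed distance matrix
        d = np.clip(d, 0.0, None)                  # drop sub-zero values caused by rounding
        Z = hierarchy.linkage(d, method='single')  # same default method fclusterdata uses
        labels = hierarchy.fcluster(Z, threshold, criterion='distance')
        return dict(zip(ids, labels))
    ```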

    Read the article

  • Why does UIWebKit throw a structuralComplexityContribution exception?

    - by Axeva
    I've got a simple UIWebView in my iPhone app that's loading an XHTML document with some SVG embedded. This all works fine in the desktop version of Safari, but it crashes in a UIWebView. Here is the Objective-C:

    ```objc
    NSString *path = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"html"];
    NSData *fileData = [NSData dataWithContentsOfFile: path];
    [svgView loadData: fileData
             MIMEType: @"text/xml"
     textEncodingName: @"UTF-8"
              baseURL: [NSURL fileURLWithPath: path]];
    ```

    I also tried a MIMEType of application/xhtml+xml, but it didn't help. Here is the HTML:

    ```html
    <!DOCTYPE html>
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>XTech SVG Demo</title>
    </head>
    <body>
        <svg xmlns="http://www.w3.org/2000/svg">
            <g style="fill-opacity:0.7;">
                <circle cx="6.5cm" cy="2cm" r="100" style="fill:red; stroke:black; stroke-width:0.1cm" transform="translate(0,50)" />
                <circle cx="6.5cm" cy="2cm" r="100" style="fill:blue; stroke:black; stroke-width:0.1cm" transform="translate(70,150)" />
                <circle cx="6.5cm" cy="2cm" r="100" style="fill:green; stroke:black; stroke-width:0.1cm" transform="translate(-70,150)"/>
            </g>
        </svg>
    </body>
    </html>
    ```

    All very basic stuff. When it loads on the iPhone, however, it crashes with this error:

    ```
    2010-03-31 10:37:10.252 ColorDoodle[2014:20b] -[DOMElement structuralComplexityContribution]: unrecognized selector sent to instance 0x3e51b60
    2010-03-31 10:37:10.253 ColorDoodle[2014:20b] Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[DOMElement structuralComplexityContribution]: unrecognized selector sent to instance 0x3e51b60'
    ```

    Any idea why? Is this a bug in the rendering engine of the UIWebView? I don't see anything too odd here.

    Updated: There is definitely something screwy going on here. If I add this bit of code just inside the tag, it works fine:

    ```html
    <form> </form>
    ```

    Take that code back out, and it crashes again.

    Read the article

  • Good Secure Backups Developers at Home

    - by slashmais
    What is a good, secure method of doing backups for programmers who do research & development at home and cannot afford to lose any work?

    Conditions:

    1. The backups must ALWAYS be within reasonably easy reach.
    2. An internet connection cannot be guaranteed to always be available.
    3. The solution must be either FREE or priced within reason, and subject to 2 above.

    Status report: this is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere):

    - BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
    - Storebackup is a backup utility that stores files on other disks.
    - mybackware: these scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
    - Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network-based backup program.
    - AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable, transport-independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
    - rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
    - rbme: using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
    - Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync [for] incremental archives.

    Other possibilities:

    - Using a distributed version control system (DVCS) such as Git (/Easy Git), Bazaar or Mercurial answers the need to have the backup available locally.
    - Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account (a rough sketch of this idea follows below).

    Strategies: see crazyscot's answer.
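    (Not from the original question or answers: a minimal sketch of the "compress and mail it to your gmail account" idea above, written in Python. The paths, address and app password are placeholders, and Gmail attachment-size limits still apply.)

    ```python
    import smtplib
    import tarfile
    from email.message import EmailMessage
    from pathlib import Path

    # Placeholder values -- substitute your own.
    WORK_DIR = Path.home() / "work"
    ARCHIVE = Path("/tmp/work-backup.tar.gz")
    GMAIL_USER = "you@example.com"
    GMAIL_APP_PASSWORD = "app-password-here"

    def make_archive():
        # Compress the whole work directory into a single tarball.
        with tarfile.open(ARCHIVE, "w:gz") as tar:
            tar.add(WORK_DIR, arcname=WORK_DIR.name)

    def mail_archive():
        # Mail the tarball to yourself so a copy lives off-site.
        msg = EmailMessage()
        msg["Subject"] = "work backup"
        msg["From"] = GMAIL_USER
        msg["To"] = GMAIL_USER
        msg.add_attachment(ARCHIVE.read_bytes(),
                           maintype="application", subtype="gzip",
                           filename=ARCHIVE.name)
        with smtplib.SMTP_SSL("smtp.gmail.com", 465) as smtp:
            smtp.login(GMAIL_USER, GMAIL_APP_PASSWORD)
            smtp.send_message(msg)

    if __name__ == "__main__":
        make_archive()
        mail_archive()
    ```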

    Read the article

  • Compile classfile issue in Spring 3

    - by Prajith R
    I have used Spring Framework 3 for my application. Everything is OK when it is developed and built in NetBeans, but I need a custom build, and I have set one up. The build is created without any issue, but I get the error below. The error occurs while calling the following method:

    ```java
    @RequestMapping(value = "/security/login", method = RequestMethod.POST)
    public ModelAndView login(@RequestParam String userName,
                              @RequestParam String password,
                              HttpServletRequest request) {
        ......................
    ```

    There is no issue when creating the war with NetBeans (so I am fairly sure it is a compilation issue). Do you have any experience with this issue? Is there any additional javac argument needed to compile the same (NetBeans uses its own custom task for the compilation)?

    ```
    type Exception report

    message

    description The server encountered an internal error () that prevented it from fulfilling this request.

    exception

    org.springframework.web.util.NestedServletException: Request processing failed; nested exception is
    org.springframework.web.bind.annotation.support.HandlerMethodInvocationException: Failed to invoke handler method
    [public org.springframework.web.servlet.ModelAndView com.mypackage.security.controller.LoginController.login(java.lang.String,java.lang.String,javax.servlet.http.HttpServletRequest)];
    nested exception is java.lang.IllegalStateException: No parameter name specified for argument of type [java.lang.String],
    and no parameter name information found in class file either.
        org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:659)
        org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:563)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        com.mypackage.security.controller.AuthFilter.doFilter(Unknown Source)
        org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
        org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)

    root cause

    org.springframework.web.bind.annotation.support.HandlerMethodInvocationException: Failed to invoke handler method
    [public org.springframework.web.servlet.ModelAndView com.mypackage.security.controller.LoginController.login(java.lang.String,java.lang.String,javax.servlet.http.HttpServletRequest)];
    nested exception is java.lang.IllegalStateException: No parameter name specified for argument of type [java.lang.String],
    and no parameter name information found in class file either.
        org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:171)
        org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:414)
        org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:402)
        org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:771)
        org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:716)
        org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:647)
        org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:563)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        com.mypackage.security.controller.AuthFilter.doFilter(Unknown Source)
        org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
        org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)

    root cause

    java.lang.IllegalStateException: No parameter name specified for argument of type [java.lang.String],
    and no parameter name information found in class file either.
        org.springframework.web.bind.annotation.support.HandlerMethodInvoker.getRequiredParameterName(HandlerMethodInvoker.java:618)
        org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveRequestParam(HandlerMethodInvoker.java:417)
        org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveHandlerArguments(HandlerMethodInvoker.java:277)
        org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:163)
        org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:414)
        org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:402)
        org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:771)
        org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:716)
        org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:647)
        org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:563)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        com.mypackage.security.controller.AuthFilter.doFilter(Unknown Source)
        org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
        org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)

    note The full stack trace of the root cause is available in the Apache Tomcat/6.0.18 logs.
    ```

    Read the article

  • Suggest the best options for designing a dynamic web interface using PHP, MySQL and AJAX

    - by Krishna
    Hello, I am designing a web interface for a company. Here is the company's profile: it currently has 5 branches and is planning to extend its branches all over the country. It is an insurance surveying company. It deals with 6 categories in the insurance domain, namely:

    - Engineering
    - Fire
    - Marine
    - Motor
    - Miscellaneous
    - Risk Inspection

    The branches are named b1, b2, b3, b4, b5, with more to come as the company extends. Finally, the company has contracts with 22 other companies. For each claim a unique ID is assigned, of the form contractcompany/category/serialno. For example, taking contracted company names xxx, sss, zzz:

    ```
    xxx/Engineering/001
    sss/Engineering/001
    ...
    xxx/Engineering/002
    sss/Engineering/002
    ...
    xxx/Fire/001
    sss/Fire/001
    ...
    xxx/Fire/002
    ...
    ```

    and so on; this is how the unique ID is issued for each claim. What I want is to develop the interface with PHP, MySQL and AJAX, and it should:

    - auto-generate the unique ID for each claim (see the sketch of the numbering logic below);
    - store full details of the claims with reference to the unique ID;
    - show all claims on one page, with views by branch and by category;
    - send a monthly report (all claims given to them and the status of those claims) to the contracted companies;
    - give access to contracted companies, but each can view only its own claims;
    - allow each claim to have its own documents, uploaded by our own company users or the administrator and associated with the unique ID, with contracted companies able to view the files;
    - give access to branches to enter new claims and update old claims;
    - let the administrator create, update and delete all the claims and their details;
    - let only the administrator grant new users (own company branches / contracted companies).

    Finally, the panel is completely database-driven. Could anybody help? Thanks in advance. Kindly do the needful and oblige. Thanks and Regards, Krishna. P [email protected]
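    (Not part of the original question: a rough sketch of how the claim-ID numbering described above could be generated, written in Python purely for illustration even though the poster plans to use PHP/MySQL. Table and column names are made up for the example.)

    ```python
    import sqlite3

    def next_claim_id(conn, company, category):
        # Count existing claims for this company/category pair and zero-pad the
        # next serial number, giving IDs like "xxx/Engineering/001".
        row = conn.execute(
            "SELECT COUNT(*) FROM claims WHERE company = ? AND category = ?",
            (company, category),
        ).fetchone()
        serial = row[0] + 1
        return "%s/%s/%03d" % (company, category, serial)

    # Example usage with an in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE claims (id TEXT, company TEXT, category TEXT)")
    cid = next_claim_id(conn, "xxx", "Engineering")
    conn.execute("INSERT INTO claims VALUES (?, ?, ?)", (cid, "xxx", "Engineering"))
    print(cid)  # xxx/Engineering/001
    ```

    In a real multi-user deployment the serial would come from a counter maintained inside a transaction (or a per-company/category sequence) rather than a COUNT(*), but the shape of the generated ID is the same.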

    Read the article

  • jQuery "Autocomplete" plugin is messing up the order of my data

    - by Max Williams
    I'm using Jorn Zaefferer's Autocomplete plugin on a couple of different pages. In both instances, the order of displayed strings is a little bit messed up.

    Example 1: array of strings: basically they are in alphabetical order except for General Knowledge, which has been pushed to the top:

    ```
    General Knowledge, Art and Design, Business Studies, Citizenship, Design and Technology, English, Geography, History, ICT, Mathematics, MFL French, MFL German, MFL Spanish, Music, Physical Education, PSHE, Religious Education, Science, Something Else
    ```

    Displayed strings:

    ```
    General Knowledge, Geography, Art and Design, Business Studies, Citizenship, Design and Technology, English, History, ICT, Mathematics, MFL French, MFL German, MFL Spanish, Music, Physical Education, PSHE, Religious Education, Science, Something Else
    ```

    Note that Geography has been pushed up to be the second item, after General Knowledge. The rest are all fine.

    Example 2: array of strings: as above but with Cross-curricular instead of General Knowledge:

    ```
    Cross-curricular, Art and Design, Business Studies, Citizenship, Design and Technology, English, Geography, History, ICT, Mathematics, MFL French, MFL German, MFL Spanish, Music, Physical Education, PSHE, Religious Education, Science, Something Else
    ```

    Displayed strings:

    ```
    Cross-curricular, Citizenship, Art and Design, Business Studies, Design and Technology, English, Geography, History, ICT, Mathematics, MFL French, MFL German, MFL Spanish, Music, Physical Education, PSHE, Religious Education, Science, Something Else
    ```

    Here, Citizenship has been pushed to the number 2 position. I've experimented a little, and it seems like there's a bug saying "put things that start with the same letter as the first item right after the first item, and leave the rest alone". Kind of mystifying. I've tried a bit of debugging by triggering alerts inside the autocomplete plugin code, but everywhere I can see, it's using the correct order. It seems to be just when it's rendered out that it goes wrong. Any ideas anyone? max

    EDIT - reply to Clint: Thanks for pointing me at the relevant bit of code btw. To make diagnosis simpler I changed the array of values to ["carrot", "apple", "cherry"], which autocomplete re-orders to ["carrot", "cherry", "apple"]. Here's the array that it generates for stMatchSets:

    ```js
    stMatchSets = ({'': [#1={value:"carrot", data:["carrot"], result:"carrot"},
                         #3={value:"apple",  data:["apple"],  result:"apple"},
                         #2={value:"cherry", data:["cherry"], result:"cherry"}],
                    c: [#1#, #2#],
                    a: [#3#]})
    ```

    So, it's collecting the first letters together into a map, which makes sense as a first-pass matching strategy. What I'd like it to do, though, is to use the given array of values, rather than the map, when it comes to populating the displayed list. I can't quite get my head around what's going on with the cache inside the guts of the code (I'm not very experienced with javascript).
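    (An illustration that is not from the original thread: the reordering above is what you get if results are read back bucket-by-bucket from a first-letter map instead of from the original array. A small Python sketch of that effect, and of restoring the original order by sorting matches on their original index:)

    ```python
    values = ["carrot", "apple", "cherry"]

    # First-pass index like stMatchSets: bucket values by first letter.
    buckets = {}
    for v in values:
        buckets.setdefault(v[0], []).append(v)

    # Reading bucket-by-bucket groups "carrot" and "cherry" together,
    # which is the surprising order the plugin displays.
    flattened = [v for letter in buckets for v in buckets[letter]]
    print(flattened)                 # ['carrot', 'cherry', 'apple']

    # Restoring the intended order means sorting matches by their
    # position in the original values array.
    restored = sorted(flattened, key=values.index)
    print(restored)                  # ['carrot', 'apple', 'cherry']
    ```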

    Read the article

  • crash in calloc

    - by mmd
    I'm trying to debug a program I wrote. I ran it inside gdb and I managed to catch a SIGABRT from inside calloc(). I'm completely confused about how this can arise. Can it be a bug in gcc or even libc??

    More details: My program uses OpenMP. I ran it through valgrind in single-threaded mode with no errors. I also use mmap() to load a 40GB file, but I doubt that is relevant. Inside gdb, I'm running with 30 threads. Several identical runs (same input & CL) finished correctly, until the problematic one that I caught. On the surface this suggests there might be a race condition of some type. However, the SIGABRT comes from calloc(), which is out of my control. Here is some relevant gdb output:

    ```
    (gdb) info threads
    [...]
    * 11 Thread 0x7ffff0056700 (LWP 73449)  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
    [...]
    (gdb) thread 11
    [Switching to thread 11 (Thread 0x7ffff0056700 (LWP 73449))]#0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
    (gdb) bt
    #0  0x00007ffff6a948a5 in raise () from /lib64/libc.so.6
    #1  0x00007ffff6a96085 in abort () from /lib64/libc.so.6
    #2  0x00007ffff6ad1fe7 in __libc_message () from /lib64/libc.so.6
    #3  0x00007ffff6ad7916 in malloc_printerr () from /lib64/libc.so.6
    #4  0x00007ffff6adb79f in _int_malloc () from /lib64/libc.so.6
    #5  0x00007ffff6adbdd6 in calloc () from /lib64/libc.so.6
    #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
    #7  read_get_hit_list_per_strand (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/mapping.c:1046
    #8  0x000000000041308a in read_get_hit_list (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1239
    #9  handle_read (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1806
    #10 0x0000000000404f35 in launch_scan_threads (.omp_data_i=<value optimized out>) at gmapper/gmapper.c:557
    #11 0x00007ffff7230502 in ?? () from /usr/lib64/libgomp.so.1
    #12 0x00007ffff6dfc851 in start_thread () from /lib64/libpthread.so.0
    #13 0x00007ffff6b4a11d in clone () from /lib64/libc.so.6
    (gdb) f 6
    #6  0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286
    286         res = calloc(size, 1);
    (gdb) p size
    $2 = 814080
    (gdb)
    ```

    The function my_calloc() is just a wrapper, but the problem is not in there, as the real calloc() call looks legit. These are the limits set in the shell:

    ```
    $ ulimit -a
    core file size          (blocks, -c) 0
    data seg size           (kbytes, -d) unlimited
    scheduling priority             (-e) 0
    file size               (blocks, -f) unlimited
    pending signals                 (-i) 2067285
    max locked memory       (kbytes, -l) 64
    max memory size         (kbytes, -m) unlimited
    open files                      (-n) 1024
    pipe size            (512 bytes, -p) 8
    POSIX message queues     (bytes, -q) 819200
    real-time priority              (-r) 0
    stack size              (kbytes, -s) 10240
    cpu time               (seconds, -t) unlimited
    max user processes              (-u) 1024
    virtual memory          (kbytes, -v) unlimited
    file locks                      (-x) unlimited
    ```

    The program is not out of memory; it's using 41GB on a machine with 256GB available:

    ```
    $ top -b -n 1 | grep gmapper
    73437 user  20   0 41.5g  16g  15g T  0.0  6.6  55:17.24 gmapper-ls

    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:        258437     195567      62869          0         82     189677
    -/+ buffers/cache:       5807     252629
    Swap:            0          0          0
    ```

    I compiled using gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), with flags -g -O2 -DNDEBUG -mmmx -msse -msse2 -fopenmp -Wall -Wno-deprecated -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS.

    Read the article

  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in.

    These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...) I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems not to work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client.

    Is there a way to get this to work? I don't want to do this over plain HTTP, for several reasons:

    - Leaving a big fat open proxy like that just stinks.
    - A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me.
    - HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied.

    Requirements:

    - Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS).
    - I have or can write a simple port forwarder client that runs under Windows (or Cygwin); a rough sketch of one is below.

    Edit: DAG's Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection preventing tunnelling over HTTPS, exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there's a better solution available.
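    (Not from the original question: a rough sketch, in Python, of the "simple port forwarder client" mentioned in the requirements. It only covers the client half: it listens locally for ssh and wraps each connection in TLS toward port 443. It assumes something on the server side of 443 terminates the TLS and hands the stream to sshd, which is precisely the part the question is still trying to solve. Host names and ports are placeholders.)

    ```python
    import socket
    import ssl
    import threading

    LOCAL_HOST, LOCAL_PORT = "127.0.0.1", 2222      # point ssh at this
    REMOTE_HOST, REMOTE_PORT = "home.example.com", 443

    def pipe(src, dst):
        # Copy bytes one way until either side closes.
        try:
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
        finally:
            try:
                dst.shutdown(socket.SHUT_WR)
            except OSError:
                pass

    def handle(client):
        # Wrap the outbound leg in TLS so the on-the-wire traffic looks like HTTPS.
        ctx = ssl.create_default_context()
        raw = socket.create_connection((REMOTE_HOST, REMOTE_PORT))
        remote = ctx.wrap_socket(raw, server_hostname=REMOTE_HOST)
        threading.Thread(target=pipe, args=(client, remote), daemon=True).start()
        pipe(remote, client)

    def main():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((LOCAL_HOST, LOCAL_PORT))
        srv.listen(5)
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        main()
    ```

    With this running, `ssh -p 2222 user@127.0.0.1` would ride the TLS connection, again assuming the far end can deliver the decrypted stream to the ssh daemon.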

    Read the article

  • Are "EXC_BREAKPOINT (SIGTRAP)" exceptions caused by debugging breakpoints?

    - by Dennis
    I have a multithreaded app that is very stable on all my test machines and seems to be stable for almost every one of my users (based on no complaints of crashes). The app crashes frequently for one user, though, who was kind enough to send crash reports. All the crash reports (~10 consecutive reports) look essentially identical: Date/Time: 2010-04-06 11:44:56.106 -0700 OS Version: Mac OS X 10.6.3 (10D573) Report Version: 6 Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x0000000000000002, 0x0000000000000000 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 com.apple.CoreFoundation 0x90ab98d4 __CFBasicHashRehash + 3348 1 com.apple.CoreFoundation 0x90adf610 CFBasicHashRemoveValue + 1264 2 com.apple.CoreText 0x94e0069c TCFMutableSet::Intersect(__CFSet const*) const + 126 3 com.apple.CoreText 0x94dfe465 TDescriptorSource::CopyMandatoryMatchableRequest(__CFDictionary const*, __CFSet const*) + 115 4 com.apple.CoreText 0x94dfdda6 TDescriptorSource::CopyDescriptorsForRequest(__CFDictionary const*, __CFSet const*, long (*)(void const*, void const*, void*), void*, unsigned long) const + 40 5 com.apple.CoreText 0x94e00377 TDescriptor::CreateMatchingDescriptors(__CFSet const*, unsigned long) const + 135 6 com.apple.AppKit 0x961f5952 __NSFontFactoryWithName + 904 7 com.apple.AppKit 0x961f54f0 +[NSFont fontWithName:size:] + 39 (....more text follows) First, I spent a long time investigating [NSFont fontWithName:size:]. I figured that maybe the user's fonts were screwed up somehow, so that [NSFont fontWithName:size:] was requesting something non-existent and failing for that reason. I added a bunch of code using [[NSFontManager sharedFontManager] availableFontNamesWithTraits:NSItalicFontMask] to check for font availability in advance. Sadly, these changes didn't fix the problem. I've now noticed that I forgot to remove some debugging breakpoints, including _NSLockError, [NSException raise], and objc_exception_throw. However, the app was definitely built using "Release" as the active build configuration. I assume that using the "Release" configuration prevents setting of any breakpoints--but then again I am not sure exactly how breakpoints work or whether the program needs to be run from within gdb for breakpoints to have any effect. My questions are: could my having left the breakpoints set be the cause of the crashes observed by the user? If so, why would the breakpoints cause a problem only for this one user? If not, has anybody else had similar problems with [NSFont fontWithName:size:]? I will probably just try removing the breakpoints and sending back to the user, but I'm not sure how much currency I have left with that user. And I'd like to understand more generally whether leaving the breakpoints set could possibly cause a problem (when the app is built using "Release" configuration).

    Read the article

  • SQLSTATE[HY093]: Invalid parameter number: mixed named and positional parameters

    - by Gremo
    Weird error, and this is driving me crazy all day. To me it seems like a bug, because there are no positional parameters in my query. Here is the method:

    ```php
    public function getAll(User $user, DateTime $start = null, DateTime $end = null)
    {
        $params = array('user_id' => $user->getId());

        $rsm = new \Doctrine\ORM\Query\ResultSetMapping(); // Result set mapping
        $rsm->addScalarResult('subtype', 'subtype');
        $rsm->addScalarResult('count', 'count');

        $sms_sql = "SELECT CONCAT('sms_', IF(is_auto = 0, 'user' , 'auto')) AS subtype, "
            . "SUM(messages_count * (customers_count + recipients_count)) AS count "
            . "FROM outgoing_message AS m INNER JOIN small_text_message AS s ON "
            . "m.id = s.id WHERE status <> 'pending' AND user_id = :user_id";

        $news_sql = "SELECT CONCAT('news_', IF(is_auto = 0, 'user' , 'auto')) AS subtype, "
            . "SUM(customers_count + recipients_count) AS count "
            . "FROM outgoing_message AS m JOIN newsletter AS n ON m.id = n.id "
            . "WHERE status <> 'pending' AND user_id = :user_id";

        if($start) :
            $sms_sql  .= " AND sent_at >= :start";
            $news_sql .= " AND sent_at >= :start";
            $params['start'] = $start->format('Y-m-d');
        endif;

        $sms_sql  .= ' GROUP BY type, is_auto';
        $news_sql .= ' GROUP BY type, is_auto';

        return $this->_em->createNativeQuery("$sms_sql UNION ALL $news_sql", $rsm)
            ->setParameters($params)->getResult();
    }
    ```

    And this throws the exception:

    ```
    SQLSTATE[HY093]: Invalid parameter number: mixed named and positional parameters
    ```

    The $params array is OK, and so is the generated SQL:

    ```
    var_dump($params);

    array (size=2)
      'user_id' => int 1
      'start' => string '2012-01-01' (length=10)
    ```

    Strangest thing is that it works with "$sms_sql" only! Any help would make my day, thanks.

    Update: Found another strange thing. If I change only the name (to start_date instead of start):

    ```php
    if($start) :
        $sms_sql  .= " AND sent_at >= :start_date";
        $news_sql .= " AND sent_at >= :start_date";
        $params['start_date'] = $start->format('Y-m-d');
    endif;
    ```

    What happens is that Doctrine/PDO says:

    ```
    SQLSTATE[42S22]: Column not found: 1054 Unknown column 'sent1rt_date' in 'where clause'
    ```

    ... as the string 1rt was added in the middle of the column name! Crazy...

    Read the article

  • Regular expression works normally, but fails when placed in an XML schema

    - by Eli Courtwright
    I have a simple doc.xml file which contains a single root element with a Timestamp attribute:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <root Timestamp="04-21-2010 16:00:19.000" />
    ```

    I'd like to validate this document against my simple schema.xsd to make sure that the Timestamp is in the correct format:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="root">
        <xs:complexType>
          <xs:attribute name="Timestamp" use="required" type="timeStampType"/>
        </xs:complexType>
      </xs:element>
      <xs:simpleType name="timeStampType">
        <xs:restriction base="xs:string">
          <xs:pattern value="(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}" />
        </xs:restriction>
      </xs:simpleType>
    </xs:schema>
    ```

    So I use the lxml Python module and try to perform a simple schema validation and report any errors:

    ```python
    from lxml import etree

    schema = etree.XMLSchema( etree.parse("schema.xsd") )
    doc = etree.parse("doc.xml")

    if not schema.validate(doc):
        for e in schema.error_log:
            print e.message
    ```

    My XML document fails validation with the following error messages:

    ```
    Element 'root', attribute 'Timestamp': [facet 'pattern'] The value '04-21-2010 16:00:19.000' is not accepted by the pattern '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}'.
    Element 'root', attribute 'Timestamp': '04-21-2010 16:00:19.000' is not a valid value of the atomic type 'timeStampType'.
    ```

    So it looks like my regular expression must be faulty. But when I try to validate the regular expression at the command line, it passes:

    ```python
    >>> import re
    >>> pat = '(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}'
    >>> assert re.match(pat, '04-21-2010 16:00:19.000')
    >>>
    ```

    I'm aware that XSD regular expressions don't have every feature, but the documentation I've found indicates that every feature that I'm using should work. So what am I mis-understanding, and why does my document fail?
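    (A note that is not part of the original question: two things differ between the two checks. XSD patterns are implicitly anchored to the whole value, while re.match only anchors at the start; and the leading (0[0-9])|(1[0-2]) alternation makes the "0x" branch compete against the entire rest of the pattern. A hedged sketch of reproducing the XSD behaviour in Python and testing a grouped month, assuming that grouping is the intent:)

    ```python
    import re

    value = '04-21-2010 16:00:19.000'

    # Original pattern: the top-level alternation lets "04" alone be a complete
    # match, so a whole-value match fails even though re.match "passes".
    original = r'(0[0-9]{1})|(1[0-2]{1})-(3[0-1]{1}|[0-2]{1}[0-9]{1})-[2-9]{1}[0-9]{3} ([0-1]{1}[0-9]{1}|2[0-3]{1}):[0-5]{1}[0-9]{1}:[0-5]{1}[0-9]{1}.[0-9]{3}'
    print(bool(re.fullmatch(original, value)))   # False -- mirrors the XSD failure

    # Grouping the month alternation makes the whole-value match succeed.
    grouped = r'(0[0-9]|1[0-2])-(3[0-1]|[0-2][0-9])-[2-9][0-9]{3} ([0-1][0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9]\.[0-9]{3}'
    print(bool(re.fullmatch(grouped, value)))    # True
    ```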

    Read the article

  • ImageMagick on Mac OSX Snow Leopard. Is there any way to get it to work?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than any other platform--including Windows cygwin For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, Mac Ports, fails: tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick ---> Computing dependencies for p5-locale-gettext ---> Configuring p5-locale-gettext Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2 Command output: checking for gettext... no checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18. no Error: Unable to upgrade port: 1 Error: Unable to execute port: upgrade xorg-libXt failed Before reporting a bug, first run the command again with the -d flag to get complete output. tppllc-Mac-Pro:ImageMagick-sl swirsky$ Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned here: http://github.com/masterkain/ImageMagick-sl This script downloads and installs an ImageMagic independently of MacPorts issues tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap It downloads everything and compiles fine, but fails when I try to run it, with the message above. So now I'm two steps away from ImageMagick, trying to get a newer libiconv on my machine. I downloaded the latest libiconv, compiled and built it. I put the resulting library in /opt/local/lib, and I still get the same error message: tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib tppllc-Mac-Pro:.libs swirsky$ convert dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib Referenced from: /opt/local/lib/libfontconfig.1.dylib Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0 Trace/BPT trap So here's my question: Is it possible to get ImageMagick to run on OSX Snow Leopard? Are there any binary distributions that have static libraries baked in so I don't have to worry about these issue/

    Read the article

  • POD global object initialization

    - by paercebal
    I've got bitten today by a bug. The following source can be copy/pasted (and then compiled) into a main.cpp file:

    ```cpp
    #include <iostream>

    // The point of SomeGlobalObject is for its
    // constructor to be launched before the main
    // ...
    struct SomeGlobalObject
    {
       SomeGlobalObject() ;
    } ;

    // ...
    // Which explains the global object
    SomeGlobalObject oSomeGlobalObject ;

    // A POD... I was hoping it would be constructed at
    // compile time when using an argument list
    struct MyPod
    {
       short m_short ;
       const char * const m_string ;
    } ;

    // declaration/Initialization of a MyPod array
    MyPod myArrayOfPod[] =
    {
       { 1, "Hello" }, { 2, "World" }, { 3, " !" }
    } ;

    // declaration/Initialization of an array of array of void *
    void * myArrayOfVoid[][2] =
    {
       { (void *)1, "Hello" }, { (void *)2, "World" }, { (void *)3, " !" }
    } ;

    // constructor of the global object... Launched BEFORE main
    SomeGlobalObject::SomeGlobalObject()
    {
       std::cout << "myArrayOfPod[0].m_short : " << myArrayOfPod[0].m_short << std::endl ;
       std::cout << "myArrayOfVoid[0][0] : " << myArrayOfVoid[0][0] << std::endl ;
    }

    // main... What else ?
    int main(int argc, char* argv[])
    {
       return 0 ;
    }
    ```

    MyPod being a POD, I believed there would be no constructors, only initialization at compile time. Thus, the global object SomeGlobalObject would have no problem using the global array of PODs upon its construction. The problem is that in real life, nothing is so simple. On Visual C++ 2008 (I did not test on other compilers), upon execution myArrayOfPod is not initialized, even though myArrayOfVoid is initialized.

    So my question is: are C++ compilers not supposed to initialize global PODs (including POD structures) at compilation time?

    Note that I know global variables are evil, and I know that one can't be sure of the order of creation of global variables declared in different compilation units. The problem here is really the POD C-like initialization, which seems to call a constructor (the default, compiler-generated one?). And to make everyone happy: this is on debug. On release, the global array of PODs is correctly initialized.

    Read the article

  • Problem getting ar_mailer/ar_sendmail working on new server

    - by Max Williams
    Hey all. I've got a new app up and running on a new Ubuntu server. It's working fine generally, but I can't get ar_sendmail working. I'm following the instructions on this page: http://www.ameravant.com/posts/sending-tons-of-emails-in-ruby-on-rails-with-ar_mailer

    The setup is all done, i.e. I can "deliver" mails, which just saves records in my Email table. Now I want to get the ar_sendmail daemon running to actually send them (so I'm at 'Running ar_sendmail in daemon mode' in that web page). First thing:

    ```
    $ ar_sendmail --mailq
    ar_sendmail: command not found
    ```

    OK... so, where is ar_sendmail? I have a look and there's an ar_sendmail file in the bin folder of the ar_mailer plugin, so I add the location of that to my path. I don't know if this was the right thing to do or not. OK, so try again:

    ```
    $ ar_sendmail --mailq
    /var/www/apps/millionaire/vendor/plugins/ar_mailer/bin/ar_sendmail:3:in `require': no such file to load -- action_mailer/ar_sendmail (LoadError)
        from /var/www/apps/millionaire/vendor/plugins/ar_mailer/bin/ar_sendmail:3
    ```

    Hmm. Here's the offending file; there's not much there:

    ```ruby
    #!/usr/bin/env ruby

    require 'action_mailer/ar_sendmail'

    ActionMailer::ARSendmail.run
    ```

    OK... so it literally is just trying to require this and can't find it. The file, action_mailer/ar_sendmail.rb, is in the ar_mailer plugin, in its lib folder. So, given that it's being called from inside the plugin, it should be able to see this, right? I've got a feeling that I'm way off track here and have missed something simple. Can anyone set me straight? I'm using Rails 2.3.4 in case that's relevant.

    EDIT - I just realised something kind of dumb: when I call ar_sendmail from the command line like this, I'm just loading that one file, which doesn't know where it's supposed to look for the rest of the stuff, I think. Which really makes me think that I'm not trying to run the right thing. Is the ar_sendmail daemon a separate program altogether, that I would get with apt-get or something?

    EDIT2 - I made some progress by installing the ar_mailer gem (which the guide said I shouldn't do) and that does seem to run. It's sending some mail request somewhere and clearing the Email table of pending emails. Running ar_sendmail in -ov (oneshot verbose) mode I see it report this, for example:

    ```
    sent email 00000000019 from [email protected] to [email protected]: #
    ```

    So, it actually looks like it's working now and I just need to set up the ACTUAL THING WHICH SENDS EMAILS. sigh. still grateful for any advice. thanks, max

    Read the article

  • Defining where on the page the flowdocument I am printing will 'start' and 'end'

    - by Sagi1981
    Dear community, I am almost done implementing printing functionality, but I am having trouble with the last hurdle. My problem is that I am printing some reports consisting of a header (with information about the person the report is about), a footer (with a page number) and the content in the middle, which is a FlowDocument. Since the flow documents can be fairly long, it is very possible that they will span multiple pages. My approach is to make a custom FlowDocumentPaginator which derives from DocumentPaginator, and in there define my header and my footer. However, when I print my page, the flow document and my header and footer are on top of each other. So my question is plain and simple: how do I define where on the page the flow document part will start and end? Here is the code from my custom-made paginator:

    ```csharp
    public class HeaderedFlowDocumentPaginator : DocumentPaginator
    {
        private DocumentPaginator flowDocumentpaginator;

        public HeaderedFlowDocumentPaginator(FlowDocument document)
        {
            flowDocumentpaginator = ((IDocumentPaginatorSource)document).DocumentPaginator;
        }

        public override bool IsPageCountValid
        {
            get { return flowDocumentpaginator.IsPageCountValid; }
        }

        public override int PageCount
        {
            get { return flowDocumentpaginator.PageCount; }
        }

        public override Size PageSize
        {
            get { return flowDocumentpaginator.PageSize; }
            set { flowDocumentpaginator.PageSize = value; }
        }

        public override IDocumentPaginatorSource Source
        {
            get { return flowDocumentpaginator.Source; }
        }

        public override DocumentPage GetPage(int pageNumber)
        {
            DocumentPage page = flowDocumentpaginator.GetPage(pageNumber);
            ContainerVisual newVisual = new ContainerVisual();
            newVisual.Children.Add(page.Visual);

            DrawingVisual header = new DrawingVisual();
            using (DrawingContext dc = header.RenderOpen())
            {
                // Header data
            }
            newVisual.Children.Add(header);

            DrawingVisual footer = new DrawingVisual();
            using (DrawingContext dc = footer.RenderOpen())
            {
                Typeface typeface = new Typeface("Trebuchet MS");
                FormattedText text = new FormattedText("Page " + (pageNumber + 1).ToString(),
                    CultureInfo.CurrentCulture, FlowDirection.LeftToRight,
                    typeface, 14, Brushes.Black);
                dc.DrawText(text, new Point(page.Size.Width - 100, page.Size.Height - 30));
            }
            newVisual.Children.Add(footer);

            DocumentPage newPage = new DocumentPage(newVisual);
            return newPage;
        }
    }
    ```

    And here is the print dialogue call:

    ```csharp
    private void btnPrint_Click(object sender, RoutedEventArgs e)
    {
        try
        {
            PrintDialog printDialog = new PrintDialog();
            if (printDialog.ShowDialog() == true)
            {
                FlowDocument fd = new FlowDocument();
                MemoryStream stream = new MemoryStream(ASCIIEncoding.Default.GetBytes(<My string of text - RTF formatted>));
                TextRange tr = new TextRange(fd.ContentStart, fd.ContentEnd);
                tr.Load(stream, DataFormats.Rtf);
                stream.Close();

                fd.ColumnWidth = printDialog.PrintableAreaWidth;

                HeaderedFlowDocumentPaginator paginator = new HeaderedFlowDocumentPaginator(fd);
                printDialog.PrintDocument(paginator, "myReport");
            }
        }
        catch (Exception ex)
        {
            // Handle
        }
    }
    ```

    Read the article

  • TFS CM resource recommendations / some questions

    - by John
    I am working with a small development shop that consists of a group of 5 developers and 1 QA person. We are using TFS and need to get more sophisticated on how we use this tool. Currently the development team checks in their code each evening. A nightly build runs and pushes the output out on a network share. Our QA person uses this build for testing the next day. Sometimes the build off the trunk codebase has issues/bugs that hinder the QA process, and it hasn’t been a giant issue in the past, but we now want to get to a state where we have our QA person testing on a stable QA build. So I believe we need to create a branch (call it QA), and the developers will continue to develop off the trunk, but the QA person will use builds created from code in the QA branch. Seems simple enough, but we have started doing code reviews as well. So we have another desire in that only code that has been code reviewed can be promoted to the QA branch. Each developer works off a TFS item, and when they check in a changeset, they do it against a TFS item which creates a link between a checked in code file and a TFS item. Eventually the TFS item becomes complete and ready for code review. All code attached to the TFS item is reviewed. How can the versions of these files get promoted to the QA branch? In the QA branch, if a bug is found, we want to fix it in the QA branch and have the changes migrated back to the trunk. I believe TFS has a way to automatically do this doesn’t it? Long story short, we want to get to a build and CM environment that I believe is pretty standard, but we are unaware of how to make this happen with TFS. Given our situation above, can someone point out a book or website(s) that would address our specific needs? We would like to make this happen without having to get too deep in CM theory or TFS. I very much appreciate any and all suggestions! Thanks, John

    Read the article

  • One EntityManager finds an entity, the other does not

    - by Pitelk
    Hi all, I have very strange behavior in my program. I have 2 classes (LogIn and CreateGame) in each of which I have injected an EntityManager using the annotation:

    ```java
    @PersistenceContext(unitName="myUnitPU")
    EntityManager entityManager;
    ```

    At some point I remove an object called "user" from the database using entityManager.remove(user) from a method in the LogIn class. The business logic is that a user can host and join games (at the same time), so when removing the user, all the entries in the database about the games the user has created are removed, and all the entries showing which games the user has joined are removed also.

    After that, I call another function which checks if the user exists, using a method in the LogIn class: entityManager.find(user). Surprisingly enough, it finds the user. After that I call a method in the CreateGame class which tries to find the user, again using entityManager.find(user); the entity manager in that class fails to find the user (which is the expected result, as the user has been removed and is not in the database).

    So the question is: why does the entity manager in one class find the user (which is wrong) while the other doesn't find it? Has anyone ever had the same problem?

    PS: This "bug" occurs when the user has hosted a game which is joined by another user (let's call him userB) and userB has made a game which is joined by the current user:

    ```
    GAME  | HOST  | CLIENTS
    game1 | user  | userB
    game2 | userB | user
    ```

    In this case, by removing the user, game1 is deleted and the user is removed from game2, so the result is:

    ```
    GAME  | HOST  | CLIENTS
    game2 | userB |
    ```

    PS2: The beans are EJB 3.0. The methods are called from a delegate class. The beans in the delegate class are instantiated using the InitialContext.lookup() method. Note that for logging in, creating and joining games, the appropriate delegate class calls the corresponding EJB which does the transactions. In the case of logout, the delegate calls an EJB to log the user out, but because other stuff must be done (as said above) this EJB calls another EJB (again using lookup()) which has methods like removeGame(), removeUserFromGame() etc. After those methods are executed the user is then logged out. Maybe it has something to do with the fact that the first entity manager is called by a delegate but the second from inside an EJB, and that's why one entity manager can see the non-existent user while the other cannot? Also all the methods have TRANSACTIONTYPE.REQUIRED. Thank you in advance.

    Read the article

  • Why would some POST data go missing when using Uploadify?

    - by Chad Johnson
    I have been using Uploadify in my PHP application for the last couple months, and I've been trying to track down an elusive bug. I receive emails when fatal errors occur, and they provide me a good amount of details. I've received dozens of them. I have not, however, been able to reproduce the problem myself. Some users (like myself) experience no problem, while others do. Before I give details of the problem, here is the flow. User visits edit screen for a page in the CMS I am using. Record id for the page is put into a form as a hidden value. User clicks the Uploadify browse button and selects a file (only single file selection is allowed). User clicks Submit button for my form. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over). Uploadify uploads to a custom process script. Uploadify finishes uploading and triggers the Javascript completion callback. The Javascript callback calls $('#myForm').submit() to submit the form. Now that's what SHOULD happen. I've received reports of the upload freezing at 100% and also others where "I/O Error" is displayed. What's happening is, the form is submitting with the completion callback, but some post parameters present in the form are simply not in the post data. The id for the page, which earlier I said is added to the form as a hidden field, is simply not there in the post data ($_POST)--there is no item for 'id' in the $_POST array. The strange thing is, the post data DOES contain values for some fields. For instance, I have an input of type text called "name" which is for the record name, and it does show up in the post data. Here is what I've gathered: This has been happening on Mac OSX 10.5 and 10.6, Windows XP, and Windows 7. I can post exact user agent strings if that helps. Users must use Flash 10.0.12 or later. We've made it so the form reverts to using a normal "file" field if they have < 10.0.12. Does anyone have ANY ideas at all what the cause of this could be?

    Read the article

  • Experienced developer trying to get outsourcing contract with current client.

    - by Mike
    I work for a major bank as a contract software developer. I've been there a few months, and without exception this place has the worst software practices I've ever seen. The software my team makes has no formal testing, terrible code (not reusable, hard to read, etc), minimal documentation, no defined development process and an absolutely sickening amount of waste due to bureaucratic overhead. Part of my contract is to maintain a group of thousands of very poorly written batch jobs. When one of the jobs fails (read: crashes), it's a developers job to look at the source, figure out what's wrong, fix it, and check it in. There is no quality assurance process or auditing of the results whatsoever. Once the developer says "it works" a manager signs off and it goes into production. What's disturbing is that these jobs essentially grab market data and put it into a third-party risk management system, which provides the bank with critical intelligence. I've discovered the disturbing truth that this has been happening since the 90s and nobody really has evidence the system is getting the correct data! Without going into details, an issue arose on Friday that was so horrible I actually stormed out of the place. I was ready to quit, but I decided to just get out to calm my nerves and possibly go back Monday. I've been reflecting today on how to handle this. I have realized that, in probably less than 6 months, I could (with 2 other developers) remake a large component of this system. The new system would provide them with, as primary benefits, a maintainable codebase less prone to error and a solid QA framework. To do it properly I would have to be outside the bank, the internal bureaucracy is just too much. And moreover, I think a bank is fundamentally not a place that can make good software. This is my plan. Write a report explaining in depth all the problems with their current system Explain why their software practices fail and generate a tremendous amount of error and waste. Use this as the basis for claiming the project must be developed externally. Write a high level development plan, including what resources I will require Hand 1,2,3 to my manager, hopefully he passes it up the chain. Worst case he fires me, but this isn't so bad. Convinced Executive decides to award my company a contract for the new system I have 8 years experience as a software contractor and have delivered my share of successful software products, but all working in-house for small/medium sized companies. When I read this over, I think I have a dynamite plan. But since this is the first time doing something this bold so I have my doubts. My question is, is this a good idea? If you think not, please spare no detail.

    Read the article

  • Historical / auditable database

    - by Mark
    Hi all, this question is related to the schema that can be found in one of my other questions here. Basically in my database I store users, locations, sensors amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change.

    There are also non-editable items in the database, such as "readings". They are more of a log really. Readings are logged against sensors, because each reading is for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes for a location or sensor were at the time of the reading. Basically I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table:

    - valid_from
    - valid_to
    - edited_by

    If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. (A small illustration of this pattern is sketched below.) However, I was never happy with the triggers I needed to use to enforce foreign key consistency.

    I can possibly avoid triggers by using the "period" extension to the PostgreSQL database. This provides a column type called "period" which allows you to store a period of time between two dates, and then allows you to use CHECK constraints to prevent overlapping periods. That might be an answer.

    I am wondering though if there is another way. I've seen people mention using separate historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current" - i.e. only bother to check constraints on records where the valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers). Does anyone have any thoughts about this?

    PS - the title also mentions auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
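    (Not from the original question: a small, hedged sketch of the valid_from/valid_to idea described above, using Python and SQLite purely for illustration; the table and column names are made up, and the question's real schema is a relational database with triggers or CHECK constraints on top.)

    ```python
    import sqlite3

    CURRENT = "9999-12-31 23:59:59"   # sentinel meaning "this is the live row"

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE sensor_history (
                        sensor_id  INTEGER,
                        name       TEXT,
                        valid_from TEXT,
                        valid_to   TEXT,
                        edited_by  TEXT)""")

    def edit_sensor(sensor_id, new_name, when, user):
        # Close the current row, then open a new one valid from `when` onwards.
        conn.execute("UPDATE sensor_history SET valid_to = ? "
                     "WHERE sensor_id = ? AND valid_to = ?",
                     (when, sensor_id, CURRENT))
        conn.execute("INSERT INTO sensor_history VALUES (?, ?, ?, ?, ?)",
                     (sensor_id, new_name, when, CURRENT, user))

    def sensor_as_of(sensor_id, when):
        # Reconstruct the sensor's attributes at any point in time.
        return conn.execute("SELECT name FROM sensor_history "
                            "WHERE sensor_id = ? AND valid_from <= ? AND valid_to > ?",
                            (sensor_id, when, when)).fetchone()

    edit_sensor(1, "roof sensor",  "2010-01-01 00:00:00", "mark")
    edit_sensor(1, "attic sensor", "2010-06-01 00:00:00", "mark")
    print(sensor_as_of(1, "2010-03-15 12:00:00"))  # ('roof sensor',)
    print(sensor_as_of(1, "2010-07-01 12:00:00"))  # ('attic sensor',)
    ```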

    Read the article

  • ASP.NET MVC - PartialView html not changing via jQuery html() call

    - by Bryan Roth
    When I change the selection in a DropDownList, a PartialView gets updated via a GET request. When updating the PartialView via the jQuery html() function, the HTML returned is correct, but when it is displayed in the browser it is not correct. For example, certain checkboxes within the PartialView should become enabled, but they remain disabled even though the HTML returned says they should be enabled. When I do a view source in the browser the HTML never gets updated. I'm a little perplexed. Thoughts?

    Search.aspx

    <%@ Page Title="" Language="VB" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
        Search
    </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <script type="text/javascript">
            $(document).ready(function () {
                $("#Sections").change(function () {
                    var section = $("#Sections").val();
                    var township = $("#Townships").val();
                    var range = $("#Ranges").val();
                    $.get("Search/Search?section=" + section + "&township=" + township + "&range=" + range,
                        function (response) {
                            $("#cornerDiv").html(response);
                        });
                });
            });
        </script>
        <h2>Search</h2>
        <%--The line below is a workaround for a VB / ASPX designer bug--%>
        <%=""%>
        <% Using Ajax.BeginForm("Search", New AjaxOptions With {.UpdateTargetId = "searchResults", .LoadingElementId = "loader"})%>
            Township <%= Html.DropDownList("Townships")%>
            Range <%= Html.DropDownList("Ranges")%>
            Section <%= Html.DropDownList("Sections")%>
            <div id="cornerDiv">
                <% Html.RenderPartial("Corners")%>
            </div>
            <input type="submit" value="Search" />
            <span id="loader">Searching...</span>
        <% End Using%>
        <div id="searchResults"></div>
    </asp:Content>

    Read the article

  • Revision control for writing programming lessons

    - by Dietrich Epp
    I'd like to write a series of programming lessons that guide programmers to build a certain kind of program. After each lesson, I'd like to provide sample code that implements what that lesson covered, and the next lesson would use that code as a starting point. Right now I'm using Git to keep track of the code from lesson to lesson. Each lesson has its own branch.

    lesson1: A--B--C
                    \
    lesson2:         D--E--F
                            \
    lesson3:                 G--H--I

    However, suppose that now I want to make it easier on the Windows programmers using my lessons, so I add a Visual Studio project to lesson 1 and then merge it into lessons 2 and 3.

    lesson1: A--B--C--------------J
                    \              \
    lesson2:         D--E--F--------K
                            \        \
    lesson3:                 G--H--I--L

    And then someone points out a bug in lesson 2 that causes crashes on certain systems. (This diagram is where I am right now, and I'm having doubts about continuing along this path.)

    lesson1: A--B--C--------------J
                    \              \
    lesson2:         D--E--F--------K--M
                            \        \   \
    lesson3:                 G--H--I--L---N

    Here are the problems I imagine having:
    1. If I had many lessons, and I fix something in lesson 1, am I going to have to spend fifteen minutes or more just merging that one simple change? I know I'll probably have to test all of those lessons again, but I can put that off.
    2. When I make a bunch of changes to various lessons on one computer, how do I pull all of the branches at the same time?
    3. If I decide to publish these lessons, I'd like a way to tag all of the branches to correspond with what I publish. I figure I'll just need to tag each branch separately, but it would be nice if there were a better way.
    4. When I look at the history, I imagine becoming terribly confused about what I've done. Compare the above diagram to the hypothetical diagram below, where I use rebase instead of merge (and rebase has its own problems):

    lesson1: A--B--C--J
                       \
    lesson2:            D2--E2--F2--M
                                     \
    lesson3:                          G2--H2--I2

    Do any of you have experience working with a project like this? Should I consider using a different VCS, such as Darcs? (Note: it would be a real pain to use a centralized VCS, so don't suggest one of those unless the benefits are clear.) Should I consider writing plugins or extra tools for a VCS (such as a "meta tag" which tags several branches)?

    Read the article

  • How to write this JavaScript code without eval?

    - by karlthorwald
    How can I write this JavaScript code without eval?

    var typeOfString = eval("typeof " + that.modules[modName].varName);
    if (typeOfString !== "undefined") {
        doSomething();
    }

    The point is that the name of the var that I want to check for is in a string. Maybe it is simple but I don't know how. Edit: Thank you for the very interesting answers so far. I will follow your suggestions, integrate this into my code, do some testing, and report back. Could take a while. Edit2: I had another look at the code, and maybe it is better if I show you the bigger picture. I am grateful to the experts for explaining so beautifully; it is better with more code:

    MYNAMESPACE.Loader = (function() {
        function C() {
            this.modules = {};
            this.required = {};
            this.waitCount = 0;
            this.appendUrl = '';
            this.docHead = document.getElementsByTagName('head')[0];
        }

        function insert() {
            var that = this;
            //insert all script tags to the head now!
            //loop over all modules:
            for (var modName in this.required) {
                if (this.required.hasOwnProperty(modName)) {
                    if (this.required[modName] === 'required') {
                        this.required[modName] = 'loading';
                        this.waitCount = this.waitCount + 1;
                        this.insertModule(modName);
                    }
                }
            }
            //now poll until everything is loaded or
            //until timeout
            this.intervalId = 0;
            var checkFunction = function() {
                if (that.waitCount === 0) {
                    clearInterval(that.intervalId);
                    that.onSuccess();
                    return;
                }
                for (var modName in that.required) {
                    if (that.required.hasOwnProperty(modName)) {
                        if (that.required[modName] === 'loading') {
                            var typeOfString = eval("typeof " + that.modules[modName].varName);
                            if (typeOfString !== "undefined") {
                                //module is loaded!
                                that.required[modName] = 'ok';
                                that.waitCount = that.waitCount - 1;
                                if (that.waitCount === 0) {
                                    clearInterval(that.intervalId);
                                    that.onSuccess();
                                    return;
                                }
                            }
                        }
                    }
                }
            };
            //execute the function twice a second to check if all is loaded:
            this.intervalId = setInterval(checkFunction, 500);
            //further execution will be in checkFunction,
            //so nothing left to do here
        }

        C.prototype.insert = insert;
        //there are more functions here...
        return C;
    }());

    var myLoader = new MYNAMESPACE.Loader();
    //some more lines here...
    myLoader.insert();

    Read the article

  • Can someone point out to me what I did wrong? Trying to map VB to Java using JNA to access the library

    - by henry
    Original working VB code:

    Private Declare Function ConnectReader Lib "rfidhid.dll" () As Integer
    Private Declare Function DisconnectReader Lib "rfidhid.dll" () As Integer
    Private Declare Function SetAntenna Lib "rfidhid.dll" (ByVal mode As Integer) As Integer
    Private Declare Function Inventory Lib "rfidhid.dll" (ByRef tagdata As Byte, ByVal mode As Integer, ByRef taglen As Integer) As Integer

    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        Dim desc As String
        desc = "1. Click ""Connect"" to talk to reader." & vbCr & vbCr
        desc &= "2. Click ""RF On"" to wake up the TAG." & vbCr & vbCr
        desc &= "3. Click ""Read Tag"" to get tag PCEPC."
        lblDesc.Text = desc
    End Sub

    Private Sub cmdConnect_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdConnect.Click
        If cmdConnect.Text = "Connect" Then
            If ConnectReader() Then
                cmdConnect.Text = "Disconnect"
            Else
                MsgBox("Unable to connect to RFID Reader. Please check reader connection.")
            End If
        Else
            If DisconnectReader() Then
                cmdConnect.Text = "Connect"
            End If
        End If
    End Sub

    Private Sub cmdRF_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdRF.Click
        If cmdRF.Text = "RF On" Then
            If SetAntenna(&HFF) Then
                cmdRF.Text = "RF Off"
            End If
        Else
            If SetAntenna(&H0) Then
                cmdRF.Text = "RF On"
            End If
        End If
    End Sub

    Private Sub cmdReadTag_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles cmdReadTag.Click
        Dim tagdata(64) As Byte
        Dim taglen As Integer, cnt As Integer
        Dim pcepc As String
        pcepc = ""
        If Inventory(tagdata(0), 1, taglen) Then
            For cnt = 0 To taglen - 1
                pcepc &= tagdata(cnt).ToString("X2")
            Next
            txtPCEPC.Text = pcepc
        Else
            txtPCEPC.Text = "ReadError"
        End If
    End Sub

    Java code (simplified):

    import com.sun.jna.Library;
    import com.sun.jna.Native;

    public class HelloWorld {

        public interface MyLibrary extends Library {
            public int ConnectReader();
            public int SetAntenna(int mode);
            public int Inventory(byte tagdata, int mode, int taglen);
        }

        public static void main(String[] args) {
            MyLibrary lib = (MyLibrary) Native.loadLibrary("rfidhid", MyLibrary.class);
            System.out.println(lib.ConnectReader());
            System.out.println(lib.SetAntenna(255));

            byte[] tagdata = new byte[64];
            int taglen = 0;
            int cnt;
            String pcepc;
            pcepc = "";
            if (lib.Inventory(tagdata[0], 1, taglen) == 1) {
                for (cnt = 0; cnt < taglen; cnt++)
                    pcepc += String.valueOf(tagdata[cnt]);
            }
        }
    }

    The error happens when lib.Inventory is run. lib.Inventory is used to get the tag from the RFID reader. If there is no tag, no error. The error output:

    An unexpected error has been detected by Java Runtime Environment:
    EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x0b1d41ab, pid=5744, tid=4584
    Java VM: Java HotSpot(TM) Client VM (11.2-b01 mixed mode windows-x86)
    Problematic frame: C [rfidhid.dll+0x141ab]
    An error report file with more information is saved as:
    C:\eclipse\workspace\FelmiReader\hs_err_pid5744.log
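    A note on a likely cause, based only on the declarations above: in the VB version both tagdata and taglen are passed ByRef, so the DLL expects pointers to a byte buffer and to an integer, while the Java interface passes a single byte and an int by value. A closer JNA mapping is sketched below; it is an unverified sketch against this particular DLL, the class and variable names are illustrative, and if rfidhid.dll uses the stdcall calling convention the interface may need to extend com.sun.jna.win32.StdCallLibrary instead of Library.

    import com.sun.jna.Library;
    import com.sun.jna.Native;
    import com.sun.jna.ptr.IntByReference;

    public class RfidReaderSketch {

        // ByRef parameters in the VB declaration become a Java array (the tag
        // buffer) and an IntByReference (the returned length), so JNA passes
        // pointers rather than by-value primitives.
        public interface RfidLibrary extends Library {
            int ConnectReader();
            int DisconnectReader();
            int SetAntenna(int mode);
            int Inventory(byte[] tagdata, int mode, IntByReference taglen);
        }

        public static void main(String[] args) {
            RfidLibrary lib = (RfidLibrary) Native.loadLibrary("rfidhid", RfidLibrary.class);
            if (lib.ConnectReader() == 0) {
                System.out.println("Unable to connect to RFID reader.");
                return;
            }
            lib.SetAntenna(0xFF); // RF on, mirroring the &HFF call in the VB sample

            byte[] tagdata = new byte[64];                // buffer the DLL writes into
            IntByReference taglen = new IntByReference(); // length the DLL writes back
            if (lib.Inventory(tagdata, 1, taglen) != 0) {
                StringBuilder pcepc = new StringBuilder();
                for (int i = 0; i < taglen.getValue(); i++) {
                    pcepc.append(String.format("%02X", tagdata[i]));
                }
                System.out.println(pcepc);
            } else {
                System.out.println("ReadError");
            }
            lib.DisconnectReader();
        }
    }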

    Read the article
