Search Results

Search found 24624 results on 985 pages for 'linux rrt'.

  • shopify_app syntax error

    - by Pete171
    Edit: Debugging has got me further; question clarified. We have installed Ruby, RubyGems and Rails and have forked the shopify_app project. We have created a new Rails application and added three items to the Gemfile: execjs, therubyracer and shopify_app. Running rails s to start our Rails application returns this trace:

        root@ubuntu:/usr/local/pete-shopify/cart# rails s
        Faraday: you may want to install system_timer for reliable timeouts
        /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app.rb:15:in `require': /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app/login_protection.rb:5: syntax error, unexpected ':', expecting kEND (SyntaxError)
        ...rce::UnauthorizedAccess, with: :close_session
                                        ^
        from /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app.rb:15
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:68:in `require'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:68:in `require'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:66:in `each'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:66:in `require'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:55:in `each'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler/runtime.rb:55:in `require'
        from /var/lib/gems/1.8/gems/bundler-1.2.1/lib/bundler.rb:128:in `require'
        from /usr/local/pete-shopify/cart/config/application.rb:7
        from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:53:in `require'
        from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:53
        from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:50:in `tap'
        from /var/lib/gems/1.8/gems/railties-3.2.8/lib/rails/commands.rb:50
        from script/rails:6:in `require'
        from script/rails:6

    I haven't modified any files since forking from GitHub. Lines 1-6 of login_protection.rb are as follows:

        module ShopifyApp::LoginProtection
          extend ActiveSupport::Concern

          included do
            rescue_from ActiveResource::UnauthorizedAccess, with: :close_session
          end

    I've looked into this, and it seems the error is caused by the new-style hash syntax introduced between Ruby 1.8 and 1.9 (key: value instead of :key => value). Running ruby -v from the command line returns ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux], which would seem to be OK... but I did some debugging: inside the file /var/lib/gems/1.8/gems/shopify_app-4.1.0/lib/shopify_app.rb (at the top) I put this:

        puts RUBY_VERSION
        exit

    It printed 1.8.7. Why are ruby -v and RUBY_VERSION giving me different results? And am I correct in assuming this is the cause of my problems?

    Note: to upgrade Ruby I installed the later version with apt-get and then switched to it using update-alternatives --config ruby, selecting option 2, like this:

        root@ubuntu:/usr/local/pete-shopify/cart# update-alternatives --config ruby
        There are 2 choices for the alternative ruby (providing /usr/bin/ruby).

          Selection    Path                 Priority   Status
          ------------------------------------------------------------
          0            /usr/bin/ruby1.8     50         auto mode
          1            /usr/bin/ruby1.8     50         manual mode
        * 2            /usr/bin/ruby1.9.1   10         manual mode

    Also note: we're PHP/Python developers, so this is all new to us! Summary: 1) Am I right in determining the cause of the syntax error? 2) Why do RUBY_VERSION and ruby -v give me different results?
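
    The mismatch described above is language-agnostic: the interpreter that actually executes a script is chosen by the script's shebang line (script/rails here, which was installed by the 1.8 gems), not by whatever the shell's PATH or update-alternatives currently points at. A minimal Python sketch of the same distinction, purely illustrative (the Ruby case behaves analogously):

        import shutil
        import sys

        # The binary actually executing this script (the analogue of RUBY_VERSION):
        print("running under:   ", sys.executable)

        # The binary the shell would launch for a bare "python" (the analogue of `ruby -v`):
        print("PATH resolves to:", shutil.which("python"))

    If the two lines disagree, the script is being started by a different interpreter than the one the shell reports, which is the likely mechanism behind the 1.8.7 result.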

  • UNIX: Replace Newline w/ Colon, Preserving Newline Before EOF

    - by Maarx
    I have a text file ("INPUT.txt") of the format:

        A<LF>
        B<LF>
        C<LF>
        D<LF>
        X<LF>
        Y<LF>
        Z<LF>
        <EOF>

    which I need to reformat to:

        A:B:C:D:X:Y:Z<LF>
        <EOF>

    I know you can do this with sed; there are a billion Google hits for doing this with sed. But I'm trying to emphasize readability, simplicity, and using the correct tool for the correct job. sed is a line editor that consumes and hides newlines, so it's probably not the right tool for this job! I think the correct tool would be tr. I can replace all the newlines with colons with the command:

        cat INPUT.txt | tr '\n' ':'

    There's 99% of my work done. But now I have a problem: by replacing all the newlines with colons, I not only get an extraneous colon at the end of the sequence, I also lose the newline at the end of the input. It looks like this:

        A:B:C:D:X:Y:Z:<EOF>

    Now I need to remove the colon from the end of the input. However, if I attempt to pass this processed input through sed to remove the final colon (which would now, I think, be a proper use of sed), I find myself with a second problem: the input is no longer terminated by a newline at all! sed fails outright, for all commands, because it never finds the end of the first line of input! Appending a newline to the end of some input seems like a very, very common task, and considering I myself was just sorely tempted to write a program to do it in C (which would take about eight lines of code), I can't imagine there isn't already a very simple way to do this with the tools available on a Linux system.
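
    For what it's worth, the whole transformation can be expressed as a single join in most scripting languages, which sidesteps both the trailing colon and the lost newline at once (on most systems, paste -s -d: INPUT.txt does the same in one shell command, though that steps outside the tr/sed framing above). A minimal Python sketch:

        # Join the lines of INPUT.txt with colons, keeping the final newline.
        with open("INPUT.txt") as f:
            lines = f.read().splitlines()   # strips each trailing newline

        print(":".join(lines))              # print() appends exactly one newline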

  • Phantomjs creating black output from SVG using page.render

    - by Neil Young
    I have been running PhantomJS 1.9.6 happily on a Turnkey Linux server for about 4 months now. Its purpose is to take an SVG file and create different sizes using the page.render function. It has been doing this, but a few days ago it started to generate black, mono output. The code:

        var page = require('webpage').create(),
            system = require('system'),
            address, output, ext, width, height;

        if (system.args.length !== 4) {
            console.log("{ \"result\": false, \"message\": \"phantomjs.rasterize: error need address, output, extension arguments\" }");
            //console.log('phantomjs.rasterize: error need address, output, extension arguments');
            phantom.exit(1);
        } else if (system.args[3] !== "jpg" && system.args[3] !== "png") {
            console.log("{ \"result\": false, \"message\": \"phantomjs.rasterize: error \"jpg\" or \"png\" only please\" }");
            //console.log('phantomjs.rasterize: error "jpg" or "png" only please');
            phantom.exit(1);
        } else {
            address = system.args[1];
            output = system.args[2];
            ext = system.args[3];
            width = 1044;
            height = 738;
            page.viewportSize = { width: width, height: height }; // postcard size
            page.open(address, function (status) {
                if (status !== 'success') {
                    console.log("{ \"result\": false, \"message\": \"phantomjs.rasterize: error loading address ["+address+"]\" }");
                    //console.log('phantomjs.rasterize: error loading address ['+address+'] ');
                    phantom.exit();
                } else {
                    window.setTimeout(function () {
                        //--> render full size postcard
                        page.render( output + "." + ext );
                        //--> render smaller postcard
                        page.zoomFactor = 0.5;
                        page.clipRect = {top: 0, left: 0, width: width*0.5, height: height*0.5};
                        page.render( output + ".50." + ext);
                        //--> render postcard thumb
                        page.zoomFactor = 0.25;
                        page.clipRect = {top: 0, left: 0, width: width*0.25, height: height*0.25};
                        page.render( output + ".25." + ext);
                        //--> exit
                        console.log("{ \"result\": true, \"message\": \"phantomjs.rasterize: success ["+address+"]>>["+output+"."+ext+"]\" }");
                        //console.log('phantomjs.rasterize: success ['+address+']>>['+output+'.'+ext+']');
                        phantom.exit();
                    }, 100);
                }
            });
        }

    Does anyone know what could be causing this? There have been no server configuration changes that I know of. Many thanks for your help.

  • Spring/RMI server error

    - by 4herpsand7derpsago
    We have a Spring MVC web app (WAR) deployed to Tomcat (6.0.35) that launches a thread inside a separate JVM at deploy time (don't ask why - not my design) and then communicates with that thread via RMI over port 8888. Despite being totally convoluted, this was working perfectly fine up until yesterday; now the thread is failing at startup, and despite our best efforts to add logging into the mix, we are hitting a wall. This is the only exception we are able to find in the logs:

        Jun 12, 2012 3:11:36 AM com.ourapp.ImageController destroy
        SEVERE: Shutdown Error: Lookup of RMI stub failed; nested exception is java.rmi.ConnectException: Connection refused to host: localhost; nested exception is:
            java.net.ConnectException: Connection refused
        Jun 12, 2012 3:11:37 AM org.apache.catalina.core.StandardContext listenerStop
        SEVERE: Exception sending context destroyed event to listener instance of class org.springframework.web.context.ContextLoaderListener
        java.lang.NoClassDefFoundError: org/springframework/web/context/ContextCleanupListener
            at org.springframework.web.context.ContextLoaderListener.contextDestroyed(ContextLoaderListener.java:80)
            at org.apache.catalina.core.StandardContext.listenerStop(StandardContext.java:3973)
            at org.apache.catalina.core.StandardContext.stop(StandardContext.java:4577)
            at org.apache.catalina.startup.HostConfig.checkResources(HostConfig.java:1165)
            at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1271)
            at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:296)
            at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
            at org.apache.catalina.core.ContainerBase.backgroundProcess(ContainerBase.java:1337)
            at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1601)
            at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.processChildren(ContainerBase.java:1610)
            at org.apache.catalina.core.ContainerBase$ContainerBackgroundProcessor.run(ContainerBase.java:1590)
            at java.lang.Thread.run(Thread.java:662)
        Caused by: java.lang.ClassNotFoundException: org.springframework.web.context.ContextCleanupListener
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1387)
            at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1233)
            ... 12 more

    The ImageController is the Spring MVC controller responsible for kicking off this daemon/spawned RMI thread. Based on the verbiage of this error, does anybody have any idea what might be causing the "connection refused" error? Running netstat -an | grep 8888 (this is a Linux machine) produces no output, which means nothing is listening on that port. Thanks in advance for any ideas/suggestions that lead to a fix.

    Edit: Here's another ConnectException we're seeing:

        Caused by: java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:529)
            at java.net.Socket.connect(Socket.java:478)
            at java.net.Socket.<init>(Socket.java:375)
            at java.net.Socket.<init>(Socket.java:189)
            at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
            at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
            at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)
            ... 74 more
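
    The empty netstat output is the telling part: "connection refused" from RMI simply means no process was bound to the port at lookup time, i.e. the spawned JVM either died or never exported its registry. A hedged Python sketch of a probe for confirming that from a script or cron job (a hypothetical helper, not part of the app itself):

        import socket

        def port_open(host: str = "localhost", port: int = 8888, timeout: float = 2.0) -> bool:
            """Return True if something accepts TCP connections on host:port."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        print(port_open())   # False here, matching the empty netstat output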

  • Coldfusion Report Builder - How can you set different datasources externally between prod/staging/dev?

    - by Smooth Operator
    ColdFusion Report Builder is great. One small issue. We use ANT + CFAnt to deploy. Say we create a report against a datasource called MyApp_Dev on a dev box; everything works great when the report is created. We deploy the report to our staging server, which has a datasource of MyApp_Staging. That server may or may not also have the live app working under MyApp_Live. Ant pushes the update to staging just fine. Run the report: it crashes and burns. Why? The report is looking for the MyApp_Dev datasource, even though the application is using the MyApp_Staging datasource. In digging around I found a few approaches; I would like to do this the final, ideal way from the beginning, instead of having to go back and redo dozens of reports when I have a new "Aha!" moment.

    1) Obvious: pass the datasource into the cfreport tag. Doesn't work for ColdFusion Report Builder reports as of CF8, or CF9 as tested on Linux.

    2) Most realistic option (but painful) so far: pass the query as an object into the Report Builder report. Let's think about this: create the report to my heart's content using RDS etc. on my local box. When I'm done, copy the query into a snippet of code, or into a database column, to be injected dynamically at runtime with the correct datasource. Modify my "run report" event to find the query in the database column, insert it into another dynamic cfquery and potentially... evaluate (!?!) it? The upside is that I can set the cfquery datasource to what I need for each environment. But when I modify the report's columns in Report Builder, I always have to update the query in the database. Is there a snippet of code that can extract this for me? Hmm.

    3) Less than ideal: suck it up and let all the reports in staging run off the live server. Maybe copy the live data into staging (sans structural changes) so it seems similar.

    Are there any eloquent ways to accomplish the above? Thanks in advance!
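
    Whichever option wins, the environment-to-datasource mapping itself is trivial to externalize; a hedged sketch of that lookup, with hypothetical names, written in Python for brevity since the pattern is language-neutral (the CFML equivalent would live in Application.cfc):

        import os

        # Hypothetical mapping; one entry per deployment environment.
        DSN_BY_ENV = {
            "dev": "MyApp_Dev",
            "staging": "MyApp_Staging",
            "live": "MyApp_Live",
        }

        dsn = DSN_BY_ENV[os.environ.get("APP_ENV", "dev")]
        print("reports should run against:", dsn)

    The remaining (hard) part, which the question is really about, is getting the report to honour that value rather than the datasource baked in at design time.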

  • During npm install socket.io I get error 127, node-waf command not found. How to solve it?

    - by SandyWeb
    I'm trying to install socket.io on CentOS 5 with the Node.js package manager. During installation I got the errors "make: node-waf: Command not found" and "This is most likely a problem with the ws package":

        # npm install socket.io
        npm http GET https://registry.npmjs.org/socket.io
        npm http 304 https://registry.npmjs.org/socket.io
        npm http GET https://registry.npmjs.org/policyfile/0.0.4
        npm http GET https://registry.npmjs.org/redis/0.6.7
        npm http GET https://registry.npmjs.org/socket.io-client/0.9.2
        npm http 304 https://registry.npmjs.org/policyfile/0.0.4
        npm http 304 https://registry.npmjs.org/socket.io-client/0.9.2
        npm http 304 https://registry.npmjs.org/redis/0.6.7
        npm http GET https://registry.npmjs.org/uglify-js/1.2.5
        npm http GET https://registry.npmjs.org/ws
        npm http GET https://registry.npmjs.org/xmlhttprequest/1.2.2
        npm http GET https://registry.npmjs.org/active-x-obfuscator/0.0.1
        npm http 304 https://registry.npmjs.org/xmlhttprequest/1.2.2
        npm http 304 https://registry.npmjs.org/uglify-js/1.2.5
        npm http 304 https://registry.npmjs.org/ws
        npm http 304 https://registry.npmjs.org/active-x-obfuscator/0.0.1
        npm http GET https://registry.npmjs.org/zeparser/0.0.5

        > [email protected] preinstall /root/node_modules/socket.io/node_modules/socket.io-client/node_modules/ws
        > make

        node-waf configure build
        make: node-waf: Command not found
        make: *** [all] Error 127
        npm ERR! [email protected] preinstall: `make`
        npm ERR! `sh "-c" "make"` failed with 2
        npm ERR!
        npm ERR! Failed at the [email protected] preinstall script.
        npm ERR! This is most likely a problem with the ws package,
        npm ERR! not with npm itself.
        npm ERR! Tell the author that this fails on your system:
        npm ERR!     make
        npm ERR! You can get their info via:
        npm ERR!     npm owner ls ws
        npm ERR! There is likely additional logging output above.
        npm ERR!
        npm ERR! System Linux 2.6.18-194.17.4.el5
        npm ERR! command "node" "/usr/bin/npm" "install" "socket.io"
        npm ERR! cwd /root
        npm ERR! node -v v0.6.13
        npm ERR! npm -v 1.1.10
        npm ERR! code ELIFECYCLE
        npm ERR! message [email protected] preinstall: `make`
        npm ERR! message `sh "-c" "make"` failed with 2
        npm ERR! errno {}
        npm ERR!
        npm ERR! Additional logging details can be found in:
        npm ERR!     /root/npm-debug.log
        npm not ok

    What is "node-waf" and how can I solve this problem? Thanks!

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in a customer's environment. There are two types of environments: Windows and Linux/Unix. The application is user based, no web stuff, pure Java. The requirement is to authenticate the users of my application against a customer-provided user base. Meaning: the customer installs my app, but uses his own users to grant or deny access to it. Typical, right? I have three options to consider, and I need to pick the one that would be a) the most flexible, covering the most common modern environments, and b) the least effort while staying robust and standard.

    Option 1 - Authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would add his users to my application, and it would then check the passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we would have to provide. Lots of work for me, a headache for the customer.

    Option 2 - Use LDAP authentication. Customers would tell my app where to look for users, and I would walk their directory, resolving names into user names and trying to bind with the found password. This is a better approach IMO, but more fragile, because I would have to walk an unknown directory structure, and who knows if that will be permitted everywhere. It would also be harder to test, since there are many LDAP implementations out there; the last thing I want is to drown in this voodoo.

    Option 3 - Use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, while customers could use their own administration tools to configure the domain and KDC. My application would simply delegate user credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them.

    I personally prefer option 3, but I'm not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?
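
    One note on option 1: if local storage wins, the credentials file should hold salted password hashes rather than passwords themselves. A minimal Python sketch of just that piece, as an illustration of the pattern (the JDK exposes the same PBKDF2 primitive through javax.crypto):

        import hashlib
        import hmac
        import os

        def hash_password(password: str, salt: bytes = None):
            """Derive a salted PBKDF2 hash suitable for storing in a local file."""
            salt = salt or os.urandom(16)
            digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
            return salt, digest

        def verify(password: str, salt: bytes, expected: bytes) -> bool:
            # Constant-time comparison avoids leaking how many bytes matched.
            return hmac.compare_digest(hash_password(password, salt)[1], expected)

        salt, stored = hash_password("s3cret")
        print(verify("s3cret", salt, stored))   # True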

  • Is this scenario in compliance with GPLv3?

    - by Sean Kinsey
    For argument's sake, say that we create a web application that depends on a GPLv3-licensed component, let's say Ext JS. Based on Section 0 of the license, the common notion is that the entire web application (the client-side JavaScript) falls under the definition of a covered work:

        A "covered work" means either the unmodified Program or a work based on the Program.

    and that it will therefore have to be distributed under the same license. OK, so here comes the fun part. This is a short 'program' that is based on Ext JS:

        var myPanel = new Ext.Panel();

    The question that arises is: have I now violated the GPL by not including the source of Ext JS and its license? OK, so let's take another example:

        <!doctype html>
        <html>
        <head>
          <title>my title</title>
          <script type="text/javascript" src="http://extjs.cachefly.net/ext-3.2.1/ext-all.js"></script>
          <link rel="stylesheet" type="text/css" href="http://extjs.cachefly.net/ext-3.2.1/resources/css/ext-all.css" />
          <script type="text/javascript">
            var myPanel = new Ext.Panel();
          </script>
        </head>
        <body>
        </body>
        </html>

    Have I now violated the terms of the GPL? The code conveyed by me to you is in a non-functional state; it has to be combined with the actual source of Ext JS, which you (your browser) will have to retrieve from a source made public by someone else to be usable. Now, if the answer to the above is no, how does me conveying this code in visible form differ from the 'invisible' form conveyed by my web server? As a side note, a very similar thing is done on Linux with many projects that depend on less permissive licenses: the user has to retrieve these on his own and make them available to the primary lib/executable. How is this not the same, if the user is informed beforehand that he (the browser) will have to retrieve the needed resources from a different source? Just to make it clear, I'm pro FLOSS, and I have also published a number of projects licensed under more permissive licenses. The reason I'm asking this is that I still haven't found anyone offering a definitive answer to it.

  • Any tool(s) for knowing the layout (segments) of running process in Windows?

    - by claws
    I've always been curious about how exactly a process looks in memory. What are the different segments (parts) in it? How exactly are the program (on disk) and the process (in memory) related? My previous question: http://stackoverflow.com/questions/1966920/more-info-on-memory-layout-of-an-executable-program-process

    In my quest I finally found an answer: this excellent article cleared up most of my queries: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html

    In the above article the author shows how to get the different segments of a process (Linux) and compares them with the corresponding ELF file. I'm quoting that section here:

        Curious to see the real layout of process segments? We can use the /proc/<pid>/maps file to reveal it, where <pid> is the PID of the process we want to observe. Before we move on, we have a small problem here: our test program runs so fast that it ends before we can even dump the related /proc entry. I use gdb to solve this. You can use another trick, such as inserting sleep() before it calls return(). In a console (or a terminal emulator such as xterm) do:

            $ gdb test
            (gdb) b main
            Breakpoint 1 at 0x8048376
            (gdb) r
            Breakpoint 1, 0x08048376 in main ()

        Hold right here, open another console, and find out the PID of the program "test". If you want the quick way, type:

            $ cat /proc/`pgrep test`/maps

        You will see an output like below (you might get different output):

            [1]  0039d000-003b2000 r-xp 00000000 16:41 1080084 /lib/ld-2.3.3.so
            [2]  003b2000-003b3000 r--p 00014000 16:41 1080084 /lib/ld-2.3.3.so
            [3]  003b3000-003b4000 rw-p 00015000 16:41 1080084 /lib/ld-2.3.3.so
            [4]  003b6000-004cb000 r-xp 00000000 16:41 1080085 /lib/tls/libc-2.3.3.so
            [5]  004cb000-004cd000 r--p 00115000 16:41 1080085 /lib/tls/libc-2.3.3.so
            [6]  004cd000-004cf000 rw-p 00117000 16:41 1080085 /lib/tls/libc-2.3.3.so
            [7]  004cf000-004d1000 rw-p 004cf000 00:00 0
            [8]  08048000-08049000 r-xp 00000000 16:06 66970 /tmp/test
            [9]  08049000-0804a000 rw-p 00000000 16:06 66970 /tmp/test
            [10] b7fec000-b7fed000 rw-p b7fec000 00:00 0
            [11] bffeb000-c0000000 rw-p bffeb000 00:00 0
            [12] ffffe000-fffff000 ---p 00000000 00:00 0

        Note: I added numbers to each line as a reference. Back in gdb, type:

            (gdb) q

        So, in total, we see 12 segments (also known as Virtual Memory Areas, VMAs).

    But I want to know about Windows processes and the PE file format. Are there any tools for getting the layout (segments) of a running process in Windows? Any other good resources for learning more on this subject?

    EDIT: Are there any good articles that show the mapping between PE file sections and VA segments?
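
    On Linux, the same dump takes a few lines of Python and sidesteps the "program exits too fast" problem entirely, since the process inspects itself. (On the Windows side, Sysinternals VMMap is the usual tool for viewing a live process's memory layout, and dumpbin /headers for viewing PE sections; worth verifying against your own toolchain.)

        # Linux-only sketch: print the memory segments (VMAs) of this very process.
        with open("/proc/self/maps") as maps:
            for line in maps:
                print(line.rstrip())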

  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL documentation analysis: this question pertains to the usage of the HMAC routines in OpenSSL. Since the OpenSSL documentation is a tad on the weak side in certain areas, profiling has revealed that using:

        unsigned char *HMAC(const EVP_MD *evp_md, const void *key,
                            int key_len, const unsigned char *d, int n,
                            unsigned char *md, unsigned int *md_len);

    (from here) shows 40% of my library runtime is devoted to creating and tearing down HMAC_CTXs behind the scenes. There are also two additional functions to create and destroy an HMAC_CTX explicitly:

        HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called.
        HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources. It must be called when an HMAC_CTX is no longer required.

    These two function calls are prefixed with: "The following functions may be used if the message is not completely stored in memory." My data fits entirely in memory, so I chose the HMAC function, the one whose signature is shown above. The context, as described by the man page, is made use of via the following two functions:

        HMAC_Update() can be called repeatedly with chunks of the message to be authenticated (len bytes at data).
        HMAC_Final() places the message authentication code in md, which must have space for the hash function output.

    The scope of the application: my application generates an authenticated (HMAC, which is also used as a nonce), CBC-BF (Blowfish)-encrypted protocol buffer string. The code will be interfaced with various web servers and frameworks: Windows/Linux as OS; nginx, Apache and IIS as web servers; and Python/.NET and C++ web-server filters. The description above should clarify that the library needs to be thread safe and potentially have resumable processing state, i.e. lightweight threads sharing an OS thread (which might leave thread-local memory out of the picture).

    The question: how do I get rid of the 40% overhead on each invocation in a (1) thread-safe / (2) resumable-state way? (2) is optional, since I have all of the source data present in one go and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So (1) can probably be done using thread-local memory, but how do I reuse the CTXs? Does the HMAC_Final() call make the CTX reusable? (2) optional: in this case I would have to create a pool of CTXs. (3) How does the HMAC function do this? Does it create a CTX in the scope of the function call and destroy it? Pseudocode and commentary would be useful.
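
    The amortization being asked about (key the context once, then reuse the keyed state per message) is easy to illustrate outside C. A hedged Python sketch of the pattern using the standard hmac module, where copy() plays the role of cloning a keyed context; the OpenSSL specifics of when an HMAC_CTX is legally reusable still need to be checked against its own docs:

        import hashlib
        import hmac

        KEY = b"secret-key"

        # Pay the keying cost once per key (and per thread, if thread safety matters).
        _base = hmac.new(KEY, digestmod=hashlib.sha256)

        def mac(message: bytes) -> bytes:
            h = _base.copy()      # clones the keyed state instead of re-deriving it
            h.update(message)
            return h.digest()

        print(mac(b"hello").hex())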

  • Job queue manager with RPC interface

    - by admr
    I need a job queue manager that I can control over the Internet. It should be able to execute and stop processes, check on their status (ideally notice and execute some code when a process exits), respond to commands, and also be able to report back to a server.

    Background: I have a GWT application that allows jobs to be created for execution on a cloud instance (currently EC2). I want to push a "job packet" (data for a process to operate on, etc.) to S3, start a Linux EC2 instance (or use one that's already running), and tell a job manager on the instance to execute that job (possibly in parallel to other jobs). It should then pull the "job packet" from S3, run a process that operates on that data, and report back to the server running the server part of my GWT application with some information (e.g. exit code, stdout, stderr). If I have to write e.g. stdout/stderr to a file from the process and read that file, that's OK too. I would really like the manager to be "close" to the processes it runs, meaning I want to avoid using something like Runtime.exec from the JDK; it seems like I would have to do that if I used Quartz, for example. I'm fine with the calls in both directions being asynchronous, and with any reasonable technology for the calls, as long as I can easily build an interface for them on my GWT server side (e.g. HTTP requests to a servlet over SSL would be nice and trivial).

    The job manager does not need a very sophisticated queueing system; running several processes either sequentially or in parallel would be fine. Determining how much compute time a process received during its lifetime would be nice (AFAIK, this might be challenging). I have not yet found any existing software that does this, including http://java-source.net/open-source/job-schedulers. I suspect I might have to build an RPC interface (with authentication etc., of course) around a job manager; maybe use something like Apache Commons Exec. In that case, I would prefer Java or Python for the job manager part. I would be happy to hear suggestions for either the former or the latter scenario!
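
    Since Python is named as an acceptable implementation language, here is a hedged sketch of the core loop such a manager would need: spawn a process, notice when it exits, and record the exit code and output for the RPC layer to report. Everything around it (the HTTP/RPC interface, authentication, the S3 pull) is deliberately omitted:

        import subprocess
        import threading
        import uuid

        jobs = {}   # job_id -> status dict; this is what the RPC layer would expose

        def start_job(cmd):
            """Spawn cmd, return a job id, and record the result when it exits."""
            job_id = str(uuid.uuid4())
            proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE, text=True)
            jobs[job_id] = {"state": "running", "pid": proc.pid}

            def reap():
                out, err = proc.communicate()   # blocks until the process exits
                jobs[job_id] = {"state": "done", "exit": proc.returncode,
                                "stdout": out, "stderr": err}

            threading.Thread(target=reap, daemon=True).start()
            return job_id

        jid = start_job(["echo", "hello"])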

  • How to load a springframework ApplicationContext from Jython

    - by staticman
    I have a class that loads a Spring application context like so:

        package com.offlinesupport;

        import org.springframework.context.ApplicationContext;
        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class OfflineScriptSupport {

          private static ApplicationContext appCtx;

          public static final void initialize() {
            appCtx = new ClassPathXmlApplicationContext(
                new String[] { "mycontext.spring.xml" } );
          }

          public static final ApplicationContext getApplicationContext() {
            return appCtx;
          }

          public static final void main( String[] args ) {
            System.out.println( "Starting..." );
            initialize();
            System.out.println( "loaded" );
          }
        }

    The class OfflineScriptSupport and the file mycontext.spring.xml are each deployed into separate jars (along with other classes and resources in their respective modules). Let's say the jar files are OfflineScriptSupport.jar and MyContext.jar; mycontext.spring.xml is put at the root of its jar. In a Jython script (myscript.jy) I try to call the initialize method to create the application context:

        from com.offlinesupport import OfflineScriptSupport
        OfflineScriptSupport.initialize();

    I execute the Jython script with the following command (from Linux):

        jython -Dpython.path=spring.jar:OfflineScriptSupport.jar:MyContext.jar myscript.jy

    The Spring application context cannot find the mycontext.spring.xml file. It displays the following error:

        java.io.FileNotFoundException: class path resource [mycontext.spring.xml] cannot be opened because it does not exist
            at org.springframework.core.io.ClassPathResource.getInputStream(ClassPathResource.java:137)
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:167)
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:148)
            at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:126)
            at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:142)
            at org.springframework.context.support.AbstractXmlApplicationContext.loadBeanDefinitions(AbstractXmlApplicationContext.java:113)
            at org.springframework.context.support.AbstractXmlApplicationContext.loadBeanDefinitions(AbstractXmlApplicationContext.java:81)
            at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:89)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:269)
            at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:87)
            at org.springframework.context.support.ClassPathXmlApplicationContext.<init>(ClassPathXmlApplicationContext.java:72)
            at com.offlinesupport.OfflineScriptSupport.initialize(OfflineScriptSupport.java:27)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)

    If I run the jar directly from Java (using the main entry point in OfflineScriptSupport) it works and no error is thrown. Is there something special about the way Jython handles classpaths that makes Spring's ClassPathXmlApplicationContext not work (i.e. not be able to find resource files in the classpath)?
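
    One thing worth checking, as an assumption rather than a confirmed diagnosis: -Dpython.path feeds Jython's module search path, which is not necessarily the same thing as the JVM classpath that ClassPathXmlApplicationContext searches for resources. A small snippet, run under Jython, makes the two visible side by side:

        # Run this under Jython to compare the two paths.
        import sys
        from java.lang import System   # works in Jython only

        print("python.path (Jython modules):   ", sys.path)
        print("java.class.path (Spring's view):", System.getProperty("java.class.path"))

    If the jars appear in the first list but not the second, putting them on the real Java classpath (for example via the CLASSPATH environment variable, or the java -cp invocation that starts Jython) would be the thing to try.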

  • Nasty redirect loop in WordPress (trailing slash, no trailing slash, and so on)

    - by Brett W. Thompson
    Hi, I read a ton of pages and tried lots of solutions, but none have worked yet! My problem is that:

        test.asifa.net/asifa-wp

    redirects to:

        test.asifa.net/asifa-wp/

    which redirects to the first page. What's a little bizarre is that asifa-wp produces:

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="http://test.asifa.net/asifa-wp/">here</a>.</p>
        </body></html>

    whereas asifa-wp/ produces an empty page but the following headers (curl -v output):

        * About to connect() to test.asifa.net port 80 (#0)
        *   Trying 69.163.203.138... connected
        * Connected to test.asifa.net (69.163.203.138) port 80 (#0)
        > GET /asifa-wp/ HTTP/1.1
        > User-Agent: curl/7.18.2 (i386-redhat-linux-gnu) libcurl/7.18.2 NSS/3.12.0.3 zlib/1.2.3 libidn/0.6.14 libssh2/0.18
        > Host: test.asifa.net
        > Accept: */*
        >
        < HTTP/1.1 301 Moved Permanently
        < Date: Sun, 13 Jun 2010 05:40:12 GMT
        < Server: Apache
        < X-Powered-By: PHP/5.2.13
        < X-Pingback: http://test.asifa.net/asifa-wp/xmlrpc.php
        < Set-Cookie: _icl_current_language=en; expires=Mon, 14-Jun-2010 05:40:12 GMT; path=/asifa-wp/
        < Location: http://test.asifa.net/asifa-wp
        < Vary: Accept-Encoding
        < Content-Length: 0
        < Content-Type: text/html; charset=UTF-8

    .htaccess looks like:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /asifa-wp/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /asifa-wp/index.php [L]
        </IfModule>
        # END WordPress

    Any help at all would be tremendously appreciated!!!
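
    A quick way to see the whole loop in one place is to walk the redirect chain hop by hop without following it automatically. A hedged sketch using the third-party requests package (URL taken from the post above):

        import requests

        url = "http://test.asifa.net/asifa-wp"
        for _ in range(6):   # stop after a few hops; a loop will just repeat
            resp = requests.get(url, allow_redirects=False)
            print(resp.status_code, url, "->", resp.headers.get("Location"))
            if resp.status_code not in (301, 302):
                break
            url = resp.headers["Location"]

    With the headers above, this prints the slash/no-slash pair bouncing back and forth, which narrows the suspects to whatever issues each of the two 301s (Apache's DirectorySlash, WordPress's canonical redirect, or, given the _icl_current_language cookie, the WPML plugin).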

  • Middleware with generic communication media layer

    - by Tom
    Greetings all, I'm trying to implement middleware (a driver) for an embedded device with a generic communication media layer. I'm not sure what the best way to do it is, so I'm seeking advice from more experienced Stack Overflow users :). Basically, we've got devices around the country communicating with our servers (or a PDA/laptop used in the field). The usual form of communication is over TCP/IP, but it could also be over USB, an RF dongle, IR, etc. The plan is to have an object corresponding to each of these devices, handling the proprietary protocol on one side and requests/responses from other internal systems on the other. The thing is how to create something generic between the media and the handling objects. I had a play around with a TCP dispatcher using boost.asio, but trying to create something generic seems like a nightmare :). Has anybody tried to do something like this? What is the best way to do it?

    Example: a device connects to our Linux server. A new middleware instance is created (on the server) which announces itself to one of the running services (the details are not important). The service is responsible for making sure that the device's time is synchronized. So it asks the middleware what the device's time is, the driver translates the request into the device language (protocol) and sends the message, the device responds, and the driver again translates the response for the service. This might seem like overkill for such a simple request, but imagine there are more complex requests which the driver must translate; there are also several versions of the device which use different protocols, etc., but they would all use the same time-sync service. The goal is to abstract the devices through the drivers, to be able to use the same service to communicate with them.

    Another example: we find out that remote communications with the device are down, so we send somebody out with a PDA who connects to the device using a USB cable. He starts up an application which has the same functionality as the time-sync service. Again, a middleware instance is created (on the PDA) to translate communication between the application and the device, this time using USB/serial media rather than TCP/IP as in the previous example. I hope it makes more sense now :) Cheers, Tom
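
    The usual shape for this is a small transport interface that the driver depends on, with one implementation per medium; the C++ version would be an abstract base class in exactly the same role. A hedged Python sketch with made-up names, just to show the separation:

        from abc import ABC, abstractmethod

        class Transport(ABC):
            """What the driver sees; TCP, serial, IR, ... each implement this."""
            @abstractmethod
            def send(self, payload: bytes) -> None: ...
            @abstractmethod
            def receive(self) -> bytes: ...

        class LoopbackTransport(Transport):
            # Stand-in for a TcpTransport / SerialTransport in a real system.
            def __init__(self):
                self._buf = b""
            def send(self, payload: bytes) -> None:
                self._buf = payload
            def receive(self) -> bytes:
                return self._buf

        class DeviceDriver:
            def __init__(self, transport: Transport):
                self.transport = transport   # media-agnostic by construction

            def get_time(self) -> bytes:
                self.transport.send(b"\x01GET_TIME")   # hypothetical protocol frame
                return self.transport.receive()

        driver = DeviceDriver(LoopbackTransport())
        print(driver.get_time())

    The point of the design is that the protocol translation lives entirely in DeviceDriver, so swapping TCP for USB/serial means swapping only the Transport instance.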

  • Recommendations for a free GIS library supporting raster images

    - by gspr
    Hi. I'm quite new to the whole field of GIS, and I'm about to make a small program that essentially overlays GPS tracks on a map, together with some other annotations. I primarily need to allow scanned (thus raster) maps (although it would be nice to support proper map formats and something like OpenStreetMap in the long run). My first exploratory program uses Qt's graphics view framework and overlays the GPS points by simply projecting them onto the tangent plane to the WGS84 ellipsoid at a calibration point. This gives half-decent accuracy, and actually looks good.

    But then I started wondering. To get the accuracy I need (i.e. remove the "half" in "half-decent"), I have to correct for the map projection. While the math is not a problem in itself, supporting many map projections feels like needless work. Even though a few projections would probably be enough, I started thinking about just using something like the PROJ.4 library to do my projections. But then, why not take it all the way? Perhaps I might as well use a full-blown map library such as Mapnik (edit: Quantum GIS also looks very nice), which will probably pay off when I start to want even fancier annotations or some other symptom of featuritis.

    So, finally, to the question: what would you do? Would you use a full-blown map library? If so, which one? Again, it's important that it supports using (and zooming in and out with) raster maps and has pretty overlay features. Or would you just keep it simple, and go with Qt's own graphics view framework together with something like PROJ.4 to handle the map projections? I appreciate any feedback!

    Some technicalities: I'm writing in C++ with a Qt-based GUI, so I'd prefer something that plays relatively nicely with those. Also, the library must be free software (as in FOSS) and at least decently cross-platform (GNU/Linux, Windows and Mac, at least).

    Edit: OK, it seems I didn't do quite enough research before asking this question. Both Quantum GIS and Mapnik seem very well suited for my purpose, the former especially so since it's based on Qt.

  • Using a function with reference as a function with pointers?

    - by epatel
    Today I stumbled over a piece of code that looked horrifying to me. The pieces were scattered across different files; I have tried to write the gist of it in a simple test case below. The code base is routinely scanned with FlexeLint on a daily basis, but this construct has been sitting in the code since 2004.

    The thing is that a function implemented with a parameter passed by reference is called as a function with a parameter passed by pointer, due to a function cast. The construct has worked since 2004 on IRIX, and now that we are porting it, it actually does work on Linux/gcc too.

    My question now: is this a construct one can trust? I can understand that compiler writers may implement reference passing as if it were a pointer, but is it reliable? Are there hidden risks? Should I change fref(..) to use pointers and risk breaking something in the process? What do you think?

        #include <iostream>

        using namespace std;

        // ----------------------------------------
        // This will be passed as a reference in fref(..)
        struct string_struct {
            char str[256];
        };

        // ----------------------------------------
        // Using a pointer here!
        void fptr(const char *str) {
            cout << "fptr: " << str << endl;
        }

        // ----------------------------------------
        // Using a reference here!
        void fref(string_struct &str) {
            cout << "fref: " << str.str << endl;
        }

        // ----------------------------------------
        // Cast to f(const char*) and call with pointer
        void ftest(void (*fin)()) {
            void (*fcall)(const char*) = (void(*)(const char*))fin;
            fcall("Hello!");
        }

        // ----------------------------------------
        // Let's go for a test
        int main() {
            ftest((void (*)())fptr);  // test with fptr, which uses a pointer
            ftest((void (*)())fref);  // test with fref, which uses a reference
            return 0;
        }

  • Failed to obtain JDBC Driver for MySQL under Tomcat environment

    - by Michael Mao
    Hi all: I've been trying to obtain the driver class for a JDBC connection to MySQL. The workstation is running Linux, Fedora 10. I have manually set up the classpath variable for Java on the CLI, like this:

        bash-3.2$ echo $CLASSPATH
        /home/cmao/public_html/jsp/mysql-connector-java-5.1.12-bin.jar

    This shows that I've added the latest mysql-connector jar archive to my CLASSPATH variable. I've created a test JSP page, which can be found here. The source code for this page is:

        <%@page language="java"%>
        <%@page import="java.sql.*"%>
        <%@page import="java.util.*"%>
        <html>
        <head>
        <title>UTS JDBC MySQL connection test page</title>
        </head>
        <body>
        <%
            Connection con = null;
            out.print("Java version is : " + System.getProperty("java.version") + "<br />");
            out.print("Tomcat version is : " + application.getServerInfo() + "<br />");
            out.print("Servlet version is: " + application.getMajorVersion() + "<br />");
            out.print("JSP version is : " + JspFactory.getDefaultFactory().getEngineInfo().getSpecificationVersion() +"<br />");
            //out.print("Java classpath is : " + System.getProperty("java.class.path")+ "<br />");
            //out.print("JSP classpath is : " + appliaction.getAttribute("org.apache.catalina.jsp_classpath") + "<br />");
            //out.print("Tomcat classpath is : " + System.getProperty("org.apache.tomcat.common.classpath") + "<br />");
            try {
                Class c = Class.forName("com.mysql.jdbc.Driver");
            } catch(Exception e) {
                out.println("Error! Failed to obtain JDBC driver for MySQL... Missing class \"com.mysql.jdbc.Driver\"<br />");
            }
        %>
        </body>
        </html>

    None of the commented-out lines work; various JasperExceptions are thrown. You can check those error pages from the following links: classpath error page, catalina error page, tomcat error page.

    It seems, from my limited knowledge of JSP and servlets, that the Tomcat environment "ignores" my Java CLASSPATH? In which case I cannot configure the MySQL JDBC package to make my servlets (a JSP is but a servlet anyway) work. I am not sure how to fix this issue. Would it be better to use an IDE like Eclipse or NetBeans and create a real Java "web app", so that everything can be "self-configured" through a web.xml configuration file? Would that let me bypass this Tomcat environment restriction? Many thanks for the suggestions in advance.

  • Naming selenium grid nodes. Spawning a specific node

    - by ???? ????
    I'm trying to implement a kind of default queue in the Selenium hub. There is a possibility to specify a node's name (actually its environment, something like "firefox on ubuntu" or "chrome on windows"). Selenium Grid itself has a default queue that works on a first-in, first-out principle, but I want to prioritize some of my tasks given to the Selenium server. I have no way to introduce a custom queue (it seems there is no API for that), so I decided to separate the queue logic from the Selenium server: I'll only call a specific node with a specific name (environment), for example "firefox important node" or something like that.

    So, I want to know how to directly tell Selenium which node to use for my task. And generally, am I thinking in the right way? Here are my configs.

    hubConfig.json.erb:

        {
          "host": null,
          "port": <%= node[:selenium][:server][:port] %>,
          "newSessionWaitTimeout": -1,
          "servlets" : [],
          "prioritizer": null,
          "capabilityMatcher": "org.openqa.grid.internal.utils.DefaultCapabilityMatcher",
          "throwOnCapabilityNotPresent": true,
          "nodePolling": <%= node[:selenium][:server][:node_polling] %>,
          "cleanUpCycle": <%= node[:selenium][:server][:cleanup_cycle] %>,
          "timeout": <%= node[:selenium][:server][:timeout] %>,
          "browserTimeout": 0,
          "maxSession": <%= node[:selenium][:server][:max_session] %>
        }

    nodeConfig.json.erb:

        {
          "capabilities": [
            { "browserName": "firefox",   "maxInstances": 5, "seleniumProtocol": "WebDriver" },
            { "browserName": "chrome",    "maxInstances": 5, "seleniumProtocol": "WebDriver" },
            { "browserName": "phantomjs", "maxInstances": 5, "seleniumProtocol": "WebDriver" }
          ],
          "configuration": {
            "proxy": "org.openqa.grid.selenium.proxy.DefaultRemoteProxy",
            "maxSession": <%= node[:selenium][:node][:max_session] %>,
            "port": <%= node[:selenium][:node][:port] %>,
            "host": "<%= node[:fqdn] %>",
            "register": true,
            "registerCycle": <%= node[:selenium][:node][:register_cycle] %>,
            "hubPort": <%= node[:selenium][:server][:port] %>
          }
        }

    And my Driver class:

        ...
        def remote_driver
          @browser = Watir::Browser.new(:remote,
            :url => "http://myhub.com:4444/wd/hub",
            :http_client => client,
            :desired_capabilities => capabilities
          )
        end

        def capabilities
          Selenium::WebDriver::Remote::Capabilities.send(
            "firefox",
            :javascript_enabled => true,
            :css_selectors_enabled => true,
            :takes_screenshot => true
          )
        end

        def client
          client = Selenium::WebDriver::Remote::Http::Default.new
          client.timeout = 360
          client
        end
        ...

    I still don't know how to use a specified node for my task. If I try to start a driver adding :name => "firefox important node", and extend nodeConfig.json.erb's configuration with environments:

        environments:
          - name: "firefox important node"
            browser: "*firefox"
          - name: "Firefox36 on Linux"
            browser: "*firefox"

    Selenium just starts a random Firefox browser on a random node. How can I control it?
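
    One avenue worth testing, stated as an assumption about Grid 2 behaviour rather than a verified recipe: add a custom capability (say, "nodeName") to the capabilities block of the privileged node and request it when creating the session. Note that the stock DefaultCapabilityMatcher configured above only compares browser name, version and platform, so routing on a custom key generally requires registering a custom capabilityMatcher on the hub. A Python sketch of the client side, using the older (pre-4) Selenium bindings:

        from selenium import webdriver   # third-party package

        caps = {
            "browserName": "firefox",
            "nodeName": "firefox important node",   # hypothetical custom capability
        }
        driver = webdriver.Remote(
            command_executor="http://myhub.com:4444/wd/hub",
            desired_capabilities=caps,
        )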

  • Windows 7 - pydoc from cmd

    - by Random_Person
    Okay, I'm having one of those moments that makes me question my ability to use a computer. This is not the sort of question I imagined asking as my first SO post, but here goes. I started on Zed's new "Learn Python the Hard Way", since I've been looking to get back into programming after a 10-year hiatus, and Python was always what I wanted. This book has really spoken to me. That being said, I'm having a serious issue with pydoc from the command line. I've got all the directories in c:/python26 in my system path, and I can execute pydoc from the command line just fine regardless of the working directory, but it accepts no arguments. No matter what I type, I just get the standard pydoc output telling me the acceptable arguments. Any ideas? For what it's worth, I installed ActivePython as per Zed's suggestion.

        C:\Users\Chevee>pydoc file
        pydoc - the Python documentation tool

        pydoc.py <name> ...
            Show text documentation on something. <name> may be the name of a
            Python keyword, topic, function, module, or package, or a dotted
            reference to a class or function within a module or module in a
            package. If <name> contains a '\', it is used as the path to a
            Python source file to document. If name is 'keywords', 'topics',
            or 'modules', a listing of these things is displayed.

        pydoc.py -k <keyword>
            Search for a keyword in the synopsis lines of all available modules.

        pydoc.py -p <port>
            Start an HTTP server on the given port on the local machine.

        pydoc.py -g
            Pop up a graphical interface for finding and serving documentation.

        pydoc.py -w <name> ...
            Write out the HTML documentation for a module to a file in the
            current directory. If <name> contains a '\', it is treated as a
            filename; if it names a directory, documentation is written for
            all the contents.

        C:\Users\Chevee>

    EDIT: New information: pydoc works just fine in PowerShell. As a Linux user, I have no idea why I'm trying to use cmd anyway, but I'd still love to figure out what's up with pydoc and cmd.

    EDIT 2: More new information. In cmd:

        c:\>python c:/python26/lib/pydoc.py file

    works just fine. Everything works just fine with a bare pydoc in PowerShell, without me worrying about the working directory, extensions, or paths.

  • unable to install anything on ubuntu 9.10 with aptitude

    - by Srisa
    Hello. Earlier I could install software by using the 'sudo aptitude install <package>' command. Today, when I tried to install rkhunter, I got errors. It is not just rkhunter; I am not able to install anything. Here is the text output:

        user@server:~$ sudo aptitude install rkhunter
        ................
        ................
        20% [3 rkhunter 947/271kB 0%]
        Get:4 http://archive.ubuntu.com karmic/universe unhide 20080519-4 [832kB]
        40% [4 unhide 2955/832kB 0%]
        100% [Working]
        Fetched 1394kB in 1s (825kB/s)
        Preconfiguring packages ...
        Selecting previously deselected package lsof.
        (Reading database ...
        ................
        (Reading database ... 95%
        (Reading database ... 100%
        (Reading database ... 20076 files and directories currently installed.)
        Unpacking lsof (from .../lsof_4.81.dfsg.1-1_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/lsof_4.81.dfsg.1-1_amd64.deb (--unpack):
         unable to create `/usr/bin/lsof.dpkg-new' (while processing `./usr/bin/lsof'): Permission denied
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Selecting previously deselected package libmd5-perl.
        Unpacking libmd5-perl (from .../libmd5-perl_2.03-1_all.deb) ...
        Selecting previously deselected package rkhunter.
        Unpacking rkhunter (from .../rkhunter_1.3.4-5_all.deb) ...
        dpkg: error processing /var/cache/apt/archives/rkhunter_1.3.4-5_all.deb (--unpack):
         unable to create `/usr/bin/rkhunter.dpkg-new' (while processing `./usr/bin/rkhunter'): Permission denied
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Selecting previously deselected package unhide.
        Unpacking unhide (from .../unhide_20080519-4_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/unhide_20080519-4_amd64.deb (--unpack):
         unable to create `/usr/sbin/unhide-posix.dpkg-new' (while processing `./usr/sbin/unhide-posix'): Permission denied
        dpkg-deb: subprocess paste killed by signal (Broken pipe)
        Processing triggers for man-db ...
        Errors were encountered while processing:
         /var/cache/apt/archives/lsof_4.81.dfsg.1-1_amd64.deb
         /var/cache/apt/archives/rkhunter_1.3.4-5_all.deb
         /var/cache/apt/archives/unhide_20080519-4_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        A package failed to install. Trying to recover:
        Setting up libmd5-perl (2.03-1) ...
        Building dependency tree... 0%
        Building dependency tree... 50%
        Building dependency tree
        Reading state information... 0%
        ...........
        ....................

    I have removed some lines to reduce the text; all the error messages are there, though. My experience with Linux is limited, and I am not sure what the problem is or how it is to be resolved. Thanks.

  • Instagram API and Zend/Loader.php

    - by Jaemin Kim
    I want to use the Instagram API, and I found this code:

        <html>
        <head></head>
        <body>
        <h1>Popular on Instagram</h1>
        <?php
        // load Zend classes
        require_once 'Zend/Loader.php';
        Zend_Loader::loadClass('Zend_Http_Client');

        // define consumer key and secret
        // available from Instagram API console
        $CLIENT_ID = 'YOUR-CLIENT-ID';
        $CLIENT_SECRET = 'YOUR-CLIENT-SECRET';

        try {
            // initialize client
            $client = new Zend_Http_Client('https://api.instagram.com/v1/media/popular');
            $client->setParameterGet('client_id', $CLIENT_ID);

            // get popular images
            // transmit request and decode response
            $response = $client->request();
            $result = json_decode($response->getBody());

            // display images
            $data = $result->data;
            if (count($data) > 0) {
                echo '<ul>';
                foreach ($data as $item) {
                    echo '<li style="display: inline-block; padding: 25px"><a href="' . $item->link . '"><img src="' . $item->images->thumbnail->url . '" /></a> <br/>';
                    echo 'By: <em>' . $item->user->username . '</em> <br/>';
                    echo 'Date: ' . date ('d M Y h:i:s', $item->created_time) . '<br/>';
                    echo $item->comments->count . ' comment(s). ' . $item->likes->count . ' likes. </li>';
                }
                echo '</ul>';
            }
        } catch (Exception $e) {
            echo 'ERROR: ' . $e->getMessage() . print_r($client);
            exit;
        }
        ?>
        </body>
        </html>

    In here I found Zend/Loader.php, and I've never heard of that. Is it OK to go to http://www.zend.com/en/products/guard/downloads and download Zend for Linux? For another question: is this code suitable for using the Instagram API? And lastly, could you let me know whether there is any simpler code for using the Instagram API? Thank you.
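
    For comparison, the same call without Zend Framework is a couple of lines in any HTTP client. A hedged Python sketch of the identical v1 request; note that the v1 endpoints required a registered client ID at the time and have since been retired by Instagram, so this is illustrative only:

        import requests   # third-party package

        CLIENT_ID = "YOUR-CLIENT-ID"
        resp = requests.get(
            "https://api.instagram.com/v1/media/popular",
            params={"client_id": CLIENT_ID},
        )
        for item in resp.json().get("data", []):
            print(item["user"]["username"], item["link"])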

  • Eclipse RCP and JFace: Problems with Images in Context menu and TreeViewer

    - by Juri
    I'm working on an Eclipse RCP application. Today I experienced some trouble displaying images in the context menu. What I wanted to do is add a column to my table containing images of stars representing a user rating. On Windows this causes some problems, since the star images are squeezed into the left corner of the table cell instead of expanding across the whole cell, but I'll solve that somehow. In addition, I have a context menu on the table with an entry called "rate", where again the stars from 1 to 5 (representing the rating level) are shown, so that the user can click on them to choose different ratings. That works fine on Windows.

    Now I switched to Linux (Ubuntu) to see how it works there, and strangely, the stars in the table cell are laid out perfectly, while the stars in the context menu don't even show up. In the context menu I'm using an action class where I set the image descriptor for the star images:

        public class RateAction extends Action {
            private final int fRating;
            private IStructuredSelection fSelection;

            public RateAction(int rating, IStructuredSelection selection) {
                super("", AS_CHECK_BOX);
                fRating = rating;
                fSelection = selection;
                setImageDescriptor(createImageDescriptor());
            }

            /**
             * Creates the correct ImageDescriptor depending on the given rating.
             */
            private ImageDescriptor createImageDescriptor() {
                ImageDescriptor imgDescriptor = null;
                switch (fRating) {
                    case 0: return OwlUI.NEWS_STARON_0;
                    case 1: return OwlUI.NEWS_STARON_1;
                    case 2: return OwlUI.NEWS_STARON_2;
                    case 3: return OwlUI.NEWS_STARON_3;
                    case 4: return OwlUI.NEWS_STARON_4;
                    case 5: return OwlUI.NEWS_STARON_5;
                    default: break;
                }
                return imgDescriptor;
            }

            /*
             * @see org.eclipse.jface.action.Action#getText()
             */
            @Override
            public String getText() {
                // return no text, since the images of the stars will be displayed
                return "";
            }
            ...
        }

    Does anybody know why this strange behaviour appears? Thanks a lot. (For some strange reason, the images don't appear inline. Here are the direct URLs: http://img187.imageshack.us/img187/4427/starsratingho4.png and http://img514.imageshack.us/img514/8673/contextmenuproblemgt1.png)

    Edit: I did some experimenting, and it seems the images just don't appear when using the checkbox style for the menu item (see the constructor of RateAction). When I switched to the push-button style, the images appeared, although not correctly scaled, but at least they were shown.

  • Audio/video streaming on Windows platform

    - by bushtucker
    I'm building an interactive language-learning application to be used in a classroom environment. The idea is that a teacher should be able to talk to the students (= audio streamed to all students), let students talk to each other (= P2P audio) in groups of two or more, and let students watch a video coming from a DVD player or from a media server. It should be possible to save the audio/video streams. The teacher should also be able to monitor, take over or block the desktops of the students. The platform is Windows and it's a desktop application, not a web application. The audio delay should be as minimal as possible. Optionally, a student sitting at home should be supported, but that's not a high priority.

    I am now finished with the classroom-control part of the application (login, monitor, block, ...) and want to start on the audio and video part. I've been evaluating several options, such as DirectX, GStreamer and SIP, but now I have to make a decision.

    DirectX seems an obvious choice for the Windows platform, but it only lets me capture and play back audio and video; the encoding/decoding/network part I would have to do myself.

    GStreamer contains all kinds of options to capture/encode/stream/save audio and video streams. I've experimented a bit with it (OSSBuild) and it does seem to involve a lot of trial and error to make something work: microphone capture (via directsoundsrc) produces crackling noises on some computers; the rtpL16 payloader didn't work well; streaming raw audio over the network only worked at a sampling rate of 8000, no higher; and there are a lot of errors when receiving MPEG-4 video (bad I-frame), on some computers worse than others. My impression is that GStreamer is primarily targeted at Linux platforms; development and support for the Windows platform seem to be a little behind. Nevertheless, it's a powerful framework that could save me months and years of work.

    SIP seems to be able to do everything I want, but it is targeted towards telephony and IM. I don't know how flexible SIP is. It seems to me that the SIP layer would just be overhead, as I already have a central (teacher) application that can control and set up all the streams. The interesting parts of frameworks like opalvoip and FreeSWITCH are the actual audio/video capture, the encoding and the transmission.

    Does anyone know how these interesting parts relate to a framework like GStreamer? Are they easy to integrate into a custom application? Are they flexible enough? Does anyone have experience with all or one of these technologies? Maybe there are even other options I can look at? Many thanks for your advice.

  • Difference in DocumentBuilder.parse when using JRE 1.5 and JDK 1.6

    - by dhiller
    Recently, at last, we switched our projects to Java 1.6. When executing the tests, I found that using 1.6 a SAXParseException is not thrown which had been thrown using 1.5. Below is my test code demonstrating the problem:

        import java.io.StringReader;

        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.stream.StreamSource;
        import javax.xml.validation.SchemaFactory;

        import org.junit.Test;
        import org.xml.sax.InputSource;
        import org.xml.sax.SAXParseException;

        /**
         * Test class to demonstrate the difference between JDK 1.5 and JDK 1.6.
         *
         * Seen on Linux:
         *   java version "1.6.0_18"
         *   Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
         *   Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode)
         *
         * Seen on OSX:
         *   java version "1.6.0_17"
         *   Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-10M3025)
         *   Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode)
         *
         * @author dhiller (creator)
         * @author $Author$ (last editor)
         * @version $Revision$
         * @since 12.03.2010 11:32:31
         */
        public class TestXMLValidation {

          /**
           * Tests the schema validation of an XML document against a simple schema.
           *
           * @throws Exception
           *           if an error occurs
           * @throws junit.framework.AssertionFailedError
           *           if a unit-test check fails
           */
          @Test(expected = SAXParseException.class)
          public void testValidate() throws Exception {
            final StreamSource schema = new StreamSource(
                new StringReader(
                    "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                        + "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" "
                        + "elementFormDefault=\"qualified\" xmlns:xsd=\"undefined\">"
                        + "<xs:element name=\"Test\"/>"
                        + "</xs:schema>" ) );
            final String xml = "<Test42/>";
            final DocumentBuilderFactory newFactory = DocumentBuilderFactory.newInstance();
            newFactory.setSchema( SchemaFactory.newInstance(
                "http://www.w3.org/2001/XMLSchema" ).newSchema( schema ) );
            final DocumentBuilder documentBuilder = newFactory.newDocumentBuilder();
            documentBuilder.parse( new InputSource( new StringReader( xml ) ) );
          }
        }

    When using a 1.5 JVM the test passes; on 1.6 it fails with "Expected exception SAXParseException". The Javadoc of the DocumentBuilderFactory.setSchema(Schema) method says:

        When errors are found by the validator, the parser is responsible to report them to the user-specified ErrorHandler (or if the error handler is not set, ignore them or throw them), just like any other errors found by the parser itself. In other words, if the user-specified ErrorHandler is set, it must receive those errors, and if not, they must be treated according to the implementation specific default error handling rules.

    The Javadoc of the DocumentBuilder.parse(InputSource) method says:

    By the way: I tried setting an error handler via setErrorHandler, but there is still no exception. Now my question: what has changed in 1.6 that prevents the schema validation from throwing a SAXParseException? Is it related to the schema, or to the XML that I tried to parse?

  • The best way to separate admin functionality from a public site?

    - by AndrewO
    I'm working on a site that has grown, both in terms of user base and functionality, to the point where it's becoming evident that some of the admin tasks should be separate from the public website. I was wondering what the best way to do this would be.

    For example, the site has a large social component and a public sales interface. But at the same time, there are back-office tasks, bulk upload processing, dashboards (with long-running queries), and customer relations tools in the admin section that I would like not to be affected by spikes in public traffic (or to affect the public-facing response time).

    The site is running on a fairly standard Rails/MySQL/Linux stack, but I think this is more of an architecture problem than an implementation one: mainly, how does one keep the data and business logic in sync between these different applications? Some strategies I'm evaluating:

    1) Create a slave database of the public-facing database on another machine. Extract all of the model and library code so that it can be shared between the applications. Create new controllers and views for the admin interfaces. I have limited experience with replication and am not even sure it's supposed to be used this way (most of the time I've seen it used for scaling out the read capabilities of the same application, rather than for multiple different ones). I'm also worried about potential latency issues if the slave is not on the same network.

    2) Create new, more task/department-specific applications and use message-oriented middleware to integrate them. I read Enterprise Integration Patterns a while back, and it seemed to advocate this for distributed systems. (Alternatively, in some cases the basic Rails-style RESTful API functionality might suffice.) But I have nightmares about data synchronization issues and the massive re-architecting this would entail.

    3) Some mixture of the two. For example, the only public information necessary for some of the back-office tasks is a read-only completion time or status. Would it make sense to have that on a completely separate system and send the data to the public one? Meanwhile, the user/group admin functionality would run on a separate system sharing the database? The downside is that this seems to keep many of the concerns I have with the first two approaches, especially the re-architecting.

    I'm sure the answers are going to be highly dependent on a site's specific needs, but I'd love to hear success (or failure) stories.
