Search Results

Search found 24324 results on 973 pages for 'google chrome devtools'.

Page 948/973

  • Uploading file to DropBox causing UnlinkedException

    - by Boardy
    I am currently working on an Android project and trying to enable DropBox functionality. I've selected the access type App Only, and I can successfully authenticate; it creates an Apps directory and inside that creates a directory with the name of my app. When I then try to put a file in the DropBox directory it goes into the DropboxException catch, and logcat prints com.dropbox.client2.exception.DropboxUnlinkedException. I've done a Google search, and from what I've seen this happens if the apps directory has been deleted, so that authentication needs to be redone; but this isn't the case, I have deleted it and I am putting the file straight after doing the authentication. Below is the code that retrieves the keys and stores the file on DropBox: AccessTokenPair tokens = getTokens(); UploadFile uploadFile = new UploadFile(context, common, this, mDBApi); uploadFile.execute(mDBApi); Below is the code for the getTokens method (I don't think this will help, but you never know): private AccessTokenPair getTokens() { AccessTokenPair tokens; SharedPreferences prefs = context.getSharedPreferences("prefs", 0); String key = prefs.getString("dropbox_key", ""); String secret = prefs.getString("dropbox_secret", ""); tokens = new AccessTokenPair(key, secret); return tokens; } Below is the class that extends AsyncTask to perform the upload: class UploadFile extends AsyncTask<DropboxAPI<AndroidAuthSession>, Void, Boolean> { Context context; Common common; Synchronisation sync; DropboxAPI<AndroidAuthSession> mDBApi; public UploadFile(Context context, Common common, Synchronisation sync, DropboxAPI<AndroidAuthSession> mDBApi) { this.context = context; this.common = common; this.sync = sync; this.mDBApi = mDBApi; } @Override protected Boolean doInBackground(DropboxAPI<AndroidAuthSession>... params) { try { File file = new File(Environment.getExternalStorageDirectory() + "/BoardiesPasswordManager/dropbox_sync.xml"); FileInputStream inputStream = new FileInputStream(file); Entry newEntry = mDBApi.putFile("android_sync.xml", inputStream, file.length(), null, null); common.showToastMessage("Successfully uploaded Rev: " + newEntry.rev, Toast.LENGTH_LONG); } catch (IOException ex) { Log.e("DropBoxError", ex.toString()); } catch (DropboxException e) { Log.e("DropBoxError", e.toString()); e.printStackTrace(); } return null; } } I have no idea why it would throw the UnlinkedException, so any help would be greatly appreciated.
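
    For reference, a frequent cause of DropboxUnlinkedException in the v1 SDK (com.dropbox.client2) is that the stored access token pair is never re-attached to the session after the app restarts. The sketch below is a minimal illustration of that guard, assuming the v1 Android SDK API; it is not the poster's actual code:

        // Re-attach the stored tokens before uploading: an AndroidAuthSession
        // without a token pair makes putFile() throw DropboxUnlinkedException.
        AccessTokenPair tokens = getTokens();
        if (tokens.key == null || tokens.key.length() == 0) {
            // Nothing stored yet, so the user has to go through startAuthentication() again.
            return;
        }
        AndroidAuthSession session = mDBApi.getSession();
        session.setAccessTokenPair(tokens);
        if (session.isLinked()) {
            new UploadFile(context, common, this, mDBApi).execute(mDBApi);
        }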

    Read the article

  • iPhone GPS logging inaccurate

    - by Martijn
    I'm logging GPS points during a walk. Below is the function showing how the coordinates are saved every 5 seconds. I did several tests, but I cannot get the accuracy I want. (When testing, the sky is clear, and tests in Google Maps show me that the GPS signal is good.) Here is the code: -(void)viewDidAppear:(BOOL)animated{ if (self.locationManager == nil){ self.locationManager = [[[CLLocationManager alloc] init] autorelease]; locationManager.delegate = self; // only notify under 100 m accuracy locationManager.distanceFilter = 100.0f; locationManager.desiredAccuracy = kCLLocationAccuracyBest; [locationManager startUpdatingLocation]; } } Start logging: [NSTimer scheduledTimerWithTimeInterval:5 target:self selector:@selector(getData) userInfo:nil repeats:YES]; -(void)getData{ int distance; // re-use location. if ([ [NSString stringWithFormat:@"%1.2f",previousLat] isEqualToString:@"0.00"]){ // if previous location is not available, do nothing distance = 0; }else{ CLLocation *loc1 = [[CLLocation alloc] initWithLatitude:previousLat longitude:previousLong]; CLLocation *loc2 = [[CLLocation alloc] initWithLatitude:latGlobal longitude:longGlobal]; distance = [loc1 getDistanceFrom: loc2]; } // overwrite latGlobal with new variable previousLat = latGlobal; previousLong = longGlobal; // store location and save data to database // this part goes ok } - (void)locationManager:(CLLocationManager *)manager didUpdateToLocation:(CLLocation *)newLocation fromLocation:(CLLocation *)oldLocation { // track the time to get a new gps result (for gps indicator orb) lastPointTimestamp = [newLocation.timestamp copy]; // test that the horizontal accuracy does not indicate an invalid measurement if (newLocation.horizontalAccuracy < 0) return; // test the age of the location measurement to determine if the measurement is cached // don't rely on cached measurements NSTimeInterval locationAge = -[newLocation.timestamp timeIntervalSinceNow]; if (locationAge > 5.0) return; latGlobal = fabs(newLocation.coordinate.latitude); longGlobal = fabs(newLocation.coordinate.longitude); } I have taken a screenshot of the plot results (the walk takes 30 minutes) and an example of what I am trying to accomplish: http://www.flickr.com/photos/21258341@N07/4623969014/ I really hope someone can put me in the right direction.

    Read the article

  • PHP - Cannot modify header information...

    - by Scott W.
    Hi, I am going crazy with this error: Cannot modify header information - headers already sent by... Please note that I know about the gazillion results on google and on stack overflow. My problem is the way I've constructed my pages. To keep html separate from php, I use include files. So, for example, my pages look something like this: <?php require_once('web.config.php'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Login</title> <link rel="shortcut icon" href="images/favicon.gif"/> <link rel="shortcut icon" href="images/favicon.ico"/> <link rel="stylesheet" type="text/css" href="<?php echo SITE_STYLE; ?>"/> </head> <body> <div id="page_effect" style="display:none;"> <?php require_once('./controls/login/login.control.php'); ?> </div> </body> </html> So, by the time my php file is included, the header is already sent. Part of the include file looks like this: // redirect to destination if($user_redirect != 'default') { $destination_url = $row['DestinationUrl']; header('Location:'.$user_redirect); } elseif($user_redirect == 'default' && isset($_GET['ReturnURL'])) { $destination_url = $_GET['ReturnURL']; header('Location:'.$destination_url); } else { header('Location:'.SITE_URL.'login.php'); } But I can't figure out how to work around this. I can't have the header redirect before the output so having output buffering on is the only thing I can do. Naturally it works fine that way - but having to rely on that just stinks. It would be nice if PHP had an alternative way to redirect or had additional parameters to tell it to clear the buffer.

    Read the article

  • Makefile issue with compiling a C++ program

    - by Steve
    I recently got MySQL compiled and working on Cygwin, and got a simple test example from online to verify that it worked. The test example compiled and ran successfully. However, when incorporating MySQL into a hobby project of mine it isn't compiling, which I believe is due to how the Makefile is set up. I have no experience with Makefiles, and after reading tutorials about them I have a better grasp but still can't get it working correctly. When I try to compile my hobby project I receive errors such as: Obj/Database.o:Database.cpp:(.text+0x492): undefined reference to `_mysql_insert_id' Obj/Database.o:Database.cpp:(.text+0x4c1): undefined reference to `_mysql_affected_rows' collect2: ld returned 1 exit status make[1]: *** [build] Error 1 make: *** [all] Error 2 Here is my Makefile; it worked for compiling and building the source before I attempted to put MySQL support into the project. The LIBMYSQL paths are correct, verified by 'mysql_config'. COMPILER = g++ WARNING1 = -Wall -Werror -Wformat-security -Winline -Wshadow -Wpointer-arith WARNING2 = -Wcast-align -Wcast-qual -Wredundant-decls LIBMYSQL = -I/usr/local/include/mysql -L/usr/local/lib/mysql -lmysqlclient DEBUGGER = -g3 OPTIMISE = -O C_FLAGS = $(OPTIMISE) $(DEBUGGER) $(WARNING1) $(WARNING2) -export-dynamic $(LIBMYSQL) L_FLAGS = -lz -lm -lpthread -lcrypt $(LIBMYSQL) OBJ_DIR = Obj/ SRC_DIR = Source/ MUD_EXE = project MUD_DIR = TestP/ LOG_DIR = $(MUD_DIR)Files/Logs/ ECHOCMD = echo -e L_GREEN = \e[1;32m L_WHITE = \e[1;37m L_BLUE = \e[1;34m L_RED = \e[1;31m L_NRM = \e[0;00m DATE = `date +%d-%m-%Y` FILES = $(wildcard $(SRC_DIR)*.cpp) C_FILES = $(sort $(FILES)) O_FILES = $(patsubst $(SRC_DIR)%.cpp, $(OBJ_DIR)%.o, $(C_FILES)) all: @$(ECHOCMD) " Compiling $(L_RED)$(MUD_EXE)$(L_NRM)."; @$(MAKE) -s build build: $(O_FILES) @rm -f $(MUD_EXE) $(COMPILER) -o $(MUD_EXE) $(L_FLAGS) $(O_FILES) @echo " Finished Compiling $(MUD_EXE)."; @chmod g+w $(MUD_EXE) @chmod a+x $(MUD_EXE) @chmod g+w $(O_FILES) $(OBJ_DIR)%.o: $(SRC_DIR)%.cpp @echo " Compiling $@"; $(COMPILER) -c $(C_FLAGS) $< -o $@ .cpp.o: $(COMPILER) -c $(C_FLAGS) $< clean: @echo " Complete compile on $(MUD_EXE)."; @rm -f $(OBJ_DIR)*.o $(MUD_EXE) @$(MAKE) -s build I like the functionality of the Makefile: instead of spitting out all the arguments etc., it just prints "Compiling [Filename]" etc. If I add -c to the L_FLAGS then it compiles (I think) but instead spits out stuff like: g++: Obj/Database.o: linker input file unused because linking not done After a full day of trying and research on Google, I'm no closer to solving my problem, so I come to you guys to see if you can explain to me why all this is happening and, if possible, steps to solve it. Regards, Steve

    Read the article

  • Strange problem publishing a fresh Ruby on Rails 3 application on localhost (Apache, Passenger and VirtualHosts)

    - by user502052
    I recently created a new Ruby on Rails 3 application locally on Mac OS, named "test". Since I use apache2, in /private/etc/apache2/httpd.conf I set the VirtualHost for the "test" application: <VirtualHost *:443> ServerName test.pjtmain.localhost:443 DocumentRoot "/Users/<my_user_name>/Sites/test/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/test/public"> Order allow,deny Allow from all </Directory> # SSL Configuration SSLEngine on ... </VirtualHost> <VirtualHost *:80> ServerName test.pjtmain.localhost DocumentRoot "/Users/<my_user_name>/Sites/test/public" RackEnv development <Directory "/Users/<my_user_name>/Sites/test/public"> Order allow,deny Allow from all </Directory> </VirtualHost> Of course I restarted apache2, but when trying to access http://test.pjtmain.localhost/ I get these error messages: FIREFOX: Oops! Firefox could not find test.pjtmain.localhost Suggestions: * Search on Google: ... SAFARI: Safari can’t find the server. Safari can’t open the page “http://test.pjtmain.localhost/” because Safari can’t find the server “test.pjtmain.localhost”. I have other RoR3 applications set up like the one above in the httpd.conf file and they all work. What is the problem (maybe it is not related to apache...)? Notes: 1. Using the 'Network Utility' I did a Ping with the following result: ping: cannot resolve test.pjtmain.localhost: Unknown host and I did a Lookup with the following result: ; <<>> DiG 9.6.0-APPLE-P2 <<>> test.pjtmain.localhost +multiline +nocomments +nocmd +noquestion +nostats +search ;; global options: +cmd <MY_BROADBAND_TELECOMUNICATIONS_COMPANY_NAME>.com. 115 IN SOA dns1.<MY_BROADBAND_TELECOMUNICATIONS_COMPANY_NAME>.com. dnsmaster.<MY_BROADBAND_TELECOMUNICATIONS_COMPANY_NAME>.com. ( 2010110500 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) 2. I am using Phusion Passenger. 3. Since I have not changed anything in the new "test" application, I expect to see the default RoR index.html page. 4. It seems that in the 'Console Messages' there is no warning or error.

    Read the article

  • Is this a legitimate implementation of a 'remember me' function for my web app?

    - by user246114
    Hi, I'm trying to add a "remember me" feature to my web app to let a user stay logged in between browser restarts. I think I got the bulk of it. I'm using google app engine for the backend which lets me use java servlets. Here is some pseudo-code to demo: public class MyServlet { public void handleRequest() { if (getThreadLocalRequest().getSession().getAttribute("user") != null) { // User already has session running for them. } else { // No session, but check if they chose 'remember me' during // their initial login, if so we can have them 'auto log in' // now. Cookie[] cookies = getThreadLocalRequest().getCookies(); if (cookies.find("rememberMePlz").exists()) { // The value of this cookie is the cookie id, which is a // unique string that is in no way based upon the user's // name/email/id, and is hard to randomly generate. String cookieid = cookies.find("rememberMePlz").value(); // Get the user object associated with this cookie id from // the data store, would probably be a two-step process like: // // select * from cookies where cookieid = 'cookieid'; // select * from users where userid = 'userid fetched from above select'; User user = DataStore.getUserByCookieId(cookieid); if (user != null) { // Start session for them. getThreadLocalRequest().getSession() .setAttribute("user", user); } else { // Either couldn't find a matching cookie with the // supplied id, or maybe we expired the cookie on // our side or blocked it. } } } } } // On first login, if user wanted us to remember them, we'd generate // an instance of this object for them in the data store. We send the // cookieid value down to the client and they persist it on their side // in the "rememberMePlz" cookie. public class CookieLong { private String mCookieId; private String mUserId; private long mExpirationDate; } Alright, this all makes sense. The only frightening thing is what happens if someone finds out the value of the cookie? A malicious individual could set that cookie in their browser and access my site, and essentially be logged in as the user associated with it! On the same note, I guess this is why the cookie ids must be difficult to randomly generate, because a malicious user doesn't have to steal someone's cookie - they could just randomly assign cookie values and start logging in as whichever user happens to be associated with that cookie, if any, right? Scary stuff, I feel like I should at least include the username in the client cookie such that when it presents itself to the server, I won't auto-login unless the username+cookieid match in the DataStore. Any comments would be great, I'm new to this and trying to figure out a best practice. I'm not writing a site which contains any sensitive personal information, but I'd like to minimize any potential for abuse all the same, Thanks
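
    As one point of comparison, a hedged sketch of the username-plus-token check suggested in the last paragraph, written against the servlet API. The DataStore methods, the getters on CookieLong, and the "userId:cookieId" cookie layout are all hypothetical, not part of the original post:

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServletRequest;

        public User tryAutoLogin(HttpServletRequest request) {
            Cookie remember = null;
            Cookie[] cookies = request.getCookies();
            if (cookies == null) return null;
            for (Cookie c : cookies) {
                if ("rememberMePlz".equals(c.getName())) { remember = c; break; }
            }
            if (remember == null) return null;

            String[] parts = remember.getValue().split(":", 2); // "userId:cookieId"
            if (parts.length != 2) return null;

            CookieLong stored = DataStore.getCookieByUserId(parts[0]); // hypothetical lookup
            if (stored == null || stored.getExpirationDate() < System.currentTimeMillis()) {
                return null; // no token on file, or it expired server-side
            }
            // Constant-time comparison, so token prefixes cannot be guessed via timing.
            if (!MessageDigest.isEqual(
                    stored.getCookieId().getBytes(StandardCharsets.UTF_8),
                    parts[1].getBytes(StandardCharsets.UTF_8))) {
                return null;
            }
            return DataStore.getUserById(parts[0]); // hypothetical: start the session for this user
        }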

    Read the article

  • Scalable Database Tagging Schema

    - by Longpoke
    EDIT: To people building tagging systems. Don't read this. It is not what you are looking for. I asked this when I wasn't aware that RDBMS all have their own optimization methods, just use a simple many to many scheme. I have a posting system that has millions of posts. Each post can have an infinite number of tags associated with it. Users can create tags which have notes, date created, owner, etc. A tag is almost like a post itself, because people can post notes about the tag. Each tag association has an owner and date, so we can see who added the tag and when. My question is how can I implement this? It has to be fast searching posts by tag, or tags by post. Also, users can add tags to posts by typing the name into a field, kind of like the google search bar, it has to fill in the rest of the tag name for you. I have 3 solutions at the moment, but not sure which is the best, or if there is a better way. Note that I'm not showing the layout of notes since it will be trivial once I get a proper solution for tags. Method 1. Linked list tagId in post points to a linked list in tag_assoc, the application must traverse the list until flink=0 post: id, content, ownerId, date, tagId, notesId tag_assoc: id, tagId, ownerId, flink tag: id, name, notesId Method 2. Denormalization tags is simply a VARCHAR or TEXT field containing a tab delimited array of tagId:ownerId. It cannot be a fixed size. post: id, content, ownerId, date, tags, notesId tag: id, name, notesId Method 3. Toxi (from: http://www.pui.ch/phred/archives/2005/04/tags-database-schemas.html, also same thing here: http://stackoverflow.com/questions/20856/how-do-you-recommend-implementing-tags-or-tagging) post: id, content, ownerId, date, notesId tag_assoc: ownerId, tagId, postId tag: id, name, notesId Method 3 raises the question, how fast will it be to iterate through every single row in tag_assoc? Methods 1 and 2 should be fast for returning tags by post, but for posts by tag, another lookup table must be made. The last thing I have to worry about is optimizing searching tags by name, I have not worked that out yet. I made an ASCII diagram here: http://pastebin.com/f1c4e0e53
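
    For illustration only (the EDIT above already points at a plain many-to-many scheme): with method 3, per-lookup cost is governed by indexes rather than a scan of tag_assoc. A hedged JDBC sketch of the two hot lookups plus the autocomplete query, given a java.sql.Connection conn, with table and column names taken from the post and indexes assumed on tag_assoc(tagId, postId), tag_assoc(postId, tagId) and tag(name):

        // Posts carrying a given tag: resolved via the (tagId, postId) index.
        PreparedStatement postsByTag = conn.prepareStatement(
                "SELECT p.* FROM post p "
                + "JOIN tag_assoc ta ON ta.postId = p.id "
                + "JOIN tag t ON t.id = ta.tagId "
                + "WHERE t.name = ?");

        // Tags attached to a given post: resolved via the (postId, tagId) index.
        PreparedStatement tagsByPost = conn.prepareStatement(
                "SELECT t.* FROM tag t "
                + "JOIN tag_assoc ta ON ta.tagId = t.id "
                + "WHERE ta.postId = ?");

        // Autocomplete for the tag box: a prefix LIKE is served by the index on tag.name.
        PreparedStatement tagPrefix = conn.prepareStatement(
                "SELECT name FROM tag WHERE name LIKE ? ORDER BY name LIMIT 10"); // bind "pre%"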

    Read the article

  • RAD Visual Web Application Creator/ Builder/ Designer for PHP

    - by inhoue
    Hi all, I want to see if any of you know of a tool/app (free and open source would be ideal) that can help build a PHP web application very quickly without investing too much time in writing code, preferably with a drag and drop/point and click workflow designer for logic design (see Agile from Outsystems below). Plus, a visual designer for the business logic is great, since it can help a developer visualize the logic better. There are a lot of GUI builders and form builders out there, but I am looking for one app for the entire web application development process. My goal is to find an application that a team of developers can use together, using the built-in code of the app as much as possible. E.g. the app will provide a module just for handling user login or a shopping cart; a developer just needs to drag and drop the module into the logic designer and the code will be generated. This way the functionality will be in a module and the code will always be standard across developers. So if a new developer gets on board, he will just need to use the system and get up and running quickly. To explain this better: there are a lot of PHP frameworks, e.g. CakePHP, CodeIgniter, etc., which I can use to help with coding, but I would still need to create (code) the GUI, writing quite a bit of code. I am looking for a tool/app that is a little more high level than those frameworks. Here are 2 example apps I found during my Google search which have a visual logic designer and GUI builder in one single app, plus single-click deployment (but I need it to produce PHP apps, or at least let me deploy the (PHP) code to a LAMP/WAMP server): Wavemaker: for Java. Agile from Outsystems: for Java or .NET (this one is really good, with a workflow drag and drop logic designer!). Talend: it is just an ETL tool, but the concept is what I want to bring up: drag and drop, point and click logic design. Custom code can be added if needed, but the drag and drop process already finishes the structure and most of the coding of the web app one needs to build. I would list Adobe Flex, but it is more like a GUI designer + IDE, not exactly what I want to describe here. The drag and drop/workflow logic designer is key for the app. I could go the CMS route by learning how to extend one, but that is not flexible enough for me and has a long learning curve. Has anybody come across this type of app before? Or any idea of how I can find such apps? I have googled for a long time, and I don't see any of them for PHP and just a few (just 2) for Java. Thanks in advance!

    Read the article

  • How can I create a Base64-Encoded string from a GDI+ Image in C++?

    - by Schnapple
    I asked a question recently, How can I create an Image in GDI+ from a Base64-Encoded string in C++?, which got a response that led me to the answer. Now I need to do the opposite - I have an Image in GDI+ whose image data I need to turn into a Base64-Encoded string. Due to its nature, it's not straightforward. The crux of the issue is that an Image in GDI+ can save out its data to either a file or an IStream*. I don't want to save to a file, so I need to use the resulting stream. Problem is, this is where my knowledge breaks down. This first part is what I figured out in the other question // Initialize GDI+. GdiplusStartupInput gdiplusStartupInput; ULONG_PTR gdiplusToken; GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL); // I have this decode function from elsewhere std::string decodedImage = base64_decode(Base64EncodedImage); // Allocate the space for the stream DWORD imageSize = decodedImage.length(); HGLOBAL hMem = ::GlobalAlloc(GMEM_MOVEABLE, imageSize); LPVOID pImage = ::GlobalLock(hMem); memcpy(pImage, decodedImage.c_str(), imageSize); // Create the stream IStream* pStream = NULL; ::CreateStreamOnHGlobal(hMem, FALSE, &pStream); // Create the image from the stream Image image(pStream); // Cleanup pStream->Release(); GlobalUnlock(hMem); GlobalFree(hMem); (Base64 code) And now I'm going to perform an operation on the resulting image, in this case rotating it, and now I want the Base64-equivalent string when I'm done. // Perform operation (rotate) image.RotateFlip(Gdiplus::Rotate180FlipNone); IStream* oStream = NULL; CLSID tiffClsid; GetEncoderClsid(L"image/tiff", &tiffClsid); // Function defined elsewhere image.Save(oStream, &tiffClsid); // And here's where I'm stumped. (GetEncoderClsid) So what I wind up with at the end is an IStream* object. But here's where both my knowledge and Google break down for me. IStream shouldn't be an object itself, it's an interface for other types of streams. I'd go down the road from getting string-Image in reverse, but I don't know how to determine the size of the stream, which appears to be key to that route. How can I go from an IStream* to a string (which I will then Base64-Encode)? Or is there a much better way to go from a GDI+ Image to a string?

    Read the article

  • Incompatible library when creating a new project with Aptana

    - by Phil Rice
    I am a ruby and rails newbie, so my abilities to debug this are somewhat limited. I have just added the eclipse plugin which failed, then downloaded the latest aptana studio which also failed. The failure was the same in both cases. The nature of the failure is that when I create a new rails project, I get an error message about an incompatible library version "C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/http11.so". The project is actually created, along with directories and files. Google searches around this error message have only returned a couple of hits, which were not very helpful I am wondering if this is about 64 bit libraries. My software stack is: Windows 7 home premium 64bit Aptana RadRails, build: 2.0.5.1278709071 Ruby1.9.3 gem 1.8.24 The console shows: "4320" C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': iconv will be deprecated in the future, use String#encode instead. C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': incompatible library version - C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/http11.so (LoadError) from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5-x86-mswin32-60/lib/mongrel.rb:12:in `<top (required)>' from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:60:in `require' from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:60:in `rescue in require' from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:35:in `require' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `block in require' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:521:in `new_constants_in' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/activesupport-2.3.4/lib/active_support/dependencies.rb:156:in `require' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler/mongrel.rb:1:in `<top (required)>' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `const_get' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `block in get' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `each' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rack-1.0.0/lib/rack/handler.rb:17:in `get' from C:/Ruby193/lib/ruby/gems/1.9.1/gems/rails-2.3.4/lib/commands/server.rb:45:in `<top (required)>' from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from C:/Ruby193/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from script/server:3:in `<top (required)>' from -e:2:in `load' from -e:2:in `<main>'

    Read the article

  • Looking for an Open Source Project in need of help

    - by hvidgaard
    Hi StackOverflow! I'm a CS student well on my way to graduating. I have had a difficult time finding relevant student jobs (they seem to be taken merely hours after the notice goes up on the board), so instead I'm looking for an open source project in need of help. I'm aware that I should choose one that I use, but I'm not aware of any OS project that I use that needs help. That's why I'm asking you. I don't have any deep experience, but here are some of my biggest projects so far: a BitTorrent-ish client in Python (a subset of BitTorrent); an HTTP 1.1 webserver in Java; a compiler from a subset of Java to run on the JRE; a Flash-framework project to model an iPad look and feel (not to run actual iPad programs), complete with an API for programs; and a complete MySQL database for a booking system, with departure and arrival times, so you could only book valid tickets (with a Java frontend). Java and languages like AS3 and C# feel natural to me, as does Python, and I have done a fair bit of hacking around in C, but I don't feel very comfortable with it. Mostly I'm afraid of messing up because I have such a high degree of control. I would like to think I'm well aware of good software design practices, but in reality what I do is ask myself "would I like to use/maintain this?", and I love to refactor my code because I see optimizations. I love algorithms and making them run in the best possible time. I don't have any preferred domain to work in, but I wouldn't mind it being graphics or math heavy. Ideally I'm looking for a project in C++ to learn the ins and outs of it, but I'm well aware that I don't know that language very well. I would like to have a mentor-like figure until I'm confident enough to stand on my own, not one to review all my code (I'm sure someone will to start with anyway), but to ask questions about the project and language in question. I do have a wife and two children, so don't expect me to put in 10+ hours every week. In return, I can work on my own, I strive to write modular and maintainable code, and I know how to read an API and use Google, StackOverflow and online resources in general. If you have any questions, shoot. I'm looking forward to your suggestions.

    Read the article

  • How to get a physics engine like Nape working?

    - by Glacius
    Introduction: I think Nape is a relatively new engine, so some of you may not know it. It's supposedly faster than Box2D, and I like that there is decent documentation. Here's the site: http://code.google.com/p/nape/ I'm relatively new to programming. I am decent at AS3's basic functionality, but every time I try to implement some kind of engine or framework I can't even seem to get it to work. With Nape I feel I got a little further than before, but I still got stuck. My problem: I'm using Adobe CS5, and I managed to import the SWC file as described here. Next I tried to copy the source of one of the demos, like this one, and get it to work, but I keep getting errors. I made a new class file, copied the demo source into it, and tried to add it to the stage. My stage code basically looks like this: import flash.Boot; // these 2 lines are as described in the tutorial new Boot(); var demo = new Main(); // these 2 are me guessing what I'm supposed to do addChild(demo); Well, it seems the source code is not even being recognized by Flash as a valid class file. I tried editing it, but even if I get it recognized (give it a package name and add curly brackets) I still get a bunch of errors. Is it pseudocode or something? What is going on? My goal: I can imagine I'm going about this the wrong way, so let me explain what I'm trying to achieve. I basically want to learn how to use the engine by starting from a simple basic example that I can edit and mess around with. If I can't even get a working example then I'm unable to learn anything. Preferably I don't want to start using something like FlashDevelop (as I'd have to learn how to use the program), but if it can't be helped then I can give it a try. Thank you.

    Read the article

  • Just a small problem regarding a JavaScript BOM question

    - by caramel1991
    The question is this: Create a page with a number of links. Then write code that fires on the window onload event, displaying the href of each of the links on the page. And this is my solution: <html> <body language="Javascript" onload="displayLink()"> <a href="http://www.google.com/">First link</a> <a href="http://www.yahoo.com/">Second link</a> <a href="http://www.msn.com/">Third link</a> <script type="text/javascript" language="Javascript"> function displayLink() { for(var i = 0;document.links[i];i++) { alert(document.links[i].href); } } </script> </body> </html> This is the answer provided by the book: <html> <head> <script language="JavaScript" type="text/javascript"> function displayLinks() { var linksCounter; for (linksCounter = 0; linksCounter < document.links.length; linksCounter++) { alert(document.links[linksCounter].href); } } </script> </head> <body onload="displayLinks()"> <A href="link0.htm" >Link 0</A> <A href="link1.htm">Link 2</A> <A href="link2.htm">Link 2</A> </body> </html> Before I got to the JavaScript tutorial section on how to check the user's browser version or model, I was using the same method as the example, accessing the length property of the links array for the loop; but after I read through the tutorial, I found out that I can also use this alternative way, where the test condition will evaluate to true only if document.links[i] returns a valid value. So is my code written using a valid method? If it's not, any comments regarding how to write better code? Correct me if I'm wrong, but I heard some people say "good code is not evaluated solely on whether it works or not, but in terms of speed, the ability to comprehend the code, and possibly letting others understand the code easily". Is this true?

    Read the article

  • Using jQuery or javascript to render json into multi-column table

    - by Scott Yu - UX designer
    I am trying to render JSON into an HTML table. But the difficulty is making it so it loops through the JSON and renders multiple columns if necessary. For the example below, what I want is this: Result wanted: <table> <tr><th>AppName</th><td>App 1</td><td>App 2</td></tr> <tr><th>Last Modified</th><td>10/1/2012</td><td></td></tr> <tr><th>App Logo</th><td>10/1/2012</td><td></td></tr> blahblah </table> <table> <tr><th>AppName</th><td>App 1</td></tr> blahblah </table> JSON Example: "Records": [ { "AppName": "App 1", "LastModified": "10/1/2012, 9:30AM", "ShipTo_Name": "Dan North", "ShipTo_Address": "Dan North", "ShipTo_Terms": "Dan North", "ShipTo_DueDate": "Dan North", "Items 1": [ { "Item_Name": "Repairs", "Item_Description": "Repair Work" } ] }, { "AppName": "App 2", "AppLogo": "http://www.google.com/logo.png", "LastModified": "10/1/2012, 9:30AM", "BillTo_Name": "Steve North", "Items 1": [ { "Item_Name": "Repairs", "Item_Description": "Repair Work" } ] } ], "Records": [ { "AppName": "App 1", "LastModified": "10/1/2012, 9:30AM", "ShipTo_Name": "222", "ShipTo_Address": "333 ", "ShipTo_Terms": "444", "ShipTo_DueDate": "5555", "Items 1": [ { "Item_Name": "Repairs", "Item_Description": "Repair Work" } ] } ], Code I am using now: function CreateComparisonTable (arr,level,k) { var dumped_text = ""; if(!level) level = 0; //The padding given at the beginning of the line. var level_padding = ""; for(var j=0;j<level+1;j++) level_padding = "--"; if(typeof(arr) == 'object') { //Array/Hashes/Objects for (var item in arr) { var value = arr[item]; if (typeof(value) == 'object') { //If it is an array, if(item !=0) { dumped_text += '<tr><td>' + item + '<br>'; dumped_text += CreateComparisonTable(value,level+1); dumped_text += '</td></tr>'; } else { dumped_text += CreateComparisonTable(value,level, value.length); } } else { dumped_text += '<tr><td>' + level_padding + item + '</td><td>' + value + '</td></tr>'; } } } return dumped_text; } Jsfiddle here

    Read the article

  • How to salvage SQL server 2008 query from KILLED/ROLLBACK state without waiting half a day?

    - by littlegreen
    I have a stored procedure that inserts batches of millions of rows, emerging from a certain query, into an SQL database. It has one parameter selecting the batch; when this parameter is omitted, it will gather a list of batches and recursively call itself, in order to iterate over batches. In (pseudo-)code, it looks something like this: CREATE PROCEDURE spProcedure AS BEGIN IF @code = 0 BEGIN ... WHILE @@Fetch_Status=0 BEGIN EXEC spProcedure @code FETCH NEXT ... INTO @code END END ELSE BEGIN -- Disable indexes ... INSERT INTO table SELECT (...) -- Enable indexes ... Now it can happen that this procedure is slow, for whatever reason: it can't get a lock, or one of the indexes it uses is misdefined or disabled. In that case, I want to be able to kill the procedure, truncate and recreate the resulting table, and try again. However, when I try to kill the procedure, the process frequently oozes into a KILLED/ROLLBACK state from which there seems to be no return. From Google I have learned to do an sp_lock, find the spid, and then kill it with KILL <spid>. But when I try to kill it, it tells me SPID 75: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 554 seconds. I did find a forum message hinting that another spid should be killed before the other one can start a rollback. But that didn't work for me either, plus I do not understand why that would be the case... could it be because I am recursively calling my own stored procedure? (But it should have the same spid, right?) In any case, my process is just sitting there, being dead, not responding to kills, and locking the table. This is very frustrating, as I want to go on developing my queries, not waiting hours on my server sitting dead while pretending to be finishing a supposed rollback. Is there some way in which I can tell the server not to store any rollback information for my query? Or not to allow any other queries to interfere with the rollback, so that it will not take so long? Or how to rewrite my query in a better way, or how to kill the process successfully without restarting the server?

    Read the article

  • Hibernate Communications Link Failure in Restlet-Hibernate Based Java application powered by MySQL

    - by Vatsala
    Let me describe my question - I have a Java application with Hibernate as the DB interfacing layer over MySQL. I get the communications link failure error in my application. The occurrence of this error is a very specific case: I get it when I leave the MySQL server unattended for more than approximately 6 hours (i.e. when no queries are issued to MySQL for more than approximately 6 hours). I am pasting a top-level exception description below, and adding a pastebin link for a detailed stack trace. javax.persistence.PersistenceException: org.hibernate.exception.JDBCConnectionException: Cannot open connection - Caused by: org.hibernate.exception.JDBCConnectionException: Cannot open connection - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago. - Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure - The last packet successfully received from the server was 1,274,868,181,212 milliseconds ago. The last packet sent successfully to the server was 0 milliseconds ago. - Caused by: java.net.ConnectException: Connection refused: connect Here is the link to the pastebin for further investigation - http://pastebin.com/4KujAmgD What I understand from these exception statements is that MySQL is refusing to accept any connections after a period of idle/nil activity. I have been reading up a bit about this via Google search, and came to know that one of the possible ways to overcome this is to set values for c3p0 properties, as c3p0 comes bundled with Hibernate. Specifically, I read from here http://www.mchange.com/projects/c3p0/index.html that setting two properties, idleConnectionTestPeriod and preferredTestQuery, will solve this for me. But these values don't seem to have had an effect. Is this the correct approach to fixing this? If not, what is the right way to get over this? The following are related communications link failure questions at stackoverflow.com, but I've not found a satisfactory answer among their answers: http://stackoverflow.com/questions/2121829/java-db-communications-link-failure http://stackoverflow.com/questions/298988/how-to-handle-communication-link-failure Note 1 - I don't get this error when I am using my application continuously. Note 2 - I use JPA with Hibernate, and hence my hibernate.dialect, etc., Hibernate properties reside within the persistence.xml in the META-INF folder (does that prevent the c3p0 properties from working?)
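
    For context, a hedged persistence.xml fragment of the kind usually suggested for this: Hibernate only hands settings to c3p0 when the C3P0 connection provider is selected, and under JPA the properties carry the hibernate.c3p0 prefix. The values below are illustrative, and if a particular property is not picked up this way it can instead go into a c3p0.properties file on the classpath:

        <property name="hibernate.connection.provider_class"
                  value="org.hibernate.connection.C3P0ConnectionProvider"/>
        <!-- test idle pooled connections every 300 s, well inside MySQL's wait_timeout -->
        <property name="hibernate.c3p0.idle_test_period" value="300"/>
        <property name="hibernate.c3p0.preferredTestQuery" value="SELECT 1"/>
        <!-- retire connections idle longer than 30 minutes -->
        <property name="hibernate.c3p0.timeout" value="1800"/>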

    Read the article

  • How do I call the methods in a model via controller? Zend Framework

    - by Joel
    Hi guys, I've been searching for tutorials to better understand this, but I'm having no luck. Please forgive the lengthy explanation, but I want to make sure I explain myself well. First, I'm quite new to the MVC structure, though I have been doing tutorials and learning as best I can. I have been moving a live site over into the Zend Framework model. So far, I have all the views within views/scripts/index/example.phtml. Therefore I'm using one IndexController, and I have the code in each Action method for each page, i.e. public function exampleAction(). Because I didn't know how to interact with a model, I put all the methods at the bottom of the controller (a fat controller). So basically, I had a working site by using a View and Controller and no Model. ... Now I'm trying to learn how to incorporate the Model. So I created a View at view/scripts/calendar/index.phtml, a new Controller at controller/CalendarControllers.php, and a new model at model/Calendar.php. The problem is I think I'm not correctly communicating with the model (I'm still new to OOP). Can you look over my controller and model and tell me if you see a problem? I need to return an array from runCalendarScript(), but I'm not sure if I can return an array into the object like I'm trying to. I don't really understand how to "run" runCalendarScript() from the controller. Thanks for any help! I'm stripping out most of the guts of the methods for the sake of brevity: controller: <?php class CalendarController extends Zend_Controller_Action { public function indexAction() { $finishedFeedArray = new Application_Model_Calendar(); $this->view->googleArray = $finishedFeedArray; } } model: <?php class Application_Model_Calendar { public function _runCalendarScript(){ $gcal = $this->_validateCalendarConnection(); $uncleanedFeedArray = $this->_getCalendarFeed($gcal); $finishedFeedArray = $this->_cleanFeed($uncleanedFeedArray); return $finishedFeedArray; } //Validate Google Calendar connection public function _validateCalendarConnection() { ... return $gcal; } //extracts googles calendar object into the $feed object public function _getCalendarFeed($gcal) { ... return $feed; } //cleans the feed to just text, etc protected function _cleanFeed($uncleanedFeedArray) { $contentText = $this->_cleanupText($event); $eventData = $this->_filterEventDetails($contentText); return $cleanedArray; } //Cleans up all formatting of text from Calendar feed public function _cleanupText($event) { ... return $contentText; } //filterEventDetails protected function _filterEventDetails($contentText) { ... return $data; } }

    Read the article

  • To Interface or Not?: Creating a polymorphic model relationship in Ruby on Rails dynamically..

    - by Globalkeith
    Please bear with me for a moment as I try to explain exactly what I would like to achieve. In my Ruby on Rails application I have a model called Page. It represents a web page. I would like to enable the user to arbitrarily attach components to the page. Some examples of "components" would be Picture, PictureCollection, Video, VideoCollection, Background, Audio, Form, Comments. Currently I have a direct relationship between Page and Picture like this: class Page < ActiveRecord::Base has_many :pictures, :as => :imageable, :dependent => :destroy end class Picture < ActiveRecord::Base belongs_to :imageable, :polymorphic => true end This relationship enables the user to associate an arbitrary number of Pictures with the page. Now if I want to provide multiple collections I would need an additional model: class PictureCollection < ActiveRecord::Base belongs_to :collectionable, :polymorphic => true has_many :pictures, :as => :imageable, :dependent => :destroy end And alter Page to reference the new model: class Page < ActiveRecord::Base has_many :picture_collections, :as => :collectionable, :dependent => :destroy end Now it would be possible for the user to add any number of image collections to the page. However, this is still very static in terms of the :picture_collections reference in the Page model. If I add another "component", for example :video_collections, I would need to declare another reference in Page for that component type. So my question is this: do I need to add a new reference for each component type, or is there some other way? In ActionScript/Java I would declare an interface Component and make all components implement that interface; then I could just have a single attribute :components which contains all of the dynamically associated model objects. This is Rails, and I'm sure there is a great way to achieve this, but it's a tricky one to Google. Perhaps you good people have some wise suggestions. Thanks in advance for taking the time to read and answer this.

    Read the article

  • Javascript: Writing a firefox extension with sockets

    - by Johnny Grass
    I need to write a firefox extension that creates a server socket (I think that's what it's called) and returns the browser's current url when a client application (running on the same computer) sends it a request. The thing is that I have no Java/Javascript background at all and I'm pressed for time, so I am trying to hack something together from code samples. So far I've been mildly successful. I've been working with code from this question which is used in the open source Firefox extension PolyChrome. I have the following code: var reader = { onInputStreamReady : function(input) { var input_stream = Components.classes["@mozilla.org/scriptableinputstream;1"] .createInstance(Components.interfaces.nsIScriptableInputStream); input_stream.init(input); input_stream.available(); var request = ''; while (input_stream.available()) { request = request + input_stream.read(512); } var checkString = "foo" if (request.toString() == checkString.toString()) { output_console('URL: ' + content.location.href); } else output_console("nothing"); var thread_manager = Components.classes["@mozilla.org/thread-manager;1"].getService(); input.asyncWait(reader,0,0,thread_manager.mainThread); } } var listener = { onSocketAccepted: function(serverSocket, clientSocket) { output_console("Accepted connection on "+clientSocket.host+":"+clientSocket.port); input = clientSocket.openInputStream(0, 0, 0).QueryInterface(Components.interfaces.nsIAsyncInputStream); output = clientSocket.openOutputStream(Components.interfaces.nsITransport.OPEN_BLOCKING, 0, 0); var thread_manager = Components.classes["@mozilla.org/thread-manager;1"].getService(); input.asyncWait(reader,0,0,thread_manager.mainThread); } } var serverSocket = Components.classes["@mozilla.org/network/server-socket;1"]. createInstance(Components.interfaces.nsIServerSocket); serverSocket.init(9999, true, 5); output_console("Opened socket on " + serverSocket.port); serverSocket.asyncListen(listener); I have a few questions. So far I can telnet into localhost and get a response, but my string comparison in the reader seems to fail even if I enter "foo". I don't get why. What am I missing? The sample code I'm using opens up a console window and prints output when I telnet into localhost. Ideally I would like the output to be returned as a response when the client sends a request to the server socket with a passphrase. How do I go about doing that? Is doing this a good idea? Does it create security vulnerabilities on the computer? How can I block connections to the socket from other computers? What is a good place to read about javascript sockets? My Google searches have been pretty fruitless, but then maybe I'm not using the right keywords.

    Read the article

  • General workflow to allow multiple OpenIDs to be associated with one app account

    - by BobTodd
    I have a (typical?) scenario: my app's users can use multiple OpenIDs mapped to one app account (like stackoverflow). For me the unique thing on the account is the email address, so this binds OpenIDs to the profile. The question is how to allow a user to start using a second OpenID once one is set up. I am asking as I have read that it is a security hole to allow automatic account-OpenID syncing based simply on the provider-supplied email address, as someone could easily spoof someone's email address to create a spoofed OpenID and falsely access the account (how, I am not sure) - although this seems to be exactly how stack operates. See options a. and b. below. The problem for me with a. is: what happens if the original OpenID no longer works for whatever reason - how would you set up a new OpenID? Would b. be more acceptable if we used email verification? Does anyone have an article detailing a "standard" way (set of user stories) for this - it seems to be an increasingly popular way to authenticate. I have tried to detail this in a rough decision tree... 1. My Site > authentication landing page - user chooses an OpenID (Facebook, Google, myOpenID etc.), redirection > 2. Provider site returns with token (includes user registering a new OpenID, logging in, or already being logged in to the provider site) 3. My Site > use token id to look up user 3.1 Profile exists? Yes > authenticate. Ends. No > 3.1.1 Was an email address supplied by the provider? Yes > look up user by email address 3.1.1.1 Profile exists? Yes > a. error message - please log in with your existing OpenID and associate this OpenID (from a special page) Yes > b. or associate this OpenID with the existing profile automatically. Authenticate. Ends. No > Register profile. With the registration email address follow 3.1.1, except this time, where the email is unique, we will associate the OpenID. Ends.
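
    A small sketch of the association rule implied by option a., with every store and session call hypothetical: an unknown OpenID is only ever attached to an account from inside an already-authenticated session, never merged automatically on a matching email address:

        public void onOpenIdVerified(String claimedId, String email, Session session) {
            Profile profile = store.findByOpenId(claimedId);       // hypothetical lookup
            if (profile != null) {
                session.login(profile);                            // known identifier: normal login
            } else if (session.isLoggedIn()) {
                store.addOpenId(session.profile(), claimedId);     // explicit association (option a.)
            } else {
                // Unknown identifier and no session: register a new profile. Do not
                // auto-link to an existing account on an email match alone; at most,
                // trigger email verification first (the safer reading of option b.).
                startRegistration(claimedId, email);
            }
        }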

    Read the article

  • Ping remote server and wait to get data

    - by infinity
    Hi, I'm building my first application for Android and I've reached a point where I can't find a solution and have no idea what to search for in Google. So, the problem: I am pinging a remote server with a GET request through the application, passing some parameters like file_id. Then the server gives back a confirmation if the file exists, or an error otherwise, both in plain text. The error string is $$$ERROR$$$. The confirmation is actually a JSON string that holds the path to the file. If the file doesn't exist on the server, the server generates the error message and starts downloading the file and processing it, which normally takes 10-30 seconds. What would be the best way to check if the file is ready for download? I have a DownloadFile class that extends AsyncTask, but before I reach the point where I download the file I need the URL, which is dependent on the previous request, which is in the main class in the UI thread. Here is some code: public class MainActivity extends Activity { private String getInfo() { // Create a new HttpClient and Post Header HttpClient httpClient = new DefaultHttpClient(); HttpGet httpPost = new HttpGet(infoUrl); StringBuilder sb = null; String data; JSONObject jObject = null; try { HttpResponse response = httpClient.execute(httpPost); // This might be equal to "$$$ERROR$$$" if no file exists sb = inputStreamToString(response.getEntity().getContent()); } catch(ClientProtocolException e) { // TODO Auto-generated catch block Log.v("Error: pushItem ClientProtocolException: ", e.toString()); } catch (IOException e) { // TODO Auto-generated catch block Log.v("Error: pushItem IOException: ", e.toString()); } // Clean the data to be compliant JSON format data = sb.toString().replace("info = ", ""); try { jObject = new JSONObject(data); data = jObject.getString("h"); fileTitle = jObject.getString("title"); } catch (JSONException e) { // TODO Auto-generated catch block e.printStackTrace(); } downloadUrl = String.format(downloadUrl, fileId, data); return downloadUrl; } } So my idea was to get the content and, if it is equal to $$$ERROR$$$, go into a loop until the JSON data is passed, but I guess there is a better solution. Note: I don't have control over the server output, so I have to deal with what I have.
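
    One hedged way to shape the waiting, inside doInBackground() so the UI thread never blocks: poll the info URL with a short delay until the plain-text error marker disappears. This reuses the org.apache.http classes already in the post; the delay and the retry cap are illustrative:

        private String waitForFileInfo(HttpClient client, String infoUrl)
                throws IOException, InterruptedException {
            for (int attempt = 0; attempt < 15; attempt++) { // roughly 30 s at 2 s per try
                HttpResponse response = client.execute(new HttpGet(infoUrl));
                String body = EntityUtils.toString(response.getEntity());
                if (!body.contains("$$$ERROR$$$")) {
                    return body; // the server has finished processing: this is the JSON
                }
                Thread.sleep(2000); // back off before asking again
            }
            return null; // still not ready, let the caller report a timeout
        }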

    Read the article

  • How do I detect server status in a port scanner Java implementation

    - by akz
    I am writing a port scanner in Java and I want to be able to distinguish between the following 4 use cases: port is open; port is open and the server banner was read; port is closed; server is not live. I have the following code: InetAddress address = InetAddress.getByName("google.com"); int[] ports = new int[]{21, 22, 23, 80, 443}; for (int i = 0; i < ports.length; i++) { int port = ports[i]; Socket socket = null; try { socket = new Socket(address, port); socket.setSoTimeout(500); System.out.println("port " + port + " open"); BufferedReader reader = new BufferedReader( new InputStreamReader(socket.getInputStream())); String line = reader.readLine(); if (line != null) { System.out.println(line); } socket.close(); } catch (SocketTimeoutException ex) { // port was open but nothing was read from input stream ex.printStackTrace(); } catch (ConnectException ex) { // port is closed ex.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } finally { if (socket != null && !socket.isClosed()) { try { socket.close(); } catch (Exception e) { e.printStackTrace(); } } } } The problem is that I get a ConnectException both when the port is closed and when the server cannot be reached, but with a different exception message: java.net.ConnectException: Connection timed out: connect when the connection was never established, and java.net.ConnectException: Connection refused: connect when the port was closed, so I cannot make the distinction between the two use cases without digging into the actual exception message. The same thing happens when I try a different approach to the socket creation. If I use: socket = new Socket(); socket.setSoTimeout(500); socket.connect(new InetSocketAddress(address, port), 1000); I have the same problem, but with SocketTimeoutException instead: I get java.net.SocketTimeoutException: Read timed out if the port was open but there was no banner to be read, and java.net.SocketTimeoutException: connect timed out if the server is not live or the port is closed. Any ideas? Thanks in advance!
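
    For what it's worth, the cases can usually be told apart without parsing exception messages by giving connect() its own timeout and remembering which phase threw. A hedged sketch (depending on the OS, an unreachable host can still surface as a ConnectException rather than a timeout, so treat the mapping as approximate):

        boolean connected = false;
        Socket socket = new Socket();
        try {
            socket.connect(new InetSocketAddress(address, port), 1000); // connect phase
            connected = true;
            socket.setSoTimeout(500);                                   // read (banner) phase
            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            String banner = reader.readLine();
            System.out.println("port " + port + " open"
                    + (banner != null ? ", banner: " + banner : ", stream closed"));
        } catch (ConnectException e) {
            System.out.println("port " + port + " closed (connection refused)");
        } catch (SocketTimeoutException e) {
            // The flag tells us which phase timed out.
            System.out.println(connected
                    ? "port " + port + " open, but no banner was sent"
                    : "host not reachable (connect timed out)");
        } catch (IOException e) {
            System.out.println("port " + port + ": I/O error - " + e.getMessage());
        } finally {
            try { socket.close(); } catch (IOException ignored) { }
        }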

    Read the article

  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that going into a catch block has a significant cost when executing a program; however, I was wondering whether entering a try{} block also had any impact, so I started looking for an answer in Google. There are many opinions, but no benchmarking at all. Some answers I found were: "Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum?", "Try Catch Performance Java", and "Java try catch blocks". However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a CSV file with this format: host;ip;number;date;status;email;uid;name;lastname;promo_code; where everything after status is optional and will not even have the corresponding ';', so when parsing, a validation has to be done to see if the value is there; here's where the try/catch issue came to my mind. The current code that I inherited in my company does this: StringTokenizer st=new StringTokenizer(line,";"); String host = st.nextToken(); String ip = st.nextToken(); String number = st.nextToken(); String date = st.nextToken(); String status = st.nextToken(); String email = ""; try{ email = st.nextToken(); }catch(NoSuchElementException e){ email = ""; } and it repeats what is done for email for uid, name, lastname and promo_code. I changed everything to: if(st.hasMoreTokens()){ email = st.nextToken(); } and in fact it performs faster when parsing a file that doesn't have the optional columns. Here are the average times: --- Trying: 122 milliseconds --- Checking: 33 milliseconds. However, here's what confused me and the reason I'm asking: when running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version, so my question is: does the try block really have no performance impact on my code? The average times for this example are: --- Trying: 105 milliseconds --- Checking: 43 milliseconds. Can somebody explain what's going on here? Thanks a lot
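
    A side note on the mechanics: a try block compiles to an entry in the method's exception table, so the non-throwing path through a try should cost essentially nothing; the expensive part is constructing and throwing the exception (mostly fillInStackTrace()). That accounts for the first pair of numbers, while a persistent gap even with all columns present is usually a benchmarking artifact, which a rough harness like the one below (with the usual caveats about JIT warm-up and dead-code elimination) tends to expose:

        String line = "host;ip;number;date;status"; // no optional columns, so nextToken() will throw
        long start = System.nanoTime();
        for (int i = 0; i < 1000000; i++) {
            StringTokenizer st = new StringTokenizer(line, ";");
            for (int skip = 0; skip < 5; skip++) st.nextToken(); // consume the mandatory fields
            String email;
            try {
                email = st.nextToken(); // throws NoSuchElementException on this data
            } catch (NoSuchElementException e) {
                email = ""; // the throw, not the try, is what costs here
            }
        }
        System.out.println("try/catch: " + (System.nanoTime() - start) / 1000000 + " ms");

        start = System.nanoTime();
        for (int i = 0; i < 1000000; i++) {
            StringTokenizer st = new StringTokenizer(line, ";");
            for (int skip = 0; skip < 5; skip++) st.nextToken();
            String email = st.hasMoreTokens() ? st.nextToken() : "";
        }
        System.out.println("hasMoreTokens: " + (System.nanoTime() - start) / 1000000 + " ms");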

    Read the article

  • How to design a service that can provide an interface as a JAX-WS web service, via JMS, or as a local method

    - by kevinegham
    Using a typical JEE framework, how do I develop and deploy a service that can be called as a web service (with a WSDL interface), be invoked via JMS messages, or called directly from another service in the same container? Here's some more context: Currently I am responsible for a service (let's call it Service X) with the following properties: Interface definition is a human readable document kept up-to-date manually. Accepts HTTP form-encoded requests to a single URL. Sends plain old XML responses (no schema). Uses Apache to accept requests + a proprietary application server (not servlet or EJB based) containing all logic which runs in a seperate tier. Makes heavy use of a relational database. Called both by internal applications written in a variety of languages and also by a small number of third-parties. I want to (or at least, have been told to!): Switch to a well-known (pref. open source) JEE stack such as JBoss, Glassfish, etc. Split Service X into Service A and Service B so that we can take Service B down for maintenance without affecting Service A. Note that Service B will depend on (i.e. need to make requests to) Service A. Make both services easier for third parties to integrate with by providing at least a WS-I style interface (WSDL + SOAP + XML + HTTP) and probably a JMS interface too. In future we might consider a more lightweight API too (REST + JSON? Google Protocol Buffers?) but that's a nice to have. Additional consideration are: On a smaller deployment, Service A and Service B will likely to running on the same machine and it would seem rather silly for them to use HTTP or a message bus to communicate; better if they could run in the same container and make method calls to each other. Backwards compatibility with the existing ad-hoc Service X interface is not required, and we're not planning on re-using too much of the existing code for the new services. I'm happy with either contract-first (WSDL I guess) or (annotated) code-first development. Apologies if my terminology is a bit hazy - I'm pretty experienced with Java and web programming in general, but am finding it quite hard to get up to speed with all this enterprise / SOA stuff - it seems I have a lot to learn! I'm also not very used to using a framework rather than simply writing code that calls some packages to do things. I've got as far as downloading Glassfish, knocking up a simple WSDL file and using wsimport + a little dummy code to turn that into a WAR file which I've deployed.
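
    On the WS-I interface and the same-machine concern, one hedged code-first sketch: a single session bean exposed through JAX-WS, while other beans in the same container inject it and make plain local calls (no HTTP or JMS hop). The names are illustrative, it assumes an EJB 3.1 container such as GlassFish v3, and a JMS entry point would typically be a message-driven bean delegating to the same bean:

        import javax.ejb.EJB;
        import javax.ejb.Stateless;
        import javax.jws.WebMethod;
        import javax.jws.WebService;

        @Stateless
        @WebService(serviceName = "ServiceA") // the container generates and publishes the WSDL
        public class ServiceA {
            @WebMethod
            public String lookupStatus(String orderId) {
                return "OK:" + orderId;
            }
        }

        @Stateless
        class ServiceB {
            @EJB
            private ServiceA serviceA; // same-container call: no serialization, no network

            public String process(String orderId) {
                return serviceA.lookupStatus(orderId);
            }
        }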

    Read the article

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following is going to result in better performance for a page which loads a large amount of javascript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail. A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors). First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data. The system I currently have is something like this: * All javascript files are compressed and loaded at the bottom of the page * All javascript files have far-future cache expiration dates, so will remain (for most users) in the cache for a long time * Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required. Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple <script> tags is not causing any performance penalty, as I'm still not having any additional requests on most pages (recalling from above that almost all users have populated caches). In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached. So, what do people think is a better approach? In a similar vein, what do you think about a similar approach to CSS - is monolithic better?

    Read the article
