Search Results

Search found 18592 results on 744 pages for 'basic unix commands'.


  • Reasons for Ajax navigation breaking on Coldfusion/Apache when running an app in iOS fullscreen mode? [closed]

    - by frequent
    Not sure if this belongs on SO or here. I'm running a web app using jQuery, jQuery Mobile, and RequireJS, with Apache, ColdFusion 8, and MySQL 5.0.88 on the server side. The app works fine until I try to run it in fullscreen mode on iOS (add an icon to the home screen, launch the app from there, with <meta name="apple-mobile-web-app-capable" content="yes" /> specified). This meta tag breaks the jQuery Mobile AJAX navigation: the AJAX request fails and the requested page is loaded as a new page, thereby restarting the app on every page change. I have chased this through the whole front end, starting from RequireJS, through jQuery Mobile's AJAX navigation, down to the AJAX request being made in jQuery:

        xhr.send( ( s.hasContent && s.data ) || null );

    In a regular browser this works, no problem. In fullscreen mode it fails (readyState = 0, empty response). I have found this article, which argues that fullscreen mode is like a separate browser instance with a different HTTP user-agent string. On ASP.NET this results in the browser not being identified by the server and only basic browser capabilities being assumed (e.g. no JavaScript). I'm a little lost as to where to start looking for possible causes server-side. I have not written any server code for handling AJAX page navigation, so this must be something that is handled out of the box by ColdFusion or Apache. Question: where could I start looking for probable causes of fullscreen mode breaking AJAX navigation if I assume ColdFusion or Apache is the culprit? Is there a setting I'm missing in httpd.conf? What else could be the problem? Thanks for any input!
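
    A quick way to test the user-agent theory server-side (a sketch, assuming access to httpd.conf; the format name ua_debug is arbitrary) is to log the User-Agent header and diff a Safari request against a fullscreen-launch request:

        # httpd.conf -- log request line, status, and the User-Agent header
        LogFormat "%h %t \"%r\" %>s \"%{User-agent}i\"" ua_debug
        CustomLog logs/ua_debug_log ua_debug

    If the two user-agent strings differ, the next place to look is any Apache or ColdFusion rule (mod_rewrite conditions, browser sniffing) keyed on that header.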

    Read the article

  • Display on secondary video card (Nvidia 8400 GS): horrible refresh, bogs system

    - by minameismud
    This is my work computer, but it's a small shop. We do business software development; the most hardcore thing we create is some web animations with HTML5 and fancy JavaScript/CSS. The base machine is a Dell Precision T3500: Xeon W3550 (3.07 GHz quad), 6 GB RAM, a pair of 500 GB hard drives, and Win 7 x64 Enterprise SP1. My primary video card is an ATI FirePro V4800 1GB in a PCIe slot of some speed, driving a pair of 23" monitors at 1920x1080 through DisplayPort-HDMI adapters. The secondary card is an NVidia GeForce 8400 GS in a PCI slot, driving a single 17" at 1280x1024 through DVI. On either of the 23" monitors, windows move smoothly, scroll quickly, and are generally very responsive. On the 17", it's slow and chunky, and when I'm trying to scroll a ton of content, Windows will occasionally suggest I drop to the Windows Basic theme. I've updated drivers for both cards, and I've gotten every Windows update relating to video. Specifically:

        ATI FirePro    - Provider: Advanced Micro Devices, Inc; Date: 6/22/2014; Version: 13.352.1014.0
        NVidia 8400 GS - Provider: NVIDIA; Date: 7/2/2014; Version: 9.18.13.4052

    Unfortunately, new hardware isn't really an option. Is there anything I can do software-wise to speed up the NVidia-driven monitor?

    Read the article

  • NginxHttpAuthBasicModule with Sinatra & Passenger

    - by scainey
    I'm serving static pages from a Sinatra application using Nginx. I've implemented Basic Authentication for one page on the site using NginxHttpAuthBasicModule; the authentication succeeds, but Nginx doesn't resolve the link. The error log gives:

        2010/03/22 12:15:19 [error] 7143#0: *2902 open() "/home/me/live/mysite_home/public/mypage" failed (2: No such file or directory), client: 82.71.18.122, server: mysite.com, request: "GET /mypage HTTP/1.1", host: "mysite.com"

    The actual file is found at /home/me/live/mysite_home/live/mypage.erb. The configuration file is:

        server {
            listen 80;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

        server {
            listen 443;
            server_name mysite.com;
            root /home/me/live/mysite_home/public;
            passenger_enabled on;

            ssl on;
            ssl_certificate /etc/nginx/conf/certs/server.crt;
            ssl_certificate_key /etc/nginx/conf/certs/server.key;
            keepalive_timeout 70;

            location /mypage {
                auth_basic "Restricted";
                auth_basic_user_file htpasswd;
            }
        }

    Not sure if this is a Sinatra, Passenger or Nginx thing, or if I'm just missing something.
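
    One thing worth checking (a guess, not confirmed by the poster): requests matching a location block are handled by that block's own configuration, so without an app-serving directive inside it, Nginx falls back to serving /mypage as a static file from root, which is exactly what the open() error shows. A minimal sketch of the location with Passenger re-enabled:

        location /mypage {
            auth_basic "Restricted";
            auth_basic_user_file htpasswd;
            # route the request to the Sinatra app instead of the static root
            passenger_enabled on;
        }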

    Read the article

  • Allow more websocket connections

    - by Switz
    I want to load balance my Node.js (DerbyJS, to be specific) application on a basic Linode (512 MB RAM). It can probably take more than one process running at once. The queries/database do not concern me, as I'm not doing anything intensive. The problem at the moment is that it can only handle up to ~40 websocket connections at once. I would love it if I could get that number into the few-hundred-plus range. I anticipate a lot of traffic at launch, due to the fact that it's a highly niche community with an engaged audience, but afterwards it should be fine with just ~20-40 connections at once, which it handles perfectly as of now. I don't mind spending a bit of money for a week or two worth of running, but I also don't want to switch production environments. How can I test the process to see how many instances I am able to run on the box? Will increasing the number of processes increase the number of websockets I can handle, or is that a limitation of the server's network? I have an old MacBook Pro running Linux sitting next to me that has 2 GB RAM and a 2.8 GHz dual-core processor. Could I use this to handle some of the extra load? I could probably load balance with nginx to its IP. I'm on a FiOS home network. If you have any suggestions, I'd really appreciate it. Thanks
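
    On the "will more processes help" question, one low-cost experiment (a sketch using Node's built-in cluster module, not DerbyJS-specific; socket-based apps may additionally need sticky sessions so a client keeps hitting the same worker):

        // cluster_sketch.js - run one worker per core and compare connection counts
        var cluster = require('cluster');
        var http = require('http');
        var numCPUs = require('os').cpus().length;

        if (cluster.isMaster) {
            for (var i = 0; i < numCPUs; i++) cluster.fork();
            cluster.on('exit', function (worker) {
                console.log('worker ' + worker.process.pid + ' died');
                cluster.fork(); // replace crashed workers
            });
        } else {
            http.createServer(function (req, res) {
                res.end('handled by ' + process.pid);
            }).listen(3000);
        }

    If the ceiling stays near ~40 regardless of worker count, the limit is more likely file-descriptor or network configuration than CPU.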

    Read the article

  • Slow Routing Over LAN (Wired)

    - by reverendj1
    I'm having issues with my router (an Adtran NetVanta 3458) routing very slowly. We have two networks; let's call them A and B. When I run netperf between two servers on network A (no routing), I get speeds along the lines of 900 Mbps, which makes sense, since we have all 1 Gbps switches. When testing A to B (or vice versa), I get speeds along the lines of 22 Mbps. I have also tested connecting my laptop to the switchports on the router, testing against servers on network A (no routing), and got speeds around 90 Mbps, which makes sense, since the switchports on the router are 100 Mbps. Does anyone have any idea why routing would be so slow? We bought the router over a year ago, and we think it has been doing this since then, but we never actually tested it before. (Network B isn't really used much, so we didn't notice.) We were implementing a site-to-site VPN and noticed it was ridiculously slow, so we started testing basic routing performance. I have ruled out cabling and router CPU/memory utilization. Adtran looked at my config but didn't see anything wrong with it.
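
    For anyone reproducing the measurements, the netperf runs described above look roughly like this (a sketch; the address is a placeholder for a host on the far side of the router):

        # on the receiving host (network B)
        netserver

        # on the sending host (network A): 30-second TCP throughput test across the router
        netperf -H 192.0.2.10 -t TCP_STREAM -l 30

    Running the same pair of hosts on one network versus across the router isolates the forwarding path, which is the comparison behind the 900 Mbps vs 22 Mbps numbers above.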

    Read the article

  • Provide a user with service start/stop permissions

    - by slakr007
    I have a very basic domain that I use for development. I want to create a GPO that gives users in the Backup Operators group start/stop permissions for two specific services on a specific server. I have read several articles about this, and they all indicate that it is very easy: create a GPO, give the user start/stop permissions to the services under Computer Configuration > Policies > Windows Settings > Security Settings > System Services, and voila, done. Not so much, but I have to be doing something wrong. My install is pretty much the default: the domain controller is in the Domain Controllers OU, the Backup Operators group is under Builtin, and I created a user called Backup under Users. I created a GPO and linked it to the Domain Controllers OU. In the GPO I give the Backup user permission to start/stop two specific services on the server. I forced an update with gpupdate. I used Group Policy Results to verify that my GPO is the winning GPO giving the user permission to start/stop the two services. However, the user is still unable to start/stop the services. I attempted different loopback settings on the GPO, to no avail. I'm sort of at a loss here.
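
    As a cross-check that it's the permissioning rather than the GPO plumbing, service permissions can also be granted directly on the target server with SubInACL from the Windows Resource Kit (a sketch; service and account names are placeholders, and T/O are SubInACL's start/stop access letters):

        rem grant start (T) and stop (O) on one service to one account
        subinacl /service "MyServiceName" /grant=MYDOMAIN\Backup=TO

    If that works while the GPO version doesn't, the policy isn't landing on the server; worth checking, since a GPO linked to the Domain Controllers OU only applies to machines in that OU.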

    Read the article

  • CNAME vs A records

    - by deb
    I built a small Rails app that allows users to make a simple site. It uses subdomain accounts, e.g. deb.myapp.com. Whenever a user wanted a domain name associated with their site, they would change their NS records to point to Slicehost, where the application is hosted, and I would manage the DNS records myself. However, as more people are using the application, this is not an option for me anymore. I prefer users to keep their nameservers at GoDaddy, Register.com, etc., so they can log in and manage their own MX records or whatever else they need to change. My question is: should I have them change the A records to point to my server's IP, or should I have them create a CNAME record? Do they need to delete the default A records to allow the CNAME record to work? Will the A record take precedence and overrule the CNAME record? Thanks in advance. Sorry if this is a very basic question; I've read other posts and I can't find a definite answer.
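
    For reference, the two options side by side in BIND zone-file syntax (a sketch; example.com and 203.0.113.10 are placeholder values). Note that DNS forbids a CNAME from coexisting with other records at the same name, which is why it can't be used at the zone apex (example.com itself) and why an existing A record for the same name would indeed have to be removed first:

        www.example.com.   IN  A      203.0.113.10      ; option 1: point at the server IP
        www.example.com.   IN  CNAME  apps.myapp.com.   ; option 2: alias a name you control

    The CNAME has the practical advantage that if the server IP ever changes, only your own DNS needs updating, not every customer's.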

    Read the article

  • New i7 is slower than old Core 2 Duo? Why? (BIOS programming)

    - by DrChase
    I've always wondered why the companies who make BIOSes either have terrible engineering psychologists or none at all. But without wasting your time further with random speculative questions, my real question is as follows: why does my new computer run slower than my old computer?

        Old computer: Intel Core 2 Duo CPU @ 3.0 GHz (stock); 4 GB OCZ DDR2 800 RAM; Wolfdale E8400 mb; nVidia GeForce 8600 GT
        New computer: Intel Core i7 920 @ ~3.2 GHz; 6 GB OCZ DDR3 1066 RAM; EVGA X58 SLI LE motherboard; nVidia GeForce GTX 275

    Vista x64 Home Premium on both. "Runs slower" is defined as: poorer FPS performance in the same games and applications; takes longer to start up; and general desktop usage (checking email, opening files, running exes) is noticeably slower. At first I thought I must not have set something up in the BIOS. But I have no idea how to set anything in the BIOS except for "Dummy O.C.", which brought me to ~3.2 GHz; beyond that I have no idea. I've been reading stuff about RAM timings and voltages and the like, but I really have no idea about that stuff. I'm a psychologist who has a basic understanding of building his own computers, not a computer scientist. Can someone give me some wisdom that might guide me to the reason my new computer is worse than my older one? I'm sorry if this is a bad question, or not appropriate for SO. I'm just pretty frustrated now, and you all have helped me in the past, so I figured I'd give it a shot. Thanks for your time.

    Read the article

  • Flat File Connection Manager in SSIS package shows "Valid File Name Must be Selected"

    - by Traples
    Results by environment (flat file location across the top, where SSIS runs down the side):

        (SSIS host)          | Samba Share | Windows Share
        ---------------------+-------------+--------------
        XP 32-bit            | Works       | Works
        2003 Server 32-bit   | Works       | Works
        Vista 64-bit         | ERROR       | Works
        Win 7 64-bit         | ERROR       | Works
        2008 Server 64-bit   | ERROR       | Works

    I created an SSIS package in VS 2008 that parses a flat file from a shared folder and puts the records into a SQL Server db. I recently installed Windows 7 and VS 2008 on a new workstation. When I import the package from TFS and open it, I get the error:

        Validation error. Parse and Import Catalog Flat File: MySSISPackage: The file name "\\shared\flatfile.txt" specified in the connection was not valid.

    When I open the Flat File Connection Manager Editor, there is an error stating "A valid file name must be selected". I can browse to and select the file from inside the editor, but I cannot change any properties or move away from the General tab because of this error. If I go back to my laptop (Windows XP), where the package was first created, there is no error. Both workstations are on the same domain, and I'm logging in using the same credentials. Any ideas as to why I would receive this error from one workstation and not another?

    UPDATE: If I take the .dtsx package from the working workstation and load it into SSIS on the server, I get the following errors when it tries to run:

        Error: The file name "\\shared\flatfile.txt" specified in the connection was not valid.
        Error: Connection "MySSISPackage" failed validation.
        Error: The file name property is not valid. The file name is a device or contains invalid characters.

    UPDATE 2: (a) The shared folder I'm trying to pull the flat file from is a Samba share on a Unix box. (b) If I access the file using SSIS on any 64-bit platform (Windows 7 64-bit, Vista 64-bit, Windows Server 2008), I get the error "A valid file name must be selected." (c) Accessing the file using SSIS from 32-bit environments (Windows XP 32-bit, Windows Server 2003 32-bit), there is no problem. (d) If I move the file to a shared folder on a Windows server, 64-bit SSIS recognizes the file.
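
    The pattern in the table (failures only where SSIS runs 64-bit against the Samba share) suggests trying the 32-bit tooling on those machines, since providers and path handling differ between bitnesses. Two hedged things to try: in BIDS, set the project's Run64BitRuntime property (project properties, under Debugging) to False; or execute with the 32-bit dtexec, whose default location on a 64-bit SQL Server 2008 box is typically (path may vary by install):

        rem run the package under the 32-bit runtime
        "C:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\DTExec.exe" /File MyPackage.dtsx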

    Read the article

  • CURL login by script to a Joomla website

    - by Paryeshakaya
    Hello, I need to log in, via a PHP script that uses cURL, to a Joomla website, in order to access a private page that needs to be processed, but notwithstanding several attempts, I always get a 403 error. I have done a similar thing with other websites and it worked. The script I use:

        $uname = "id";
        $upswd = "pswd";
        $url = "http://www.somewebpage.com";
        $agent = "'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.6) Gecko/20060728 Firefox/1.5.0.6'";
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
        curl_setopt($ch, CURLOPT_COOKIEJAR, './cookie.txt');
        curl_setopt($ch, CURLOPT_COOKIEFILE, './cookie.txt');
        curl_setopt($ch, CURLOPT_USERAGENT, $agent);
        curl_setopt($ch, CURLOPT_HEADER, TRUE);
        curl_setopt($ch, CURLOPT_REFERER, $url1);

        // POST fields
        $postfields = array();
        $postfields['username'] = urlencode($uname);
        $postfields['passwd'] = urlencode($upswd);
        $postfields['remember'] = 'yes';
        $postfields['option'] = 'login';
        $postfields['op2'] = 'login';
        $postfields['return'] = urlencode($url);
        $postfields['message'] = '0';
        $postfields['force_session'] = '1';
        $postfields['j3b00d36f4336137f4f03335c5eee6440'] = '1';

        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postfields);
        $ret = curl_exec($ch);

    The username and password, as well as the website URL I am using, are correct. This is the result I get:

        HTTP/1.1 100 Continue
        HTTP/1.1 403 Forbidden
        Date: Sat, 06 Feb 2010 08:29:36 GMT
        Server: Apache/2.2.13 (Unix) mod_ssl/2.2.13 OpenSSL/0.9.7a DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635
        X-Powered-By: PHP/5.2.10
        Connection: close
        Transfer-Encoding: chunked
        Content-Type: text/html

    Is there something wrong in my cURL request? Or is there a limit on Joomla remote logins? Thanks for suggestions!
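
    One frequent gotcha with Joomla logins (an educated guess, not confirmed in the question): the 32-character hidden field hard-coded above (j3b00d36f... = '1') is a per-session security token, so reusing a value captured earlier fails once the session changes. A sketch of fetching the login page first, reusing the $ch handle and cookie jar from above, and scraping the current token:

        // GET the login form first so the cookie jar holds a fresh session,
        // then pull the 32-hex-character hidden token out of the form HTML
        curl_setopt($ch, CURLOPT_POST, 0);
        $form = curl_exec($ch);
        if (preg_match('/name="([a-f0-9]{32})" value="1"/', $form, $m)) {
            $postfields[$m[1]] = '1'; // replaces the hard-coded token field
        }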

    Read the article

  • RSA decrypting data in C# (.NET 3.5) which was encrypted with openssl in php 5.3.2

    - by panny
    Maybe someone can clear this up for me; I have been surfing on this a while now.

    Step #1: Create a root certificate. Key generation on Unix:

        1) openssl req -x509 -nodes -days 3650 -newkey rsa:1024 -keyout privatekey.pem -out mycert.pem
        2) openssl rsa -in privatekey.pem -pubout -out publickey.pem
        3) openssl pkcs12 -export -out mycertprivatekey.pfx -in mycert.pem -inkey privatekey.pem -name "my certificate"

    Step #2: Does the root certificate work in PHP? YES. On the PHP side I read publickey.pem into PHP:

        $publicKey = "file://C:/publickey.pem";
        $privateKey = "file://C:/privatekey.pem";
        $plaintext = "123";
        openssl_public_encrypt($plaintext, $encrypted, $publicKey);
        $transfer = base64_encode($encrypted);
        openssl_private_decrypt($encrypted, $decrypted, $privateKey);
        echo $decrypted; // "123"

    or:

        $server_public_key = openssl_pkey_get_public(file_get_contents("C:\publickey.pem"));
        // rsa encrypt
        openssl_public_encrypt("123", $encrypted, $server_public_key);

    and privatekey.pem to check that it works:

        openssl_private_decrypt($encrypted, $decrypted, openssl_get_privatekey(file_get_contents("C:\privatekey.pem")));
        echo $decrypted; // "123"

    Conclusion: encryption/decryption works fine on the PHP side with these OpenSSL root certificate files.

    Step #3: Does the root certificate work in .NET? YES. In the same manner I read the keys into a .NET C# console program:

        X509Certificate2 myCert2 = new X509Certificate2();
        RSACryptoServiceProvider rsa = new RSACryptoServiceProvider();
        try {
            myCert2 = new X509Certificate2(@"C:\mycertprivatekey.pfx");
            rsa = (RSACryptoServiceProvider)myCert2.PrivateKey;
        } catch (Exception e) { }
        byte[] test = { Convert.ToByte("123") };
        string t = Convert.ToString(rsa.Decrypt(rsa.Encrypt(test, false), false));

    Conclusion: encryption/decryption works fine on the C# side with these OpenSSL root certificate files.

    Step #4: Encrypt in PHP and decrypt in .NET? NO! On the PHP side:

        $onett = "123";
        // ...
        openssl_public_encrypt($onett, $encrypted, $server_public_key);
        $onettbase64 = base64_encode($encrypted);

    Copy-paste $onettbase64 ("LkU2GOCy4lqwY4vtPI1JcsxgDgS2t05E6kYghuXjrQe7hSsYXETGdlhzEBlp+qhxzTXV3pw+AS5bEg9CPxqHus8fXHOnXYqsd2HL20QSaz+FjZee6Kvva0cGhWkFdWL+ANDSOWRWo/OMhm7JVqU3P/44c3dLA1eu2UsoDI26OMw=") into the C# program:

        byte[] transfered_onettbase64 = Convert.FromBase64String("LkU2GOCy4lqwY4vtPI1JcsxgDgS2t05E6kYghuXjrQe7hSsYXETGdlhzEBlp+qhxzTXV3pw+AS5bEg9CPxqHus8fXHOnXYqsd2HL20QSaz+FjZee6Kvva0cGhWkFdWL+ANDSOWRWo/OMhm7JVqU3P/44c3dLA1eu2UsoDI26OMw=");
        string k = Convert.ToString(rsa.Decrypt(transfered_onettbase64, false));
        // Bad Data exception == exception while decrypting!!!

    Any ideas?
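
    Two observations worth checking (mine, not from the post). First, Convert.ToByte("123") parses the string into the single byte value 123, so step #3 never round-trips the same plaintext PHP sends. Second, rsa.Decrypt returns raw bytes that still need decoding as text. The padding itself should already agree: openssl_public_encrypt defaults to PKCS#1 v1.5, which is what Decrypt(..., false) expects. A sketch of the receiving side (cipherB64 is a placeholder for the transferred string), assuming the .pfx really holds the private half of the key PHP encrypted with:

        byte[] cipher = Convert.FromBase64String(cipherB64);
        byte[] plain = rsa.Decrypt(cipher, false); // false => PKCS#1 v1.5, matching PHP's default
        string text = System.Text.Encoding.UTF8.GetString(plain); // expect "123"

    If this still throws Bad Data, the usual culprit is that publickey.pem and the key inside mycertprivatekey.pfx are not halves of the same key pair.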

    Read the article

  • Which functions in the C standard library commonly encourage bad practice?

    - by Ninefingers
    Hello all, this is inspired by this question and the comments on one particular answer, in that I learnt that strncpy is not a very safe string-handling function in C and that it pads zeros until it reaches n, something I was unaware of. Specifically, to quote R..:

        strncpy does not null-terminate, and does null-pad the whole remainder of the destination buffer, which is a huge waste of time. You can work around the former by adding your own null padding, but not the latter. It was never intended for use as a "safe string handling" function, but for working with fixed-size fields in Unix directory tables and database files. snprintf(dest, n, "%s", src) is the only correct "safe strcpy" in standard C, but it's likely to be a lot slower. By the way, truncation in itself can be a major bug and in some cases might lead to privilege elevation or DoS, so throwing "safe" string functions that truncate their output at a problem is not a way to make it "safe" or "secure". Instead, you should ensure that the destination buffer is the right size and simply use strcpy (or better yet, memcpy if you already know the source string length).

    And from Jonathan Leffler:

        Note that strncat() is even more confusing in its interface than strncpy() - what exactly is that length argument, again? It isn't what you'd expect based on what you supply strncpy() etc - so it is more error prone even than strncpy(). For copying strings around, I'm increasingly of the opinion that there is a strong argument that you only need memmove() because you always know all the sizes ahead of time and make sure there's enough space ahead of time. Use memmove() in preference to any of strcpy(), strcat(), strncpy(), strncat(), memcpy().

    So, I'm clearly a little rusty on the C standard library. Therefore, I'd like to pose the question: what C standard library functions are used inappropriately, or in ways that may lead to security problems, code defects, or inefficiencies? In the interests of objectivity, I have a number of criteria for an answer:

        - Please, if you can, cite design reasons behind the function in question, i.e. its intended purpose.
        - Please highlight the misuse to which the code is currently put.
        - Please state why that misuse may lead to a problem. I know that should be obvious, but it prevents soft answers.

    Please avoid debates over naming conventions of functions (except where this unequivocally causes confusion), and "I prefer x over y" arguments; preference is OK, we all have preferences, but I'm interested in actual unexpected side effects and how to guard against them. As this is likely to be considered subjective and has no definite answer, I'm flagging it for community wiki straight away. I am also working as per C99.
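
    To make the quoted strncpy pitfall concrete, here is a short, self-contained sketch (my illustration, not from the question) contrasting it with the snprintf idiom R.. recommends:

        #include <stdio.h>
        #include <string.h>

        int main(void) {
            char dst[8];
            /* strncpy fills all 8 bytes of dst and never writes a '\0',
               because the source is longer than the buffer */
            strncpy(dst, "twelve chars", sizeof dst);
            /* dst is NOT a valid C string here; printing it with %s
               would read past the end of the buffer */
            snprintf(dst, sizeof dst, "%s", "twelve chars");
            /* snprintf truncates to 7 chars + '\0': well-formed, if shorter */
            printf("%s\n", dst); /* prints "twelve " */
            return 0;
        }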

    Read the article

  • Project Euler #15

    - by Aistina
    Hey everyone, last night I was trying to solve challenge #15 from Project Euler: starting in the top left corner of a 2×2 grid, there are 6 routes (without backtracking) to the bottom right corner. How many routes are there through a 20×20 grid? I figured this shouldn't be so hard, so I wrote a basic recursive function:

        const int gridSize = 20;

        // call with progress(0, 0)
        static int progress(int x, int y)
        {
            int i = 0;
            if (x < gridSize)
                i += progress(x + 1, y);
            if (y < gridSize)
                i += progress(x, y + 1);
            if (x == gridSize && y == gridSize)
                return 1;
            return i;
        }

    I verified that it worked for smaller grids such as 2×2 or 3×3, and then set it to run for a 20×20 grid. Imagine my surprise when, 5 hours later, the program was still happily crunching the numbers, and only about 80% done (based on examining its current position/route in the grid). Clearly I'm going about this the wrong way. How would you solve this problem? I'm thinking it should be solved using an equation rather than a method like mine, but that's unfortunately not a strong side of mine.

    Update: I now have a working version. Basically, it caches results obtained before when an n×m block still remains to be traversed. Here is the code along with some comments:

        // the size of our grid
        static int gridSize = 20;

        // the amount of paths available for a "NxM" block, e.g. "2x2" => 4
        static Dictionary<string, long> pathsByBlock = new Dictionary<string, long>();

        // calculate the surface of the block to the finish line
        static long calcsurface(long x, long y)
        {
            return (gridSize - x) * (gridSize - y);
        }

        // call using progress (0, 0)
        static long progress(long x, long y)
        {
            // first calculate the surface of the block remaining
            long surface = calcsurface(x, y);
            long i = 0;

            // zero surface means only 1 path remains
            // (we either go only right, or only down)
            if (surface == 0)
                return 1;

            // create a textual representation of the remaining
            // block, for use in the dictionary
            string block = (gridSize - x) + "x" + (gridSize - y);

            // if a same block has not been processed before
            if (!pathsByBlock.ContainsKey(block))
            {
                // calculate it in the right direction
                if (x < gridSize)
                    i += progress(x + 1, y);

                // and in the down direction
                if (y < gridSize)
                    i += progress(x, y + 1);

                // and cache the result!
                pathsByBlock[block] = i;
            }

            // self-explanatory :)
            return pathsByBlock[block];
        }

    Calling it 20 times, for grids with size 1×1 through 20×20, produces the following output:

        There are 2 paths in a 1 sized grid 0,0110006 seconds
        There are 6 paths in a 2 sized grid 0,0030002 seconds
        There are 20 paths in a 3 sized grid 0 seconds
        There are 70 paths in a 4 sized grid 0 seconds
        There are 252 paths in a 5 sized grid 0 seconds
        There are 924 paths in a 6 sized grid 0 seconds
        There are 3432 paths in a 7 sized grid 0 seconds
        There are 12870 paths in a 8 sized grid 0,001 seconds
        There are 48620 paths in a 9 sized grid 0,0010001 seconds
        There are 184756 paths in a 10 sized grid 0,001 seconds
        There are 705432 paths in a 11 sized grid 0 seconds
        There are 2704156 paths in a 12 sized grid 0 seconds
        There are 10400600 paths in a 13 sized grid 0,001 seconds
        There are 40116600 paths in a 14 sized grid 0 seconds
        There are 155117520 paths in a 15 sized grid 0 seconds
        There are 601080390 paths in a 16 sized grid 0,0010001 seconds
        There are 2333606220 paths in a 17 sized grid 0,001 seconds
        There are 9075135300 paths in a 18 sized grid 0,001 seconds
        There are 35345263800 paths in a 19 sized grid 0,001 seconds
        There are 137846528820 paths in a 20 sized grid 0,0010001 seconds
        0,0390022 seconds in total

    I'm accepting danben's answer, because his helped me find this solution the most. But upvotes also to Tim Goodman and Agos :)

    Bonus update: after reading Eric Lippert's answer, I took another look and rewrote it somewhat. The basic idea is still the same, but the caching part has been taken out and put in a separate function, like in Eric's example. The result is some much more elegant-looking code:

        // the size of our grid
        const int gridSize = 20;

        // magic.
        static Func<A1, A2, R> Memoize<A1, A2, R>(this Func<A1, A2, R> f)
        {
            // Return a function which is f with caching.
            var dictionary = new Dictionary<string, R>();
            return (A1 a1, A2 a2) =>
            {
                R r;
                string key = a1 + "x" + a2;
                if (!dictionary.TryGetValue(key, out r))
                {
                    // not in cache yet
                    r = f(a1, a2);
                    dictionary.Add(key, r);
                }
                return r;
            };
        }

        // calculate the surface of the block to the finish line
        static long calcsurface(long x, long y)
        {
            return (gridSize - x) * (gridSize - y);
        }

        // call using progress (0, 0)
        static Func<long, long, long> progress = ((Func<long, long, long>)((long x, long y) =>
        {
            // first calculate the surface of the block remaining
            long surface = calcsurface(x, y);
            long i = 0;

            // zero surface means only 1 path remains
            // (we either go only right, or only down)
            if (surface == 0)
                return 1;

            // calculate it in the right direction
            if (x < gridSize)
                i += progress(x + 1, y);

            // and in the down direction
            if (y < gridSize)
                i += progress(x, y + 1);

            // self-explanatory :)
            return i;
        })).Memoize();

    By the way, I couldn't think of a better way to use the two arguments as a key for the dictionary. I googled around a bit, and it seems this is a common solution. Oh well.
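
    For completeness (standard combinatorics, not something stated in the thread): every route through an n×n grid is a sequence of exactly n moves right and n moves down, so the count is the central binomial coefficient C(2n, n); for n = 20 that is 137846528820, matching the final line of the output above. A closed-form sketch in C#:

        // C(2n, n), computed iteratively so each intermediate stays integral
        // and fits in a long for n = 20
        static long Paths(int n)
        {
            long result = 1;
            for (int i = 1; i <= n; i++)
                result = result * (n + i) / i; // result == C(n + i, i) after each step
            return result;
        }
        // Paths(20) == 137846528820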

    Read the article

  • Wordpress 403/404 Errors: You don't have permission to access /wp-admin/themes.php on this server

    - by Glen
    Some background: I set up six blogs this week, all using WordPress 2.9.2, installed with Fantastico on a Baby Croc plan with HostGator. I used the same theme (Heatmap 2.5.4) and plugins for each blog. They were all up and running, no issues at all. I went to create a new blog this morning, using the same setup, and when I try to change the theme settings, I get the following error:

        Forbidden
        You don't have permission to access /wp-admin/themes.php on this server.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
        Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8n DAV/2 mod_fcgid/2.3.5 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 Server at http://www.mydomain.com Port 80

    I tried uninstalling WP and doing a clean install; same issue with the clean installation. So I went back and checked the six other blogs I had set up over the last week or so, and they are also now giving me 403 or 404 errors when trying to change theme settings, and every time there's an error it points to either themes.php or functions.php. At this point I'm at my wits' end trying to figure out what the problem is. HostGator support looked at it and thought maybe it was a permissions issue, but they reset those and I'm still having the problem. At first I thought the problem might be related to a plugin I recently installed on the previous six blogs that morning (ByREV Fix Missed Schedule Plugin), to deal with a missed-schedule bug in WP 2.9.2, and that maybe that had mucked things up. But then I checked a blog I built months ago, also using the same theme and plugins, and now it too is encountering the same problem. Any ideas? I tried deleting my .htaccess, uploading a blank one, and uploading one with this snippet I found on the HostGator forum:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    Nothing has worked; I still get 403 or 404 errors every time. Everything was working perfectly yesterday, so I know this setup DOES WORK; I've just mucked something up somewhere and I'm clueless what it is. I read a related thread here and tried chmodding the wp-content folder to 0755, and I'm still having the issue. Any thoughts? Thanks!
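
    One possibility consistent with sudden 403s on admin POSTs across otherwise-working blogs (a guess, not a diagnosis): a host-side mod_security rule tripping on the theme-options form data. On servers still honoring the old mod_security 1.x directives, the classic test was an .htaccess block like the sketch below; on most shared hosts, though, the host has to whitelist the offending rule for you:

        # mod_security 1.x syntax only; newer 2.x installs ignore these directives
        <IfModule mod_security.c>
            SecFilterEngine Off
            SecFilterScanPOST Off
        </IfModule>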

    Read the article

  • Problem with template inheritance

    - by amiad
    I'm trying to understand why I get an error on this code (the error is under the g++ Unix compiler; VS compiles it OK):

        template<class T>
        class A {
        public:
            T t;
        public:
            A(const T& t1) : t(t1) {}
            virtual void Print() const { cout<<*this<<endl; }
            friend ostream& operator<<(ostream& out, const A<T>& a) {
                out<<"I'm "<<typeid(a).name()<<endl;
                out<<"I hold "<<typeid(a.t).name()<<endl;
                out<<"The inner value is: "<<a.t<<endl;
                return out;
            }
        };

        template<class T>
        class B : public A<T> {
        public:
            B(const T& t1) : A<T>(t1) {}
            const T& get() const { return t; }
        };

        int main() {
            A<int> a(9);
            a.Print();
            B<A<int> > b(a);
            b.Print();
            (b.get()).Print();
            return 0;
        }

    This code gives the following error:

        main.cpp: In member function 'const T& B<T>::get() const':
        main.cpp:23: error: 't' was not declared in this scope

    It did compile when I changed the code of B to this:

        template<class T>
        class B : public A<T> {
        public:
            B(const T& t1) : A<T>(t1) {}
            const T& get() const { return A<T>::t; }
        };

    I just can't understand what the problem is with the first code... It doesn't make sense that I really need to write "A<T>::" every time...
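
    For context (standard two-phase name lookup, which g++ enforces and older VS did not): names in a base class that depends on a template parameter are not searched by unqualified lookup. Besides qualifying with A<T>::, two equivalent standard-conforming alternatives:

        template<class T>
        class B : public A<T> {
        public:
            using A<T>::t;                            // option 1: bring the name in once
            B(const T& t1) : A<T>(t1) {}
            const T& get() const { return this->t; }  // option 2: qualify with this->
        };

    Either form tells the compiler that t is a dependent member, so lookup is deferred until the template is instantiated.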

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarrassingly parallel problems typically consist of three basic parts:

        1. Read input data (from a file, database, TCP connection, etc.).
        2. Run calculations on the input data, where each calculation is independent of any other calculation.
        3. Write the results of the calculations (to a file, database, TCP connection, etc.).

    We can parallelize the program in two dimensions:

        - Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter.
        - Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out.

    This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel:

        1. Process the input file into raw data (lists/iterables of integers)
        2. Calculate the sums of the data, in parallel
        3. Output the sums

    Below is a traditional, single-process-bound Python program which solves these three tasks:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # basicsums.py
        """A program that reads integer values from a CSV file and writes out
        their sums to another CSV file.
        """

        import csv
        import optparse
        import sys

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            return cli_parser

        def parse_input_csv(csvfile):
            """Parses the input CSV and yields tuples with the index of the row
            as the first element, and the integers of the row as the second
            element. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.reader` instance

            """
            for i, row in enumerate(csvfile):
                row = [int(entry) for entry in row]
                yield i, row

        def sum_rows(rows):
            """Yields a tuple with the index of each input list of integers
            as the first element, and the sum of the list of integers as the
            second element. The index is zero-index based.

            :Parameters:
            - `rows`: an iterable of tuples, with the index of the original row
              as the first element, and a list of integers as the second element

            """
            for i, row in rows:
                yield i, sum(row)

        def write_results(csvfile, results):
            """Writes a series of results to an outfile, where the first column
            is the index of the original row of data, and the second column is
            the result of the calculation. The index is zero-index based.

            :Parameters:
            - `csvfile`: a `csv.writer` instance to which to write results
            - `results`: an iterable of tuples, with the index (zero-based) of
              the original row as the first element, and the calculated result
              from that row as the second element

            """
            for result_row in results:
                csvfile.writerow(result_row)

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)
            # gets an iterable of rows that's not yet evaluated
            input_rows = parse_input_csv(in_csvfile)
            # sends the rows iterable to sum_rows() for results iterable, but
            # still not evaluated
            result_rows = sum_rows(input_rows)
            # finally evaluation takes place as a chain in write_results()
            write_results(out_csvfile, result_rows)
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, which needs to be fleshed out to address the parts in the comments:

        #!/usr/bin/env python
        # -*- coding: UTF-8 -*-
        # multiproc_sums.py
        """A program that reads integer values from a CSV file and writes out
        their sums to another CSV file, using multiple processes if desired.
        """

        import csv
        import multiprocessing
        import optparse
        import sys

        NUM_PROCS = multiprocessing.cpu_count()

        def make_cli_parser():
            """Make the command line interface parser."""
            usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV",
                    __doc__,
                    """
        ARGUMENTS:
            INPUT_CSV: an input CSV file with rows of numbers
            OUTPUT_CSV: an output file that will contain the sums\
        """])
            cli_parser = optparse.OptionParser(usage)
            cli_parser.add_option('-n', '--numprocs', type='int',
                    default=NUM_PROCS,
                    help="Number of processes to launch [DEFAULT: %default]")
            return cli_parser

        def main(argv):
            cli_parser = make_cli_parser()
            opts, args = cli_parser.parse_args(argv)
            if len(args) != 2:
                cli_parser.error("Please provide an input file and output file.")
            infile = open(args[0])
            in_csvfile = csv.reader(infile)
            outfile = open(args[1], 'w')
            out_csvfile = csv.writer(outfile)

            # Parse the input file and add the parsed data to a queue for
            # processing, possibly chunking to decrease communication between
            # processes.

            # Process the parsed data as soon as any (chunks) appear on the
            # queue, using as many processes as allotted by the user
            # (opts.numprocs); place results on a queue for output.
            #
            # Terminate processes when the parser stops putting data in the
            # input queue.

            # Write the results to disk as soon as they appear on the output
            # queue.

            # Ensure all child processes have terminated.

            # Clean up files.
            infile.close()
            outfile.close()

        if __name__ == '__main__':
            main(sys.argv[1:])

    These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem; bonus points for addressing any or all of them:

        - Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read?
        - Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results?
        - Should I use a process pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes too? apply_async()? map_async()? imap()? imap_unordered()?
        - Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
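
    For the simplest of the asked-about variants (a sketch, not a full answer: the main process does parts 1 and 3 itself and a Pool handles part 2, reusing parse_input_csv from the single-process version), something like the following could replace the commented block in the skeleton. Note the worker must be a module-level function so it can be pickled:

        def sum_row(indexed_row):
            """Part 2 worker: (index, row) -> (index, sum)."""
            i, row = indexed_row
            return i, sum(row)

        # inside main(), instead of the comment block:
        pool = multiprocessing.Pool(opts.numprocs)
        # imap() preserves input order and yields results lazily, so the main
        # process reads, dispatches, and writes without explicit queues
        for result in pool.imap(sum_row, parse_input_csv(in_csvfile), chunksize=64):
            out_csvfile.writerow(result)
        pool.close()
        pool.join()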

    Read the article

  • Asynchronous subprocess on Windows

    - by Stigma
    First of all, the overall problem I am solving is a bit more complicated than I am showing here, so please do not tell me "use threads with blocking", as it would not solve my actual situation without a fair, FAIR bit of rewriting and refactoring. I have several applications which are not mine to modify, which take data from stdin and poop it out on stdout after doing their magic. My task is to chain several of these programs. The problem is, sometimes they choke, and as such I need to track their progress, which is output on stderr:

        pA = subprocess.Popen(CommandA, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        # ... some more processes make up the chain, but that is irrelevant to the problem
        pB = subprocess.Popen(CommandB, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, stdin=pA.stdout)

    Now, reading directly through pA.stdout.readline() and pB.stdout.readline(), or the plain read() functions, is a blocking matter. Since different applications output at different paces and in different formats, blocking is not an option. (And as I wrote above, threading is not an option except as a last, last resort.) pA.communicate() is deadlock-safe, but since I need the information live, that is not an option either. Thus Google brought me to this asynchronous subprocess snippet on ActiveState. All good at first, until I implemented it. Comparing the cmd.exe output of pA.exe | pB.exe, ignoring the fact that both output to the same window, making for a mess, I see very instantaneous updates. However, when I implement the same thing using the above snippet and the read_some() function declared there, it takes over 10 seconds to report updates from a single pipe; but when it does, it has updates leading all the way up to 40% progress, for example. Thus I did some more research and saw numerous subjects concerning PeekNamedPipe, anonymous handles, and returning 0 bytes available even though there is information available in the pipe. As the subject has proven quite a bit beyond my expertise to fix or code around, I come to Stack Overflow to look for guidance. :) My platform is W7 64-bit with Python 2.6; the applications are 32-bit, in case it matters, and compatibility with Unix is not a concern. I can even deal with a full ctypes or pywin32 solution that subverts subprocess entirely, if it is the only solution, as long as I can read from every stderr pipe asynchronously, with immediate performance and no deadlocks. :)
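
    For reference, the non-blocking poll the post is circling around looks roughly like this with pywin32 (a sketch under the stated Python 2.6 / pywin32 assumption; pA is the Popen object from above):

        import msvcrt
        import win32file
        import win32pipe

        def read_available(pipe):
            """Return whatever bytes are currently in pipe, without blocking."""
            handle = msvcrt.get_osfhandle(pipe.fileno())
            # Peek first: the second element is the number of bytes waiting
            _, avail, _ = win32pipe.PeekNamedPipe(handle, 0)
            if avail == 0:
                return ''
            _, data = win32file.ReadFile(handle, avail)
            return data

        # e.g. poll each process's stderr in the main loop:
        chunk = read_available(pA.stderr)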

    Read the article

  • Using perl to parse a file and insert specific values into a database

    - by Sean
    Disclaimer: I'm a newbie at scripting in Perl; this is partially a learning exercise (but still a project for work). Also, I have a much stronger grasp of shell scripting, so my examples will likely be formatted with that mindset (but I would like to create them in Perl). Sorry in advance for my verbosity; I want to make sure I am at least marginally clear in getting my point across. I have a text file (a reference guide) that is a Word document converted to text, then swapped from Windows to Unix format in Notepad++. The file is uniform, in that each section of the file has the same fields/formatting/tables. What I plan to do, in a basic way, is grab each section, keyed by unique batch job names, and place all of the values into a database (or maybe just an Excel file) so all the fields can be searched/edited for each job much more easily than in the Word file, and possibly create a web interface later on. So what I want to do is grab each section by doing something like:

        sed -n '/job_name_1_regex/,/job_name_2_regex/p' file.txt

    How would this be formatted within a Perl script? (Grab the section in total, then break it down further from there.) To read the file in the script I have open FORMAT_FILE, 'test_format.txt'; and then use foreach $line (<FORMAT_FILE>) to parse the file line by line. Is there a better way? My next problem is that, since I converted from a Word doc with tables, a table which looks like:

        Table Heading 1      Table Heading 2
        Heading 1/Value 1    Heading 2/Value 1
        Heading 1/Value 2    Heading 2/Value 2

    looks like this in the text file:

        Table Heading 1 Table Heading 2Heading 1/Value 1Heading 1/Value 2Heading 2/Value 1Heading 2/Value 2

    So I want to have "Heading 1" and "Heading 2" as column names and then put the respective values there. I just am not sure how to get the values in relation to the headings from the text file. The value of Heading 1 will always be at the line number of Heading 1 plus 2 (Heading 1, Heading 2, values for Heading 1). I know this can be done in awk/sed pretty easily; I'm just not sure how to address it inside a Perl script. After I have all the right values and such, linking it up to a database may be an issue as well; I haven't started looking at the way Perl interacts with DBs yet. Sorry if this is a bit scatterbrained... it's still not fully formed in my head.
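
    As a starting point for the "grab each section" part, Perl's range (flip-flop) operator mirrors the sed idiom directly (a sketch; the job-name patterns are placeholders for whatever actually delimits a section, and DBI is the standard module for the later database step):

        #!/usr/bin/perl
        use strict;
        use warnings;

        open my $fh, '<', 'test_format.txt' or die "open: $!";
        my @section;
        while (my $line = <$fh>) {
            # true from the first match until the second, like sed's /a/,/b/
            if ($line =~ /job_name_1_regex/ .. $line =~ /job_name_2_regex/) {
                push @section, $line;
            }
        }
        close $fh;
        # @section now holds one chunk, ready to be broken down further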

    Read the article

  • Stack Overflow Exploit in C

    - by Fernando Gonzalez
    Hey there guys, the question is actually about stack overflows in C. I have an assignment that I cannot get done for the life of me; I've looked at everything in gdb and I just can't figure it out. The question concerns the following program:

        int i, n;

        void confused() {
            printf("who called me");
            exit(0);
        }

        void shell_call(char *c) {
            printf(" ***Now calling \"%s\" shell command *** \n", c);
            system(c);
            exit(0);
        }

        void victim_func() {
            int a[4];
            printf("[8]:%x\n", &a[8]);
            printf("Enter n: ");
            scanf("%d", &n);
            printf("Enter %d HEX Values \n", n);
            for (i = 0; i < n; i++)
                scanf("%x", &a[i]);
            printf("Done reading junk numbers\n");
        }

        int main() {
            printf("ls=736c --- ps = 7370 --- cal = 6c6163\n");
            printf("location of confused %x \n", confused);
            printf("location of shell_call %x \n", shell_call);
            victim_func();
            printf("Done, thank you\n");
        }

    OK, so I managed to get the first question correct, which is to arbitrarily call one of the two functions not explicitly called in the main path. By the way, this has to be done while running the program, without any modifications. I did this by running the program, setting n to 7, which gets me to the function pointer of the victim_func frame; I write a[7] with the memory address of confused or shell_call, and it works. (I have a 64-bit machine; that's why I have to get to 7, since the saved base pointer is two ints wide instead of one.) My question is the following: how can I control which argument gets passed to the shell_call function? I.e., how do I write a string to char *c? The whole point is executing Unix commands like "ps" etc. by running only the program. I figured writing the saved base pointer with the hex representation of "ps" and setting the arg list of shell_call to that, but that didn't work. I also tried inputting argv arguments and setting the arg list of shell_call to the arg list of main, but that didn't work either. I think the second version should work, but I believe I'm not setting up the arg list of the new stack frame correctly (I did it by writing a[8] to 0, since it's the first part of the function pointer, and writing a[9]=736c and a[10]=0000, but that's probably not right, since those are the parameters of victim_func). So how do I access the parameters of shell_call?
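
    Before poking further, it can help to see the frame layout directly rather than inferring it from a[8] (a sketch of a gdb session; register names assume the 64-bit build described):

        gdb ./victim
        (gdb) break victim_func
        (gdb) run
        (gdb) info frame            # shows where the saved rip and rbp live
        (gdb) print &a[0]
        (gdb) x/16xg $rsp           # dump the stack: find a[], saved rbp, return address

    Comparing &a[0] against the saved-return-address slot shows exactly which index of a[] overwrites what. Also note that on x86-64 the first function argument is passed in the rdi register rather than on the stack, which may be why stack-placed argument bytes never reach shell_call.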

    Read the article

  • Problem connecting to postgres with Kohana 3 database module on OS X Snow Leopard

    - by Bart Gottschalk
    Environment: Mac OS X 10.6 Snow Leopard, PHP 5.3, Kohana 3.0.4.

    When I try to configure and use a connection to a PostgreSQL database on localhost, I get the following error:

        ErrorException [ Warning ]: mysql_connect(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock)

    Here is the configuration of the database in /modules/database/config/database.php (note the third instance, named 'pgsqltest'):

        return array(
            'default' => array(
                'type' => 'mysql',
                'connection' => array(
                    /**
                     * The following options are available for MySQL:
                     *
                     * string   hostname
                     * string   username
                     * string   password
                     * boolean  persistent
                     * string   database
                     *
                     * Ports and sockets may be appended to the hostname.
                     */
                    'hostname' => 'localhost',
                    'username' => FALSE,
                    'password' => FALSE,
                    'persistent' => FALSE,
                    'database' => 'kohana',
                ),
                'table_prefix' => '',
                'charset' => 'utf8',
                'caching' => FALSE,
                'profiling' => TRUE,
            ),
            'alternate' => array(
                'type' => 'pdo',
                'connection' => array(
                    /**
                     * The following options are available for PDO:
                     *
                     * string   dsn
                     * string   username
                     * string   password
                     * boolean  persistent
                     * string   identifier
                     */
                    'dsn' => 'mysql:host=localhost;dbname=kohana',
                    'username' => 'root',
                    'password' => 'r00tdb',
                    'persistent' => FALSE,
                ),
                'table_prefix' => '',
                'charset' => 'utf8',
                'caching' => FALSE,
                'profiling' => TRUE,
            ),
            'pgsqltest' => array(
                'type' => 'pdo',
                'connection' => array(
                    /**
                     * The following options are available for PDO:
                     *
                     * string   dsn
                     * string   username
                     * string   password
                     * boolean  persistent
                     * string   identifier
                     */
                    'dsn' => 'mysql:host=localhost;dbname=pgsqltest',
                    'username' => 'postgres',
                    'password' => 'dev1234',
                    'persistent' => FALSE,
                ),
                'table_prefix' => '',
                'charset' => 'utf8',
                'caching' => FALSE,
                'profiling' => TRUE,
            ),
        );

    And here is the code to create the database instance, create a query, and execute it:

        $pgsqltest_db = Database::instance('pgsqltest');
        $query = DB::query(Database::SELECT, 'SELECT * FROM test')->execute();

    I'm continuing to research a solution for this error, but thought I'd ask to see if someone else has already found one. Any ideas are welcome. One other note: I know my build of PHP can access this PostgreSQL db, since I'm able to manage the db using phpPgAdmin. But I have yet to determine what phpPgAdmin is doing differently to connect to the db than what Kohana 3 is attempting. Bart
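
    One detail that stands out in the configuration above (my reading of it; worth verifying): the 'pgsqltest' instance is typed 'pdo', but its DSN still begins with mysql:, so PDO will load the MySQL driver no matter what database the name suggests. A PostgreSQL-style DSN for that instance would look like the sketch below (it also assumes the pdo_pgsql extension is compiled into this PHP build):

        'pgsqltest' => array(
            'type' => 'pdo',
            'connection' => array(
                // pgsql: selects PDO's PostgreSQL driver
                'dsn' => 'pgsql:host=localhost;dbname=pgsqltest',
                'username' => 'postgres',
                'password' => 'dev1234',
                'persistent' => FALSE,
            ),
            // ... rest as above
        ),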

    Read the article

  • ImageMagick on Mac OSX Snow Leopard. Is there any way to get it to work?

    - by ?????
    It seems that I have more trouble getting standard Unix things to run on Snow Leopard than on any other platform, including Windows with Cygwin. For the past couple of days, I've been trying to get ImageMagick to run on Snow Leopard. The most obvious way, MacPorts, fails:

        tppllc-Mac-Pro:ImageMagick-sl swirsky$ sudo port install imagemagick
        --->  Computing dependencies for p5-locale-gettext
        --->  Configuring p5-locale-gettext
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_perl_p5-locale-gettext/work/gettext-1.05" && /opt/local/bin/perl Makefile.PL INSTALLDIRS=vendor " returned error 2
        Command output: checking for gettext... no
        checking for gettext in -I/opt/local/include -arch i386 -L/opt/local/lib -lintl...gettext function not found. Please install libintl at Makefile.PL line 18.
        no
        Error: Unable to upgrade port: 1
        Error: Unable to execute port: upgrade xorg-libXt failed
        Before reporting a bug, first run the command again with the -d flag to get complete output.

    Not wanting to spend another two days figuring out why my libintl doesn't have a "gettext" function, I tried a different route: the script mentioned at http://github.com/masterkain/ImageMagick-sl, which downloads and installs ImageMagick independently of MacPorts issues. It downloads everything and compiles fine, but fails when I try to run it:

        tppllc-Mac-Pro:ImageMagick-sl swirsky$ /usr/local/bin/convert
        dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /opt/local/lib/libfontconfig.1.dylib
          Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap

    So now I'm two steps away from ImageMagick, trying to get a newer libiconv on my machine. I downloaded the latest libiconv, compiled and built it, and put the resulting library in /opt/local/lib. I still get the same error message:

        tppllc-Mac-Pro:.libs swirsky$ sudo mv libiconv.2.dylib /opt/local/lib/libiconv.2.dylib
        tppllc-Mac-Pro:.libs swirsky$ convert
        dyld: Library not loaded: /opt/local/lib/libiconv.2.dylib
          Referenced from: /opt/local/lib/libfontconfig.1.dylib
          Reason: Incompatible library version: libfontconfig.1.dylib requires version 8.0.0 or later, but libiconv.2.dylib provides version 7.0.0
        Trace/BPT trap

    So here's my question: is it possible to get ImageMagick to run on OS X Snow Leopard? Are there any binary distributions with static libraries baked in, so I don't have to worry about these issues?
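
    One diagnostic step that narrows this down (a sketch; paths match those in the error messages): dyld compares the compatibility version baked into the dylib against what the consumer was linked with, and otool can show both sides:

        # what libfontconfig was linked against, and which iconv it expects
        otool -L /opt/local/lib/libfontconfig.1.dylib | grep iconv
        # what the installed libiconv actually advertises
        otool -l /opt/local/lib/libiconv.2.dylib | grep -A 3 LC_ID_DYLIB

    If the self-built libiconv still reports compatibility version 7.0.0, the fix is rebuilding from a newer release (or reinstalling MacPorts' own libiconv); copying a dylib into /opt/local/lib behind MacPorts' back leaves its registry and version metadata out of sync.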

    Read the article

  • Git/Mercurial (hg) opinion

    - by Richard
    First, let me say I'm not a professional programmer, but an engineer who had a need for it and had to learn. I was always working alone, so it was just me and my seven split personalities... and we worked okay as a team :) Most of my stuff is done in C/Fortran/Matlab, and so far I've been learning git to manage it all. However, although I've had no unsolvable problems with it, I've never been "that" happy with it... for everything I cannot do, I have to look it up in a book. And for some time now I've been hearing a lot of good things about hg. Now, a colleague of mine will have to work with me on a project (I almost feel sorry for him), and he's started learning hg (he says he likes it more), and I'm considering the switch myself. We work almost exclusively on the Windows platform (although I manage relatively OK using Unix tools and things that come from that part of the world). So, I was wondering: in the described scenario, what problems could I expect with the switch? I've heard that hg is rather more user-friendly towards Windows users regarding the user interfaces. How does it handle repositories? Does it create them the same way as git does (just one subdirectory in the working directory), and can I just copy the whole project directory (including the repo) and carry it somewhere with no extra thinking? (I really liked that when I was choosing between git and svn.) Are there any good books on it that you can recommend (something like Pro Git, only for hg)? What are good ways to integrate hg into Visual Studio/GVim for Windows, or into Windows Explorer, so I can work relatively easily? (I would like to avoid using the command line for everything regarding it, like in git shell.) Is there something else I should be aware of? (Please, on this, don't point me to other questions... they just give me a ton of info, and I'm not sure what I should take as important and what to disregard.) I'm trying to cut some time, since I cannot spend all that time relearning hg like I did for git. I've also heard git is a C project, while Mercurial is Python... is there any noticeable difference in speed? git was pretty speedy... will I encounter some waiting while working? Note: all my projects are of, let's say, middle size... mostly numerical simulations... 10-15000 lines (medium size?)
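
    On the repository-layout question specifically: Mercurial behaves like git here, with a single hidden .hg directory at the root of the working copy, so a plain copy of the project directory is a complete, self-contained repository. A quick illustration:

        # hg keeps everything in one hidden directory, like git's .git
        hg init myproject                     # creates myproject/.hg
        cp -r myproject /media/usb/backup     # the copy is a fully working repo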

    Read the article

  • Help choosing authentication method

    - by Dima
    I need to choose an authentication method for an application installed and integrated in the customer's environment. There are two types of environments: Windows and Linux/Unix. The application is user-based, no web stuff, pure Java. The requirement is to authenticate the users of my application against a customer-provided user base. Meaning: the customer installs my app, but uses his own users to grant or deny access to it. Typical, right? I have three options to consider, and I need to pick the one which would be (a) the most flexible, covering the most common modern environments, and (b) the least effort while staying robust and standard.

    Option (1): authenticate locally, managing user credentials in some local storage, e.g. a file. The customer would add his users to my application, and it would then check the passwords. Simple, clumsy, but it would work. Customers would have to punch in every user they want to grant access to my app using some UI we would have to provide. Lots of work for me, a headache for the customer.

    Option (2): use LDAP authentication. Customers would tell my app where to look for users, and I would walk their directory, resolving names into user names and trying to bind with the found password. This is a better approach IMO, but more fragile, because I would have to walk an unknown directory structure, and who knows whether that will be permitted everywhere. It would also be harder to test, since there are many LDAP implementations out there, and the last thing I want is drowning in that voodoo.

    Option (3): use plain Kerberos authentication. Customers would tell my app what realm (domain) and which KDC (key distribution center) to use. In an ideal world these two parameters would be all I need to set, while customers could use their own administration tools to configure the domain and KDC. My application would simply delegate the user's credentials to this third party (using JAAS or Spring Security) and consider it a success when the third party is happy with them.

    I personally prefer #3, but I'm not sure what surprises I might face. Would this cover Windows and *nix systems entirely? Is there another option to consider?
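
    For scale, option (3) really can come down to two settings plus a JAAS login configuration (a sketch; it assumes the Krb5LoginModule shipped with Sun/Oracle JREs, and all names are placeholders):

        import javax.security.auth.login.LoginContext;

        // point the JRE at the customer's realm and KDC -- the two inputs option 3 needs
        System.setProperty("java.security.krb5.realm", "EXAMPLE.COM");
        System.setProperty("java.security.krb5.kdc", "kdc.example.com");

        // "MyApp" must be an entry in the JAAS config file naming
        // com.sun.security.auth.module.Krb5LoginModule
        LoginContext lc = new LoginContext("MyApp", callbackHandler);
        lc.login(); // throws LoginException if the KDC rejects the credentials

    Against Active Directory this covers the Windows side; on the Linux/Unix side it works wherever a KDC (MIT, Heimdal, or AD itself) is in place.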

    Read the article

  • Ideas for multiplatform encrypted java mobile storage system

    - by Fernando Miguélez
    Objective

    I am currently designing the API for a multiplatform storage system that would offer the same interface and capabilities across the following supported mobile Java platforms:

        - J2ME. Minimum configuration/profile CLDC 1.1/MIDP 2.0, with support for some necessary JSRs (JSR-75 for file storage).
        - Android. No minimum platform version decided yet, but it would rather likely be API level 7.
        - Blackberry. It would use the same base source as J2ME, but take advantage of some advanced capabilities of the platform. No minimum configuration decided yet (maybe 4.6, because of the 64 KB limitation for RMS on 4.5).

    Basically the API would sport three kinds of stores:

        - Files. These would allow standard directory/file manipulation (read/write through streams, create, mkdir, etc.).
        - Preferences. A special store that handles properties accessed through keys (similar to a plain old Java properties file, but supporting some improvements, such as the different value data types of SharedPreferences on the Android platform).
        - Local message queues. This store would offer basic message-queue functionality.

    Considerations

    Inspired by JSR-75, all types of stores would be accessed in a uniform way by means of a URL following RFC 1738 conventions, but with custom-defined prefixes (i.e. "file://" for files, "prefs://" for preferences, or "queue://" for message queues). The address would refer to a virtual location that would be mapped to a physical storage object by each mobile platform's implementation. Only files would allow hierarchical storage (folders) and access to external storage memory cards (by means of a unit name, the same way as in JSR-75, but one that would not change regardless of the underlying platform). The other types would only support flat storage. The system should also support a secure version of all basic types. The user would indicate it by prefixing "s" to the URL (i.e. "sfile://" instead of "file://"). The API would only require one PIN (introduced only once) to access any kind of secure object type.

    Implementation issues

    For the implementation of both plaintext and encrypted stores, I would use the functionality available on the underlying platforms:

        - Files. Available on all platforms (on J2ME only with JSR-75, but that is mandatory for our needs). The abstract-file to actual-file mapping is straightforward, except for addressing issues.
        - RMS. This type of store, available on the J2ME (and Blackberry) platforms, is convenient for preferences and maybe message queues (though, depending on performance or size requirements, these could be implemented by means of normal files).
        - SharedPreferences. This type of storage, only available on Android, would match the needs of preferences.
        - SQLite databases. These could be used for message queues on Android (and maybe Blackberry).

    When it comes to encryption, some requirements should be met:

        - To ease the implementation, it will be carried out on a read/write-operation basis: on streams (for files), RMS records, SharedPreferences key-value pairs, and SQLite database columns.
        - Every underlying storage object should use the same encryption key.
        - Handling of encrypted stores should be the same as for the unencrypted counterparts. The only difference (from the user's point of view) in accessing an encrypted store would be the addressing.
        - The user PIN provides access to any secure storage object, but changing it would not require decrypting/re-encrypting all the encrypted data.
        - The cryptographic capabilities of the underlying platform should be used whenever possible, so we would use: on J2ME, SATSA-CRYPTO if it is available (it is not mandatory) or the lightweight BouncyCastle cryptographic framework for J2ME; on Blackberry, the RIM Cryptographic API or BouncyCastle; on Android, JCE with an integrated cryptographic provider (BouncyCastle?).

    Doubts

    Having reached this point, I was struck by some doubts about which solution would be more convenient, taking into account the limitations of the platforms. These are some of them:

        - Encryption algorithm for data. Would AES-128 be strong and fast enough? What alternatives for such a scenario would you suggest?
        - Encryption mode. I have read about the weakness of ECB encryption versus CBC, but in this case the former would have the advantage of random access to blocks, which is interesting for seek functionality in files. What type of encryption mode would you choose instead? Is stream encryption suitable for this case?
        - Key generation. There could be one key generated for each storage object (file, RMS RecordStore, etc.), or just one for all the objects of the same type. The first seems "safer", though it would require some extra space on the device. In your opinion, what would be the trade-offs of each?
        - Key storage. A standard JKS (or PKCS#12) KeyStore file could be suited to storing the encryption keys, but I could also define a smaller structure (encryption transformation / key data / checksum) that could be attached to each store (i.e. using additional files with the same name and a special extension for plain files, or embedded inside other types of objects such as RMS record stores). Which approach would you prefer? And if it comes to using a standard KeyStore with multiple-key generation (given that is your preference), would it be better to use one key store per storage object, or just a global KeyStore keeping all the keys (i.e. using the URL identifier of the abstract storage object as the alias)?
        - Master key. The use of a master key seems obvious. This key should be protected by the user PIN (introduced only once) and would allow access to the rest of the encryption keys (they would be encrypted by means of this master key). Changing the PIN would only require re-encrypting this key, not all the encrypted data. Where would you keep it, taking into account that if it got lost, all the data would no longer be accessible? What further considerations should I take into account?
        - Platform cryptography support. Do SATSA-CRYPTO-enabled J2ME phones really take advantage of some dedicated hardware acceleration (or another advantage I have not foreseen), and would this approach be preferred (whenever possible) over just the BouncyCastle implementation? For the same reason, is the RIM Cryptographic API worth the license cost over BouncyCastle?

    Any comments, critiques, further considerations or different approaches are welcome.
    Read the article
