Search Results

Search found 38336 results on 1534 pages for 'sql wait types'.

  • Anything new for WinForms in .NET 4.0

    - by Robert
    I could not find any information about new WinForms features, except for this blog post: http://blog.codinglight.com/2009/05/future-of-winforms-whats-changed-in.html which states: 213 types were changed, and 9 types were added. 596 methods were changed, 50 were added, and 8 were removed. So what's in these changes for Joe Developer?

  • Schema qualified tables with SQLAlchemy, SQLite and Postgresql?

    - by Chris Reid
    I have a Pylons project and a SQLAlchemy model that implements schema qualified tables:

        class Hockey(Base):
            __tablename__ = "hockey"
            __table_args__ = {'schema': 'winter'}
            hockey_id = sa.Column(sa.types.Integer,
                                  sa.Sequence('score_id_seq', optional=True),
                                  primary_key=True)
            baseball_id = sa.Column(sa.types.Integer,
                                    sa.ForeignKey('summer.baseball.baseball_id'))

    This code works great with Postgresql but fails when using SQLite on table and foreign key names (due to SQLite's lack of schema support):

        sqlalchemy.exc.OperationalError: (OperationalError) unknown database "winter"
        'PRAGMA "winter".table_info("hockey")' ()

    I'd like to continue using SQLite for dev and testing. Is there a way to have this fail gracefully on SQLite?
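
    One workaround worth trying (a hedged sketch, not from the original post): SQLite treats ATTACHed databases as named schemas, so a connect-time listener can create empty "winter" and "summer" schemas for dev/test runs. The event API and the in-memory URLs below are assumptions for illustration; older SQLAlchemy versions would use a PoolListener instead:

        from sqlalchemy import create_engine, event

        engine = create_engine("sqlite://")  # in-memory dev/test engine

        @event.listens_for(engine, "connect")
        def attach_schemas(dbapi_conn, conn_record):
            # ATTACH creates a named database, which SQLite then accepts
            # as a schema qualifier, e.g. "winter"."hockey".
            for schema in ("winter", "summer"):
                dbapi_conn.execute("ATTACH DATABASE ':memory:' AS %s" % schema)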

  • Hibernate: Dirty Checking and Only Update of Dirty Attributes?

    - by jens
    Hello experts, in the "good old JDBC days" I wrote a lot of SQL queries that did very targeted updates of only the "attributes/members" that were actually changed. For example, having an object with the following members:

        public String name;
        public String address;
        public Date date;

    If only date was changed in some business method, I would issue an SQL UPDATE for the date member only. It seems however (that's my "impression" of Hibernate) that when working with a standard Hibernate mapping (mapping the full class), even updates of only one single member lead to a full update of the object in the SQL statements generated by Hibernate. My questions are: 1.) Is this observation correct, that Hibernate DOES NOT intelligently check (in a fully mapped class) which member(s) were changed and then only issue updates for the specific changed members, but rather will always update (in the generated SQL UPDATE statement) all mapped members of a class, even if they were not changed (in case the object is dirty due to one member being dirty)? 2.) What can I do to make Hibernate update only those members that have been changed? I am searching for a solution to have Hibernate update only the member that actually changed. (I know Hibernate does some big work on dirty-checking, but as far as I know this dirty-checking is only relevant to identifying whether the object as a whole is dirty, not which single member is dirty.) Thank you very much! Jens
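
    Hibernate does have a switch for exactly this: the dynamic-update setting makes it generate UPDATE statements at runtime containing only the changed columns (at the cost of not reusing one fixed, cacheable statement). A minimal sketch, with entity and field names assumed for illustration; in XML mappings the equivalent is <class ... dynamic-update="true">, and Hibernate 4.1+ replaces the annotation below with @DynamicUpdate:

        @javax.persistence.Entity
        @org.hibernate.annotations.Entity(dynamicUpdate = true)  // only changed columns appear in UPDATEs
        public class Person {
            @javax.persistence.Id
            private Long id;
            private String name;
            private String address;
            private java.util.Date date;
            // getters/setters omitted for brevity
        }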

  • change custom mapping - sharp architecture/ fluent nhibernate

    - by csetzkorn
    I am using the Sharp Architecture, which also deploys FNH (Fluent NHibernate). The DB schema SQL code is generated during testing like this:

        [TestFixture]
        [Category("DB Tests")]
        public class MappingIntegrationTests
        {
            [SetUp]
            public virtual void SetUp()
            {
                string[] mappingAssemblies = RepositoryTestsHelper.GetMappingAssemblies();
                configuration = NHibernateSession.Init(
                    new SimpleSessionStorage(),
                    mappingAssemblies,
                    new AutoPersistenceModelGenerator().Generate(),
                    "../../../../app/XXX.Web/NHibernate.config");
            }

            [TearDown]
            public virtual void TearDown()
            {
                NHibernateSession.CloseAllSessions();
                NHibernateSession.Reset();
            }

            [Test]
            public void CanConfirmDatabaseMatchesMappings()
            {
                var allClassMetadata = NHibernateSession.GetDefaultSessionFactory().GetAllClassMetadata();
                foreach (var entry in allClassMetadata)
                {
                    NHibernateSession.Current.CreateCriteria(entry.Value.GetMappedClass(EntityMode.Poco))
                        .SetMaxResults(0).List();
                }
            }

            /// <summary>
            /// Generates and outputs the database schema SQL to the console
            /// </summary>
            [Test]
            public void CanGenerateDatabaseSchema()
            {
                System.IO.TextWriter writeFile = new StreamWriter(@"d:/XXXSqlCreate.sql");
                var session = NHibernateSession.GetDefaultSessionFactory().OpenSession();
                new SchemaExport(configuration).Execute(true, false, false, session.Connection, writeFile);
            }

            private Configuration configuration;
        }

    I am trying to use:

        using FluentNHibernate.Automapping;
        using xxx.Core;
        using SharpArch.Data.NHibernate.FluentNHibernate;
        using FluentNHibernate.Automapping.Alterations;

        namespace xxx.Data.NHibernateMaps
        {
            public class x : IAutoMappingOverride<x>
            {
                public void Override(AutoMapping<x> mapping)
                {
                    mapping.Map(x => x.text, "text").CustomSqlType("varchar(max)");
                    mapping.Map(x => x.url, "url").CustomSqlType("varchar(max)");
                }
            }
        }

    to change the standard mapping of strings from NVARCHAR(255) to varchar(max). This is not picked up during the SQL schema generation. I also tried:

        mapping.Map(x => x.text, "text").Length(100000);

    Any ideas? Thanks. Christian
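
    One thing worth checking (an assumption on my part, not confirmed in the post): Fluent NHibernate only applies IAutoMappingOverride<T> classes if the AutoPersistenceModel is told which assembly to scan for them. A hedged sketch of what AutoPersistenceModelGenerator.Generate() might need (SomeEntity is a placeholder):

        public AutoPersistenceModel Generate()
        {
            return AutoMap.AssemblyOf<SomeEntity>()
                // Without this call, IAutoMappingOverride<T> implementations
                // such as the class above are silently ignored:
                .UseOverridesFromAssemblyOf<x>();
        }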

  • ibatis problem using <isNull> whilst iterating over a List

    - by onoma
    Hi, I'm new to iBATIS and I'm struggling with the <iterate>, <isNull> and <isNotNull> elements. I want to iterate over a List of Book instances (say) that are passed in inside a HashMap, MyParameters. The list will be called listOfBooks. The <iterate> clause of the overall SQL statement will therefore look like this:

        <iterate prepend="AND" property="MyParameters.listOfBooks"
                 conjunction="AND" open="(" close=")">
            ...
        </iterate>

    I also need to produce different SQL within the iterate element depending on whether a property of each Book instance in the listOfBooks List is null or not. So, I need a statement something like this:

        <iterate prepend="AND" property="MyParameters.listOfBooks"
                 conjunction="AND" open="(" close=")">
            <isNull property="MyParameter.listOfBooks.title">
                <!-- SQL clause #1 here -->
            </isNull>
            <isNotNull property="MyParameter.listOfBooks.title">
                <!-- SQL clause #2 here -->
            </isNotNull>
        </iterate>

    When I do this I get an error message stating that there is no "READABLE" property named 'title' in my Book class. However, each Book instance does contain a title property, so I'm confused! I can only assume that I have mangled the syntax in trying to pinpoint the title of a particular Book instance in listOfBooks. I'm struggling to find the correct technique for achieving this. If anyone can advise a way forward I'd be grateful. Thanks
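
    A hedged guess at the fix: inside <iterate>, iBATIS addresses the current list element with the [] token, so the inner tags should reference the list property plus [] rather than the bare list (note also the MyParameter/MyParameters spelling mismatch in the snippet above). A sketch:

        <iterate prepend="AND" property="MyParameters.listOfBooks"
                 conjunction="AND" open="(" close=")">
            <isNull property="MyParameters.listOfBooks[].title">
                <!-- SQL clause #1 here -->
            </isNull>
            <isNotNull property="MyParameters.listOfBooks[].title">
                <!-- SQL clause #2 here -->
            </isNotNull>
        </iterate>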

  • Stored procedure strange error when called through php

    - by ravi
    I have been coding a registration page (login system) in PHP and MySQL for a website. I'm using two stored procedures: the first checks whether the email address already exists in the database; the second inserts the user-supplied data into the MySQL database. The user has EXECUTE permission on both procedures. When I execute them individually from a PHP script they work fine, but when I use them together in the script, the second stored procedure (the insert) does not work.

    Stored procedure 1:

        DELIMITER $$
        CREATE PROCEDURE reg_check_email(email VARCHAR(80))
        BEGIN
            SET @email = email;
            SET @sql = 'SELECT email FROM user_account WHERE user_account.email=?';
            PREPARE stmt FROM @sql;
            EXECUTE stmt USING @email;
        END$$
        DELIMITER ;

    Stored procedure 2:

        DELIMITER $$
        CREATE PROCEDURE reg_insert_into_db(fname VARCHAR(40), lname VARCHAR(40),
            email VARCHAR(80), pass VARBINARY(32), licenseno VARCHAR(80),
            mobileno VARCHAR(10))
        BEGIN
            SET @fname = fname, @lname = lname, @email = email, @pass = pass,
                @licenseno = licenseno, @mobileno = mobileno;
            SET @sql = 'INSERT INTO user_account(email,pass,last_name,license_no,phone_no) VALUES(?,?,?,?,?)';
            PREPARE stmt FROM @sql;
            EXECUTE stmt USING @email,@pass,@lname,@licenseno,@mobileno;
        END$$
        DELIMITER ;

    When I test these from a PHP sample script, the insert is not working, but the first stored procedure (reg_check_email) works. If I comment out the first one (reg_check_email), the second stored procedure (reg_insert_into_db) works fine.

        <?php
        require("/wamp/mysql.inc.php");
        $r = mysqli_query($dbc, "CALL reg_check_email('[email protected]')");
        $rows = mysqli_num_rows($r);
        if ($rows == 0) {
            $r = mysqli_query($dbc, "CALL reg_insert_into_db('a','b','[email protected]','c','d','e')");
        }
        ?>

    I'm unable to figure out the mistake. Thanks in advance, ravi.
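
    A likely culprit (a hedged diagnosis, not confirmed in the post): CALL returns an extra status result set, and mysqli refuses to run a second query until the first CALL's results are fully drained (mysqli_error() would typically report "Commands out of sync"). A sketch of draining the results between the two calls:

        <?php
        $r = mysqli_query($dbc, "CALL reg_check_email('[email protected]')");
        $rows = mysqli_num_rows($r);
        mysqli_free_result($r);
        // CALL produces an additional result set; consume it before
        // the connection will accept another query.
        while (mysqli_next_result($dbc)) {
            if ($extra = mysqli_store_result($dbc)) {
                mysqli_free_result($extra);
            }
        }
        if ($rows == 0) {
            mysqli_query($dbc, "CALL reg_insert_into_db('a','b','[email protected]','c','d','e')");
        }
        ?>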

  • Tomcat stops responding to JK requests

    - by Bruno Reis
    Hello. I have a nasty issue with load-balanced Tomcat servers that are hanging up. Any help would be greatly appreciated.

    The system: I'm running Tomcat 6.0.26 on HotSpot Server 14.3-b01 (Java 1.6.0_17-b04) on three servers sitting behind another server that acts as load balancer. The load balancer runs Apache (2.2.8-1) + mod_jk (1.2.25). All of the servers are running Ubuntu 8.04. The Tomcats have 2 connectors configured: an AJP one and an HTTP one. The AJP one is used with the load balancer, while the HTTP one is used by the dev team to connect directly to a chosen server (if we have a reason to do so). I have Lambda Probe 1.7b installed on the Tomcat servers to help me diagnose and fix the problem soon to be described.

    The problem: After the application servers have been up for about 1 day, JK Status Manager starts reporting status ERR for, say, Tomcat2. It simply gets stuck in this state, and the only fix I've found so far is to ssh into the box and restart Tomcat. I must also mention that JK Status Manager takes a lot longer to refresh when there's a Tomcat server in this state. Finally, the "Busy" count of the stuck Tomcat in JK Status Manager is always high and won't go down on its own -- I must restart the Tomcat server, wait, then reset the worker on JK.

    Analysis: Since I have 2 connectors on each Tomcat (AJP and HTTP), I can still connect to the application through the HTTP one. The application works just fine like this, very, very fast. That is perfectly normal, since I'm the only one using this server (as JK stopped delegating requests to this Tomcat). To try to better understand the problem, I've taken a thread dump from a Tomcat which is not responding anymore, and from another one that was restarted recently (say, 1 hour before). The instance that is responding normally to JK shows most of the TP-ProcessorXXX threads in "Runnable" state, with the following stack trace:

        java.net.SocketInputStream.socketRead0 ( native code )
        java.net.SocketInputStream.read ( SocketInputStream.java:129 )
        java.io.BufferedInputStream.fill ( BufferedInputStream.java:218 )
        java.io.BufferedInputStream.read1 ( BufferedInputStream.java:258 )
        java.io.BufferedInputStream.read ( BufferedInputStream.java:317 )
        org.apache.jk.common.ChannelSocket.read ( ChannelSocket.java:621 )
        org.apache.jk.common.ChannelSocket.receive ( ChannelSocket.java:559 )
        org.apache.jk.common.ChannelSocket.processConnection ( ChannelSocket.java:686 )
        org.apache.jk.common.ChannelSocket$SocketConnection.runIt ( ChannelSocket.java:891 )
        org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run ( ThreadPool.java:690 )
        java.lang.Thread.run ( Thread.java:619 )

    The instance that is stuck shows most (all?) of the TP-ProcessorXXX threads in "Waiting" state. These have the following stack trace:

        java.lang.Object.wait ( native code )
        java.lang.Object.wait ( Object.java:485 )
        org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run ( ThreadPool.java:662 )
        java.lang.Thread.run ( Thread.java:619 )

    I don't know the internals of Tomcat, but I would infer that the "Waiting" threads are simply threads sitting in a thread pool. So, if they are threads waiting in a thread pool, why wouldn't Tomcat put them to work on processing requests from JK?

    Solution? As I've stated before, the only fix I've found is to stop the Tomcat instance, stop the JK worker, wait for the latter's busy count to slowly go down, start Tomcat again, and enable the JK worker once again. What is causing this problem? How should I further investigate it? What can I do to solve it? Thanks in advance.
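
    One avenue commonly suggested for this symptom (an assumption, not from the thread): idle AJP connections being dropped on one side only, which can leave TP-Processor threads tied to dead connections until the pool is exhausted. The usual advice is to pair connection_pool_timeout in workers.properties (in seconds) with connectionTimeout on the AJP connector (in milliseconds), e.g.:

        # workers.properties on the load balancer (value in seconds)
        worker.tomcat2.connection_pool_timeout=600

        <!-- server.xml on each Tomcat (value in milliseconds) -->
        <Connector port="8009" protocol="AJP/1.3"
                   connectionTimeout="600000" redirectPort="8443" />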

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure, and to serve as an effective backup solution.

    Wait. What if a bolt of lightning finds a way to travel into my house, through my surge protector, into my power supply, and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine, because I generally keep the most important files (music, pictures, videos) stored in multiple places, like on my laptop, my wife's laptop, and an encrypted USB hard drive.

    Wait. What if a giant hedgehog meteor attacks my house from space traveling at Mach 3 and all machines and hard disks are blown to smithereens? Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier.

    Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four?

    There's no questioning the need for backups. None. However, there are three questions I'd really like addressed:

    1. To what level should one back up? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines to thin out the possibility of losing all of my files. Online backups make sense, but they beg the following question.

    2. What should I be backing up remotely and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of the configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 here and see 100MB/s easily, but I'm afraid I'm only going to see 2MB/s at best when transferring up to the cloud.

  • CakePHP function in model not executing

    - by Rixius
    I have a function in my Comic model as such:

        <?php
        class Comic extends AppModel {
            var $name = "Comic";

            // Methods for retrieving information.
            function testFunc() {
                $mr = $this->find('all');
                return $mr;
            }
        }
        ?>

    And I am calling it in my controller as such:

        <?php
        class ComicController extends AppController {
            var $name = "Comic";
            var $uses = array('Comic');

            function index() {
            }

            function view($q) {
                $this->set('array', $this->Comic->testFunc());
            }
        }
        ?>

    When I try to load the page, I get the following error:

        Warning (512): SQL Error: 1064: You have an error in your SQL syntax; check the
        manual that corresponds to your MySQL server version for the right syntax to use
        near 'testFunc' at line 1 [CORE/cake/libs/model/datasources/dbo_source.php, line 525]
        Query: testFunc

    And the SQL dump looks like this:

        (default) 2 queries took 1 ms
        Nr  Query            Error                                        Affected  Num. rows  Took (ms)
        1   DESCRIBE comics                                               10        10         1
        2   testFunc         1064: You have an error in your SQL syntax;  0
                             check the manual that corresponds to your
                             MySQL server version for the right syntax
                             to use near 'testFunc' at line 1

    So it looks like, instead of running the testFunc() function, it is trying to run a query of "testFunc" and failing...
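
    This symptom usually means CakePHP never found the real Comic model class, silently substituted a generic AppModel, and forwarded the unknown testFunc() call to the database as raw SQL. A hedged guess at the fix (CakePHP 1.x conventions; file paths assumed): controllers are pluralized, and the model file name must match the model:

        <?php
        // app/models/comic.php -- file name must match the model name
        class Comic extends AppModel {
            var $name = 'Comic';
        }

        // app/controllers/comics_controller.php
        // Controllers are pluralized: ComicsController, not ComicController.
        class ComicsController extends AppController {
            var $name = 'Comics';
            var $uses = array('Comic');

            function view($q) {
                $this->set('array', $this->Comic->testFunc());
            }
        }
        ?>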

  • Place Query Results into Array then Implode?

    - by jason
    Basically I pull an ID from table1, use that ID to find a site ID in table2, then need to put the site IDs into an array, implode it, and query table3 for site names. I cannot implode the array correctly: first I got an error, then I used a while loop. With the while loop the output simply says "Array".

        $mysqli = mysqli_connect("server", "login", "pass", "db");

        $sql = "SELECT MarketID FROM marketdates WHERE Date = '2010-04-04 00:00:00' AND VenueID = '2'";
        $result = mysqli_query($mysqli, $sql) or die(mysqli_error($mysqli));
        $dates_id = mysqli_fetch_assoc($result);
        $comma_separated = implode(",", $dates_id);
        echo $comma_separated; // This returns 79, which is correct.

        $sql = "SELECT SIteID FROM bookings WHERE BSH_ID = '1' AND MarketID = '$comma_separated'";
        $result = mysqli_query($mysqli, $sql) or die(mysqli_error($mysqli));

        // This is where my problems start
        $SIteID = array();
        while ($newArray = mysqli_fetch_array($result, MYSQLI_ASSOC)) {
            $SIteID[] = $newArray['SIteID'];
        }
        $locationList = implode(",", $SIteID);
        ?>

    Basically what I need to do is correctly move the query results into an array that I can implode and use in a third query to pull names from table3.
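
    For the third query, the imploded list would normally go into an IN (...) clause rather than an equality test. A hedged sketch (the table and column names for table3 are assumptions):

        // $locationList is e.g. "12,15,27", built above
        if ($locationList !== '') {
            $sql = "SELECT SiteName FROM sites WHERE SIteID IN ($locationList)";
            $result = mysqli_query($mysqli, $sql) or die(mysqli_error($mysqli));
            while ($row = mysqli_fetch_assoc($result)) {
                echo $row['SiteName'], "\n";
            }
        }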

  • Group Policy processing and autologon on Windows 7

    - by Jason Berg
    I'm trying to accomplish a few things via Group Policy on Windows 7: software installation, mapping drives, mapping a printer, etc. I've got these computers set to autologon. The problem I'm running into is that the computers log on before DHCP has done its thing. Therefore, they don't apply any group policies properly. How do I fix this? I've already set the "Always wait for the network at computer startup and logon" policy. I've read up a bit, and this doesn't actually mean that it waits for DHCP, so it's a little pointless. Anything that would delay logon would work, or some way to make the computer wait for DHCP.
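
    One knob that may help (suggested from general GP experience, not from the question): the "Specify startup policy processing wait time" policy under Computer Configuration > Administrative Templates > System > Group Policy, which forces the machine to wait a fixed number of seconds for the network before processing policy. A hedged sketch of the underlying registry value (0x3c = 60 seconds here):

        Windows Registry Editor Version 5.00

        ; Set by the "Specify startup policy processing wait time" policy
        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
        "GpNetworkStartTimeoutPolicyValue"=dword:0000003c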

  • Maintaining content type pk integrity in a Django deployment

    - by hekevintran
    When you run syncdb in Django, the primary keys of the content types will be recomputed. If I create new models, the next time I run syncdb, the primary keys of the content types will be different. If I have an application running in production, how can I update the database with the new models and keep the integrity of content type pks?
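
    One standard approach (hedged; available since Django 1.2): serialize with natural keys, so fixtures identify a content type by its (app_label, model) pair instead of its pk, e.g. manage.py dumpdata --natural. A small sketch of the lookup machinery involved:

        from django.contrib.contenttypes.models import ContentType

        # Natural key lookup: identifies the content type by
        # (app_label, model) instead of a database-specific pk.
        ct = ContentType.objects.get_by_natural_key("auth", "user")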

  • PHP MySQL database problem

    - by Jordan Pagaduan
    Code 1:

        <?php
        class dbConnect {
            var $dbHost = 'localhost',
                $dbUser = 'root',
                $dbPass = '',
                $dbName = 'input_oop',
                $dbTable = 'users';

            function __construct() {
                $dbc = mysql_connect($this->dbHost, $this->dbUser, $this->dbPass)
                    or die("Cannot connect to MySQL : " . mysql_error());
                mysql_select_db($this->dbName)
                    or die("Database not Found : " . mysql_error());
            }
        }

        class User extends dbConnect {
            var $name;

            function userInput($q) {
                $sql = "INSERT INTO $this->dbTable set name = '" . $q . "'";
                mysql_query($sql) or die(mysql_error());
            }
        }
        ?>

    Code 2:

        <?php
        $q = $_GET['q'];
        $dbc = mysql_connect("localhost", "root", "") or die(mysql_error());
        mysql_select_db('input_oop') or die(mysql_error());
        $sql = "INSERT INTO users set name = '" . $q . "'";
        mysql_query($sql) or die(mysql_error());
        ?>

    My Code 1 saves to my database multiple times ("saving multiple"), while my Code 2 saves normally. What is wrong with my Code 1?

  • Terminal Services - MS Access Frequently "Not Responding"

    - by jonfhancock
    Exposition: We use a program built in MS Access that I serve via Terminal Services. I just installed a new TS server with a quad-core 2.6GHz Xeon, 8GB RAM, and 4 SATA drives in a RAID 0. I installed Server 2008 R2 (64-bit, obviously). Its only role is TS. The problem: With just a few sessions (under 10), I start getting frequent "Not Responding" messages in each session. When it happens, the users aren't doing anything particularly taxing, just form navigation and simple insert queries. I can live with some stalls, but it is visually jarring in WS08 because the screen goes gray and it presents a dialog offering to wait or close, with some other options. Questions: Any suggestions for improving performance and reducing hangs? Is it possible to disable the dialog (always wait) and the screen graying?

  • SOAP web service evolution

    - by Thilo
    Are there any guidelines/tutorials as to how to handle the evolution of a SOAP web service? I can see that changing existing methods or types would probably not work, but can I just add new methods, complex types, enumeration values without breaking existing clients?

  • Dropping all user tables/sequences in Oracle

    - by Ambience
    As part of our build process and evolving database, I'm trying to create a script which will remove all of the tables and sequences for a user. I don't want to recreate the user, as this would require more permissions than allowed. My script creates a procedure to drop the tables/sequences, executes the procedure, and then drops the procedure. I'm executing the file from sqlplus.

    drop.sql:

        create or replace procedure drop_all_cdi_tables is
          cur integer;
        begin
          cur := dbms_sql.OPEN_CURSOR();
          for t in (select table_name from user_tables)
          loop
            execute immediate 'drop table ' || t.table_name || ' cascade constraints';
          end loop;
          dbms_sql.close_cursor(cur);

          cur := dbms_sql.OPEN_CURSOR();
          for t in (select sequence_name from user_sequences)
          loop
            execute immediate 'drop sequence ' || t.sequence_name;
          end loop;
          dbms_sql.close_cursor(cur);
        end;
        /
        execute drop_all_cdi_tables;
        /
        drop procedure drop_all_cdi_tables;
        /

    Unfortunately, dropping the procedure causes a problem. There seems to be a race condition, and the procedure is dropped before it executes. E.g.:

        SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 30 18:45:42 2010

        Copyright (c) 1982, 2008, Oracle.  All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        Procedure created.

        PL/SQL procedure successfully completed.

        Procedure created.

        Procedure dropped.

        drop procedure drop_all_user_tables
        *
        ERROR at line 1:
        ORA-04043: object DROP_ALL_USER_TABLES does not exist

        Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Any ideas on how to get this working?
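
    A hedged reading of that output: this is probably not a race at all. In SQL*Plus, a bare / re-executes the current statement buffer, so the / after execute re-runs the CREATE OR REPLACE (hence "Procedure created." appearing twice), and the final / re-runs the DROP, which then fails. (The session output also drops a differently named procedure, drop_all_user_tables, suggesting the pasted script and session don't quite match.) Removing the extra slashes should run each step exactly once; a sketch:

        create or replace procedure drop_all_cdi_tables is
        begin
          for t in (select table_name from user_tables) loop
            execute immediate 'drop table ' || t.table_name || ' cascade constraints';
          end loop;
          for t in (select sequence_name from user_sequences) loop
            execute immediate 'drop sequence ' || t.sequence_name;
          end loop;
        end;
        /
        execute drop_all_cdi_tables
        drop procedure drop_all_cdi_tables;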

  • How does one convert from a Java resultset to ColdFusion query in Railo?

    - by Shawn Grigson
    The following works fine in CFMX 7 and CF8, and I'd assume CF9 as well:

        <!--- 'conn' is a JDBC connection --->
        <cfset stat = conn.createStatement() />
        <cfset rs = stat.executeQuery(trim(arguments.sql)) />

        <!--- convert this Java resultset to a CF query recordset --->
        <cfset queryTable = CreateObject("java", "coldfusion.sql.QueryTable")>
        <cfset queryTable.init(rs) >
        <cfset query = queryTable.FirstTable() />

    This creates a statement using a JDBC driver, executes a query against it (putting the results into a Java resultset), then instantiates coldfusion.sql.QueryTable, passes it the Java resultset object, and calls queryTable.FirstTable(), which returns an actual ColdFusion recordset (for cfloop and the like). The problem comes with a difference in Railo's implementation. Running this code in Railo returns the following error:

        No matching Constructor for coldfusion.sql.QueryTable(org.sqlite.RS) found.

    I've dumped the Railo Java object and don't see init() among the methods. Am I missing something simple? I'd love to get this working in Railo as well. Please note: I am doing a DSN-less connection to a SQLite db. I understand how to set up a CF datasource. My only hiccup at this point is doing the translation from a Java resultset to a Railo query.
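
    For the Railo side, the equivalent internal class is reportedly railo.runtime.type.QueryImpl rather than coldfusion.sql.QueryTable. The constructor signature below is unverified, so treat it strictly as an assumption to test:

        <!--- Railo variant (hypothetical): wrap the JDBC resultset in a query object --->
        <cfset query = CreateObject("java", "railo.runtime.type.QueryImpl").init(rs, "qry") />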

  • How to compile ocaml to native code

    - by Indra Ginanjar
    I'm really interested in learning OCaml; it's fast (they say it can be compiled to native code) and it's functional. So I tried to code something easy, like enabling the MySQL event scheduler:

        #load "unix.cma";;
        #directory "+mysql";;
        #load "mysql.cma";;

        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;

        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in
        (Mysql.exec db sql);;

    It works fine in the OCaml interpreter, but when I try to compile it to native code (I'm using Ubuntu Karmic), neither of these commands works:

        ocamlopt -o mysqleventon mysqleventon.ml unix.cmxa mysql.cmxa
        ocamlopt -o mysqleventon mysqleventon.ml unix.cma mysql.cma

    I also tried:

        ocamlc -c mysqleventon.ml unix.cma mysql.cma

    All of them result in the same message:

        File "mysqleventon.ml", line 1, characters 0-1:
        Error: Syntax error

    Then I tried removing the "#load" lines, so the code went like this:

        let db = Mysql.quick_connect ~user:"username" ~password:"userpassword" ~database:"databasename" ();;

        let sql = Printf.sprintf "SET GLOBAL EVENT_SCHEDULER=1;" in
        (Mysql.exec db sql);;

    ocamlopt then gives this message:

        File "mysqleventon.ml", line 1, characters 9-28:
        Error: Unbound value Mysql.quick_connect

    I hope someone can tell me where I'm going wrong.
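
    A hedged explanation of both errors: the # lines are toplevel directives, which the compilers don't accept (hence the syntax error at the leading #); and once they are removed, ocamlopt still needs the mysql include path and needs libraries listed before the source file (hence "Unbound value Mysql.quick_connect"). Something along these lines (paths and package name assumed):

        # '#load'/'#directory' are interpreter-only; delete them, then:
        ocamlopt -I +mysql unix.cmxa mysql.cmxa mysqleventon.ml -o mysqleventon

        # or, if findlib is installed:
        ocamlfind ocamlopt -package mysql -linkpkg mysqleventon.ml -o mysqleventon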

  • Ideas for multiplatform encrypted java mobile storage system

    - by Fernando Miguélez
    Objective

    I am currently designing the API for a multiplatform storage system that would offer the same interface and capabilities across the following supported mobile Java platforms:

    - J2ME. Minimum configuration/profile CLDC 1.1/MIDP 2.0 with support for some necessary JSRs (JSR-75 for file storage).
    - Android. No minimum platform version decided yet, but it would rather likely be API level 7.
    - Blackberry. It would use the same base source as J2ME but take advantage of some advanced capabilities of the platform. No minimum configuration decided yet (maybe 4.6 because of the 64 KB limitation for RMS on 4.5).

    Basically the API would sport three kinds of stores:

    - Files. These would allow standard directory/file manipulation (read/write through streams, create, mkdir, etc.).
    - Preferences. A special store that handles properties accessed through keys (similar to plain old Java properties files, but supporting some improvements such as different value data types, like SharedPreferences on the Android platform).
    - Local message queues. This store would offer basic message queue functionality.

    Considerations

    Inspired by JSR-75, all types of stores would be accessed in a uniform way by means of a URL following RFC 1738 conventions, but with custom defined prefixes (i.e. "file://" for files, "prefs://" for preferences or "queue://" for message queues). The address would refer to a virtual location that would be mapped to a physical storage object by each mobile platform implementation. Only files would allow hierarchical storage (folders) and access to external storage memory cards (by means of a unit name, the same way as in JSR-75, but one that would not change regardless of the underlying platform). The other types would only support flat storage.

    The system should also support a secure version of all basic types. The user would indicate it by prefixing "s" to the URL (i.e. "sfile://" instead of "file://"). The API would only require one PIN (introduced only once) to access any kind of secure object type.

    Implementation issues

    For the implementation of both plaintext and encrypted stores, I would use the functionality available on the underlying platforms:

    - Files. Available on all platforms (on J2ME only with JSR-75, but it is mandatory for our needs). The abstract file to actual file mapping is straightforward except for addressing issues.
    - RMS. This type of store, available on the J2ME (and Blackberry) platforms, is convenient for preferences and maybe message queues (though depending on performance or size requirements these could be implemented by means of normal files).
    - SharedPreferences. This type of storage, only available on Android, would match the needs of preferences.
    - SQLite databases. These could be used for message queues on Android (and maybe Blackberry).

    When it comes to encryption, some requirements should be met:

    - To ease the implementation, it will be carried out on a read/write operation basis: on streams (for files), RMS records, SharedPreferences key-value pairs, and SQLite database columns.
    - Every underlying storage object should use the same encryption key.
    - Handling of encrypted stores should be the same as for the unencrypted counterpart. The only difference (from the user's point of view) when accessing an encrypted store would be the addressing.
    - The user PIN provides access to any secure storage object, but changing it would not require decrypting and re-encrypting all the encrypted data.
    - Cryptographic capabilities of the underlying platform should be used whenever possible, so we would use: on J2ME, SATSA-CRYPTO if available (it is not mandatory) or the lightweight BouncyCastle cryptographic framework for J2ME; on Blackberry, the RIM Cryptographic API or BouncyCastle; on Android, JCE with an integrated cryptographic provider (BouncyCastle?).

    Doubts

    Having reached this point, I was struck by some doubts about which solution would be more convenient, taking into account the limitations of the platforms. These are some of my doubts:

    - Encryption algorithm for data. Would AES-128 be strong and fast enough? What alternatives for such a scenario would you suggest?
    - Encryption mode. I have read about the weakness of ECB encryption versus CBC, but in this case the former would have the advantage of random access to blocks, which is interesting for seek functionality on files. What type of encryption mode would you choose instead? Is stream encryption suitable for this case?
    - Key generation. There could be one key generated for each storage object (file, RMS RecordStore, etc.) or just one for all the objects of the same type. The first seems "safer", though it would require some extra space on the device. In your opinion, what would be the trade-offs of each?
    - Key storage. For this case, using a standard JKS (or PKCS#12) KeyStore file could be suited to store encryption keys, but I could also define a smaller structure (encryption-transformation / key data / checksum) that could be attached to each store (i.e. using additional files with the same name and a special extension for plain files, or embedding it inside other types of objects such as RMS record stores). Which approach would you prefer? And when it comes to using a standard KeyStore with multiple-key generation (given this is your preference), would it be better to use a record store per storage object or just one global KeyStore keeping all keys (i.e. using the URL identifier of the abstract storage object as alias)?
    - Master key. The use of a master key seems obvious. This key should be protected by the user PIN (introduced only once) and would allow access to the rest of the encryption keys (they would be encrypted by means of this master key). Changing the PIN would only require re-encrypting this key and not all the encrypted data. Where would you keep it, taking into account that if it got lost all data would no longer be accessible? What further considerations should I take into account?
    - Platform cryptography support. Do SATSA-CRYPTO-enabled J2ME phones really take advantage of some dedicated hardware acceleration (or another advantage I have not foreseen) such that this approach would be preferred (whenever possible) over just the BouncyCastle implementation? For the same reason, is the RIM Cryptographic API worth the license cost over BouncyCastle?

    Any comments, critiques, further considerations or different approaches are welcome.
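
    To make the AES-128/CBC option concrete, here is a minimal sketch against the BouncyCastle lightweight API (the framework the question already names for J2ME and Blackberry). Key and IV handling are placeholders, not a recommendation; the IV in particular should be unique per storage object:

        import org.bouncycastle.crypto.InvalidCipherTextException;
        import org.bouncycastle.crypto.engines.AESEngine;
        import org.bouncycastle.crypto.modes.CBCBlockCipher;
        import org.bouncycastle.crypto.paddings.PaddedBufferedBlockCipher;
        import org.bouncycastle.crypto.params.KeyParameter;
        import org.bouncycastle.crypto.params.ParametersWithIV;

        public class AesCbc {
            // key: 16 bytes for AES-128; iv: 16 bytes, unique per object.
            static byte[] process(boolean encrypt, byte[] key, byte[] iv, byte[] in)
                    throws InvalidCipherTextException {
                PaddedBufferedBlockCipher cipher =
                        new PaddedBufferedBlockCipher(new CBCBlockCipher(new AESEngine()));
                cipher.init(encrypt, new ParametersWithIV(new KeyParameter(key), iv));
                byte[] buf = new byte[cipher.getOutputSize(in.length)];
                int len = cipher.processBytes(in, 0, in.length, buf, 0);
                len += cipher.doFinal(buf, len);
                byte[] out = new byte[len];
                System.arraycopy(buf, 0, out, 0, len);
                return out;
            }
        }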

  • Looking for MDI Manager with tab grouping that allows show and hide of groups?

    - by Jeff Lundstrom
    I am looking for an MDI manager solution that allows documents to be grouped and shown/hidden programmatically. For example, with 3 document types (red, yellow and green), when you click a button the MDI manager shows only the red documents by hiding the other 2 types' tabs. None of the MDI managers (Actipro, Infragistics, etc.) I have looked at can do this. They require all documents to be visible... Anyone know of a good solution for this in C#? Thanks, Jeff

  • PDO fails to prepare a statement with over 13 placeholders

    - by Javier Parra
    Hello, this is the code I'm using:

        self::$DB->prepare($query, $types);

    It works when $query and $types are:

        // $query
        UPDATE Permisos SET
            empleado_id = ?, agregar_mensaje = ?, borrar_mensaje = ?,
            agregar_noticia = ?, borrar_noticia = ?, agregar_documento = ?,
            borrar_documento = ?, agregar_usuario = ?, borrar_usuario = ?,
            agregar_empresa = ?, borrar_empresa = ?, agregar_tarea = ?
        WHERE id = ?

        // $types
        Array
        (
            [0] => integer
            [1] => boolean
            [2] => boolean
            [3] => boolean
            [4] => boolean
            [5] => boolean
            [6] => boolean
            [7] => boolean
            [8] => boolean
            [9] => boolean
            [10] => boolean
            [11] => boolean
            [12] => integer
        )

    Everything works great, but when they are:

        // $query
        UPDATE Permisos SET
            empleado_id = ?, agregar_mensaje = ?, borrar_mensaje = ?,
            agregar_noticia = ?, borrar_noticia = ?, agregar_documento = ?,
            borrar_documento = ?, agregar_usuario = ?, borrar_usuario = ?,
            agregar_empresa = ?, borrar_empresa = ?, agregar_tarea = ?,
            borrar_tarea = ?
        WHERE id = ?

        // $types
        Array
        (
            [0] => integer
            [1] => boolean
            [2] => boolean
            [3] => boolean
            [4] => boolean
            [5] => boolean
            [6] => boolean
            [7] => boolean
            [8] => boolean
            [9] => boolean
            [10] => boolean
            [11] => boolean
            [12] => boolean
            [13] => integer
        )

    it fails with the following message:

        Warning: PDO::prepare() [pdo.prepare]: SQLSTATE[HY000]: General error:
        PDO::ATTR_STATEMENT_CLASS requires format array(classname, array(ctor_args));
        the classname must be a string specifying an existing class in
        C:\wamp\www\intratin\JP\includes\empleado\mapper\Permiso.php on line 137

    It doesn't matter which field I add or remove; it fails every time with more than 13 placeholders.
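
    A hedged diagnosis of the magic number: PDO::prepare()'s second argument is an array of driver options (attribute constant => value), not parameter types, and PDO::ATTR_STATEMENT_CLASS happens to be attribute constant 13, so the error only surfaces once the array grows a key 13. A sketch of dropping the second argument and binding types per placeholder instead ($values is assumed to hold the data):

        // Prepare without the $types array...
        $stmt = self::$DB->prepare($query);

        // ...and bind each value with an explicit PDO type instead.
        $map = array('integer' => PDO::PARAM_INT, 'boolean' => PDO::PARAM_BOOL);
        foreach ($values as $i => $value) {
            $stmt->bindValue($i + 1, $value, $map[$types[$i]]);  // placeholders are 1-indexed
        }
        $stmt->execute();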

  • Drupal 7: File field causes error with Dependable Dropdowns

    - by LoneWolfPR
    I'm building a form in a module using the Form API. I've had a couple of dependent dropdowns that have been working just fine. The code is as follows:

        $types = db_query('SELECT * FROM {touchpoints_metric_types}')->fetchAllKeyed(0, 1);
        $types = array('0' => '- Select -') + $types;
        $selectedType = isset($form_state['values']['metrictype']) ? $form_state['values']['metrictype'] : 0;

        $methods = _get_methods($selectedType);
        $selectedMethod = isset($form_state['values']['measurementmethod']) ? $form_state['values']['measurementmethod'] : 0;

        $form['metrictype'] = array(
            '#type' => 'select',
            '#title' => t('Metric Type'),
            '#options' => $types,
            '#default_value' => $selectedType,
            '#ajax' => array(
                'event' => 'change',
                'wrapper' => 'method-wrapper',
                'callback' => 'touchpoints_method_callback'
            )
        );

        $form['measurementmethod'] = array(
            '#type' => 'select',
            '#title' => t('Measurement Method'),
            '#prefix' => '<div id="method-wrapper">',
            '#suffix' => '</div>',
            '#options' => $methods,
            '#default_value' => $selectedMethod,
        );

    Here are the _get_methods and touchpoints_method_callback functions:

        function _get_methods($selected) {
            if ($selected) {
                $methods = db_query("SELECT * FROM {touchpoints_m_methods} WHERE mt_id=$selected")->fetchAllKeyed(0, 2);
            }
            else {
                $methods = array();
            }
            $methods = array('0' => "- Select -") + $methods;
            return $methods;
        }

        function touchpoints_method_callback($form, &$form_state) {
            return $form['measurementmethod'];
        }

    This all worked fine until I added a file field to the form. Here is the code I used for that:

        $form['metricfile'] = array(
            '#type' => 'file',
            '#title' => 'Attach a File',
        );

    Now that the file field is added, if I change the first dropdown it hangs with the "Please wait" message next to it, without ever loading the contents of the second dropdown. I also get the following error in my JavaScript console:

        Uncaught TypeError: Object function (a,b){return new p.fn.init(a,b,c)} has no method 'handleError'

    What am I doing wrong here?
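
    A hedged diagnosis: with a plain file input present, Drupal 7's AJAX framework switches to an iframe-based upload that relies on the jquery.form plugin, and older copies of that plugin call jQuery.handleError, a method removed from newer jQuery versions (e.g. when the jQuery Update module is installed) -- which matches the console error. Updating jquery.form is one route; another is letting Drupal manage the upload itself (the upload location below is an assumption):

        // managed_file handles its own upload and validation, avoiding the
        // plain-file + AJAX interaction described above.
        $form['metricfile'] = array(
            '#type' => 'managed_file',
            '#title' => t('Attach a File'),
            '#upload_location' => 'public://touchpoints/',
        );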

  • Dynamic Typed Table/Model in J2EE?

    - by Viele
    Hi, usually with J2EE when we create a Model, we define the fields and their types through XML or annotations before compilation time. Is there a way to change those at runtime? Or better, is it possible to create a new Model based on the user's input at runtime, such that the number of columns and the types of fields are dynamic (determined at runtime)? Help is much appreciated. Thank you.
