Search Results

Search found 58653 results on 2347 pages for 'transactional data'.


  • Saving twice doesn't update my object in JDO

    - by Javi
    Hello, I have an object persisted in the GAE datastore using JDO. The object looks like this:

        public class MyObject implements Serializable, StoreCallback {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            @Extension(vendorName = "datanucleus", key = "gae.encoded-pk", value = "true")
            private String id;

            @Persistent
            private String firstId;
            ...
        }

    As usual, when the object is stored for the first time, a new id value is generated for the identifier. If I don't provide a value for firstId, I need it to be set to the same value as the id. I don't want to solve this with a special getter that checks firstId for null and returns the id value, because I want to run queries against firstId. I can do it by saving the object twice (there is probably a better way to do this, but I'll do it this way until I find one). But it is not working: when I debug, I can see that result.firstId is set to the id value and it seems to be persisted, but when I look in the datastore, firstId is null (as it was when saved the first time). This save method is in my DAO, and it is called from another save method in the service annotated with @Transactional. Does anyone have any idea why the second object is not persisted properly?

        @Override
        public MyObject save(MyObject obj) {
            PersistenceManager pm = JDOHelper.getPersistenceManagerFactory("transactions-optional");
            MyObject result = pm.makePersistent(obj);
            if (result.getFirstId() == null) {
                result.setFirstId(result.getId());
                result = pm.makePersistent(result);
            }
            return result;
        }

    Thanks.
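    One frequent cause of the symptom described (the field looks set in the debugger but is null in the datastore) is that the second change is never flushed because the PersistenceManager is never flushed or closed. The sketch below is an illustration, not the poster's code; it assumes the same factory name as above and that the manager is meant to be obtained via getPersistenceManager().

        // Sketch: persist, copy the generated id into firstId, then flush and close
        // so the second change actually reaches the datastore.
        // (Uses javax.jdo.JDOHelper and javax.jdo.PersistenceManager, as in the question.)
        public MyObject save(MyObject obj) {
            PersistenceManager pm = JDOHelper
                    .getPersistenceManagerFactory("transactions-optional")
                    .getPersistenceManager();
            try {
                MyObject result = pm.makePersistent(obj);   // first write assigns the id
                if (result.getFirstId() == null) {
                    result.setFirstId(result.getId());
                    pm.flush();                              // push the dirty field to the datastore
                }
                return result;                               // detach or copy here if callers use it after close
            } finally {
                pm.close();
            }
        }

    If the DAO is instead supposed to take part in the service's @Transactional transaction, the same two writes belong inside that transaction rather than in a standalone manager.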

    Read the article

  • hex value in field/row terminator for bulk insert

    - by TheObserver
    I'm running SQL Server 2005 Express. And I'm trying to do a bulk insert/import of a data file with a field/row terminator that uses a hexadecimal value 0x001. How should I represent it in a bulk insert command? I have something like:

        bulk insert xxx.dbo.[yyy]
        from 'D:\zzz\zzz.dat'
        with
        (
            CODEPAGE = 'RAW',
            FIELDTERMINATOR = '=|=',
            ROWTERMINATOR = '=|=\001\n',
            KEEPNULLS
        );

    which results in:

        Msg 4863, Level 16, State 1, Line 7
        Bulk load data conversion error (truncation) for row 1, column 3 (code).

    Read the article

  • Hibernate CRUD à la Ruby on Rails' Scaffolding

    - by schonarth
    Guys, do you know of any tool that would do something like Ruby on Rails' scaffolding (create simple CRUD pages for any particular class to allow quickly populating a database with dummy data), but using Java classes with Hibernate for database access and JSP/JSF for the pages? It is a drag when you are programming one part of an application but need data that can only be added through another part that is not ready yet, or only very cumbersomely by inserting it directly into the DB.
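    For the interim need of populating the database with dummy data, a small hand-rolled generic DAO often goes a long way; below is a minimal sketch, assuming a configured Hibernate SessionFactory and already-mapped entity classes (all names are illustrative, not from any particular scaffolding tool).

        // Illustrative generic helper for seeding and listing test data via Hibernate 3.
        import java.io.Serializable;
        import java.util.List;
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.Transaction;

        public class DummyDataDao<T> {
            private final SessionFactory sessionFactory;
            private final Class<T> type;

            public DummyDataDao(SessionFactory sessionFactory, Class<T> type) {
                this.sessionFactory = sessionFactory;
                this.type = type;
            }

            public Serializable save(T entity) {
                Session session = sessionFactory.openSession();
                Transaction tx = session.beginTransaction();
                try {
                    Serializable id = session.save(entity);  // insert the dummy row
                    tx.commit();
                    return id;
                } catch (RuntimeException e) {
                    tx.rollback();
                    throw e;
                } finally {
                    session.close();
                }
            }

            @SuppressWarnings("unchecked")
            public List<T> findAll() {
                Session session = sessionFactory.openSession();
                try {
                    return session.createCriteria(type).list();  // list everything for a quick CRUD page
                } finally {
                    session.close();
                }
            }
        }

    A throwaway JSP/JSF page can then call save() from a plain form and loop over findAll() for the listing until the real UI is ready.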

    Read the article

  • filesystem compatible with freenas and windows

    - by Daniel
    Hi all, I'm planning on using FreeNAS (I was considering Openfiler, but FreeNAS seems simpler) for my home NAS box running off ESXi. I have managed to get local SATA drives to mount in ESXi (http://serverfault.com/questions/216902/esxi-add-datastore-without-partitioning). I've had one of the drives fail on me before, and I was able to retrieve most of the data off it using Windows tools (I'm not much of a Linux guy; I know enough to be dangerous!). If I go the FreeNAS route, then in the event that something goes bad, what would be the best file system to use so that I could pop the drive out of the FreeNAS box (VM), put it in another PC running Windows, and run various recovery tools to get the data back? All in all it's not a major problem if I lose the data, it would just be a bit annoying, so I'm not looking for suggestions around backing up etc. I was considering using NTFS, which the drives are already formatted as, but it appears that while FreeNAS does support NTFS, it's a bit buggy and not 100% reliable. Does anyone know if this is still true? I read that on a forum somewhere.

    Read the article

  • Cannot change PostgreSQL port

    - by Jerec TheSith
    I run PostgreSQL 8.4 as a service on a CentOS 6.2 server. I set port = 21444 and listen_addresses = '*' in /var/lib/pgsql/data/postgresql.conf, and I changed 5432 to 21444 in postmaster.opts and restarted Postgres, but when I run netstat -lntp, PostgreSQL is still listening on port 5432:

        tcp   0   0 0.0.0.0:5432   0.0.0.0:*   LISTEN   20276/postmaster

    When I restart PostgreSQL I get a write error warning on /proc/self/oom_adj, but the service starts anyway. I read that this error can appear on virtualized servers, but I don't really know whether it has any impact on the PostgreSQL listening port. The correct pgsql config file is loaded from /var/lib/pgsql/data:

        [root@srv02 ~]# ps -ef | grep postgres
        root      1358 22140  0 09:42 pts/0    00:00:00 grep postgres
        postgres  9519     1  0 Mar16 ?        00:00:01 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
        postgres  9573  9519  0 Mar16 ?        00:00:00 postgres: logger process
        postgres  9575  9519  0 Mar16 ?        00:00:05 postgres: writer process
        postgres  9576  9519  0 Mar16 ?        00:00:03 postgres: wal writer process
        postgres  9577  9519  0 Mar16 ?        00:00:01 postgres: autovacuum launcher process
        postgres  9578  9519  0 Mar16 ?        00:00:01 postgres: stats collector process

    Any thoughts? Thanks, Jerec

    Read the article

  • config sqlcachedependency

    - by hotyi
    I know SqlCacheDependency can push the modified data from the DB to the cache. I want to know whether I can configure SqlCacheDependency not to push the modified data immediately, but to push it every minute instead.

    Read the article

  • anonymous function variable scope [js, ajax]

    - by arthurprs
        $(".delete").click(function() {
            var thesender = this;
            $(thesender).text("Del...");
            $.getJSON("ajax.php", {}, function(data) {
                if (data["result"])
                    $(thesender).remove();  // variable defined outside
                else
                    alert('Error!');
            });
            return false;
        });

    Can this cause problems if the user clicks on another ".delete" before the ajax callback is called?

    Read the article

  • What is the fastest way to scale and display an image in Python?

    - by Knut Eldhuset
    I am required to display a two-dimensional numpy.array of int16 at 20 fps or so. Using Matplotlib's imshow chokes on anything above 10 fps. There obviously are some issues with scaling and interpolation. I should add that the dimensions of the array are not known, but will probably be around thirty by four hundred. This is data from a sensor that is supposed to have a real-time display, so the data has to be re-sampled on the fly.

    Read the article

  • ComboBox in Flex

    - by Ravi K Chowdary
    Hi, I have a ComboBox with multi-selection. When I click the Add button, whatever data is selected in the ComboBox has to be displayed in another ComboBox. Please check the code; can anyone help me with this? Thanks, Ravi

    Read the article

  • Ajax request with JQuery on page unload

    - by Rob
    I'm trying to do this:

        $(window).unload(function() {
            $.ajax({
                type: "POST",
                url: "http://localhost:8888/test.php?",
                data: "test",
                success: function(msg) {
                    alert("Data Saved: " + msg);
                }
            });
            alert(c);
        });

    However, the success alert is never shown, nor does this request seem to be even hitting the server. What am I doing wrong? Thanks!

    Read the article

  • Ideas for multiplatform encrypted java mobile storage system

    - by Fernando Miguélez
    Objective

    I am currently designing the API for a multiplatform storage system that would offer the same interface and capabilities across the following supported mobile Java platforms:

    - J2ME. Minimum configuration/profile CLDC 1.1/MIDP 2.0 with support for some necessary JSRs (JSR-75 for file storage).
    - Android. No minimum platform version decided yet, but it could rather likely be API level 7.
    - Blackberry. It would use the same base source as J2ME but take advantage of some advanced capabilities of the platform. No minimum configuration decided yet (maybe 4.6 because of the 64 KB limitation for RMS on 4.5).

    Basically the API would sport three kinds of stores:

    - Files. These would allow standard directory/file manipulation (read/write through streams, create, mkdir, etc.).
    - Preferences. A special store that handles properties accessed through keys (similar to a plain old Java properties file but supporting some improvements, such as the different value data types of SharedPreferences on the Android platform).
    - Local message queues. This store would offer basic message queue functionality.

    Considerations

    Inspired by JSR-75, all types of stores would be accessed in a uniform way by means of a URL following RFC 1738 conventions, but with custom-defined prefixes (i.e. "file://" for files, "prefs://" for preferences or "queue://" for message queues). The address would refer to a virtual location that would be mapped to a physical storage object by each mobile platform implementation. Only files would allow hierarchical storage (folders) and access to external storage memory cards (by means of a unit name, the same way as in JSR-75, but one that would not change regardless of the underlying platform). The other types would only support flat storage.

    The system should also support a secure version of all basic types. The user would indicate it by prefixing "s" to the URL (i.e. "sfile://" instead of "file://"). The API would only require one PIN (introduced only once) to access any kind of secure object type.

    Implementation issues

    For the implementation of both plaintext and encrypted stores, I would use the functionality available on the underlying platforms:

    - Files. Available on all platforms (on J2ME only with JSR-75, but that is mandatory for our needs). The abstract-file to actual-file mapping is straightforward except for addressing issues.
    - RMS. This type of store, available on the J2ME (and Blackberry) platforms, is convenient for preferences and maybe message queues (though depending on performance or size requirements these could be implemented by means of normal files).
    - SharedPreferences. This type of storage, only available on Android, would match the preferences needs.
    - SQLite databases. These could be used for message queues on Android (and maybe Blackberry).

    When it comes to encryption, some requirements should be met:

    - To ease the implementation it will be carried out on a per read/write operation basis: on streams (for files), RMS records, SharedPreferences key-value pairs, and SQLite database columns.
    - Every underlying storage object should use the same encryption key.
    - Handling of encrypted stores should be the same as for the unencrypted counterpart. The only difference (from the user's point of view) when accessing an encrypted store would be the addressing.
    - The user PIN provides access to any secure storage object, but changing it must not require decrypting and re-encrypting all the encrypted data.
    - Cryptographic capabilities of the underlying platform should be used whenever possible, so we would use: on J2ME, SATSA-CRYPTO if it is available (it is not mandatory) or the lightweight BouncyCastle cryptographic framework for J2ME; on Blackberry, the RIM Cryptographic API or BouncyCastle; on Android, JCE with the integrated cryptographic provider (BouncyCastle?).

    Doubts

    Having reached this point I was struck by some doubts about which solution would be more convenient, taking into account the limitations of the platforms. These are some of my doubts:

    - Encryption algorithm for data. Would AES-128 be strong and fast enough? What alternatives would you suggest for such a scenario?
    - Encryption mode. I have read about the weakness of ECB encryption versus CBC, but in this case the former would have the advantage of random access to blocks, which is interesting for seek functionality on files. What type of encryption mode would you choose instead? Is stream encryption suitable for this case?
    - Key generation. There could be one key generated for each storage object (file, RMS RecordStore, etc.) or just one for all the objects of the same type. The first seems "safer", though it would require some extra space on the device. In your opinion, what would be the trade-offs of each?
    - Key storage. A standard JKS (or PKCS#12) KeyStore file could be suited to storing the encryption keys, but I could also define a smaller structure (encryption transformation / key data / checksum) that could be attached to each store (i.e. using additional files with the same name and a special extension for plain files, or embedded inside other types of objects such as RMS record stores). Which approach would you prefer? And when it comes to using a standard KeyStore with multiple-key generation (given that is your preference), would it be better to use a record store per storage object or just a global KeyStore keeping all keys (i.e. using the URL identifier of the abstract storage object as the alias)?
    - Master key. The use of a master key seems obvious. This key should be protected by the user PIN (introduced only once) and would allow access to the rest of the encryption keys (they would be encrypted by means of this master key). Changing the PIN would only require re-encrypting this key and not all the encrypted data. Where would you keep it, taking into account that if it got lost all data would no longer be accessible? What further considerations should I take into account?
    - Platform cryptography support. Do SATSA-CRYPTO-enabled J2ME phones really take advantage of dedicated hardware acceleration (or some other advantage I have not foreseen), and would this approach be preferred (whenever possible) over a plain BouncyCastle implementation? For the same reason, is the RIM Cryptographic API worth the license cost over BouncyCastle?

    Any comments, criticism, further considerations or different approaches are welcome.
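    On the master-key doubt, one common arrangement is to derive a key-encryption key from the PIN with PBKDF2 and use it only to wrap a randomly generated master key, so that a PIN change only re-wraps the master key. The sketch below is illustrative only, assuming the BouncyCastle lightweight API already mentioned and AES-128 in CBC mode; the wrapper class and its parameters are made up for illustration, while the BouncyCastle types themselves are real.

        // Illustrative sketch: wrap/unwrap a random AES master key under a PIN-derived key
        // using the BouncyCastle lightweight API (usable on J2ME as well as Android/Blackberry).
        import org.bouncycastle.crypto.PBEParametersGenerator;
        import org.bouncycastle.crypto.engines.AESEngine;
        import org.bouncycastle.crypto.generators.PKCS5S2ParametersGenerator;
        import org.bouncycastle.crypto.modes.CBCBlockCipher;
        import org.bouncycastle.crypto.paddings.PaddedBufferedBlockCipher;
        import org.bouncycastle.crypto.params.KeyParameter;
        import org.bouncycastle.crypto.params.ParametersWithIV;

        public final class MasterKeyWrapper {

            // Derive a 128-bit key-encryption key from the PIN (PBKDF2, SHA-1 based).
            static KeyParameter deriveKek(char[] pin, byte[] salt, int iterations) {
                PKCS5S2ParametersGenerator gen = new PKCS5S2ParametersGenerator();
                gen.init(PBEParametersGenerator.PKCS5PasswordToUTF8Bytes(pin), salt, iterations);
                return (KeyParameter) gen.generateDerivedParameters(128);
            }

            // Encrypt (wrap = true) or decrypt (wrap = false) the master key with AES/CBC/PKCS7.
            static byte[] process(boolean wrap, KeyParameter kek, byte[] iv, byte[] data) throws Exception {
                PaddedBufferedBlockCipher cipher =
                        new PaddedBufferedBlockCipher(new CBCBlockCipher(new AESEngine()));
                cipher.init(wrap, new ParametersWithIV(kek, iv));
                byte[] buf = new byte[cipher.getOutputSize(data.length)];
                int n = cipher.processBytes(data, 0, data.length, buf, 0);
                n += cipher.doFinal(buf, n);
                byte[] out = new byte[n];
                System.arraycopy(buf, 0, out, 0, n);
                return out;
            }
        }

    The per-object (or per-type) keys are then stored encrypted under the master key, so changing the PIN means deriving a new key-encryption key and re-wrapping only the 16-byte master key, which matches the requirement above.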

    Read the article

  • C/C++-Library for EEPROM wear-leveling under Linux?

    - by Martin C.
    Hi, does anybody know of a library for storing data securely in an 8 KB EEPROM attached over the I2C interface? I am especially interested in wear-leveling, as I have a write-intensive application where the EEPROM should/must be used as NVRAM for often-changing measurement data. Thanks in advance, Martin

    Read the article

  • All embedded databases fail to open connections

    - by rsteckly
    Hi, I'm working on a WinForms desktop application that needs to store data. I made the really bad decision to try to embed a database. I've tried:

    - SQLite
    - VistaDB
    - SQL Server Compact

    In each case, I was able to generate an Entity Framework model over the basic schema I've created. I have an event that adds data, which I've been using to test these databases. Well, I kept adding a new record using EF and finding it didn't actually insert a record. In debugging, I checked the context object to see what was happening. It turns out that it is saying "the underlying provider failed to open," or something to that effect. It was not throwing an exception, just not inserting a record. The same thing has happened for all 3 embedded databases, prompting me to get it through my dense head that there has to be something wrong with my configuration.

    Well, I tried to write some basic SQL using a SqlConnection and SqlCommand. This time it throws an exception. In the SQL Server Compact case, it now says:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server.
        The server was not found or was not accessible. Verify that the instance name is correct and that
        SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces,
        error: 26 - Error Locating Server/Instance Specified)

    I thought perhaps the problem was the path in app.config. So I changed the connection string to:

    Note that I simplified the path away from anything that might have spaces and avoided using the Data Directory nonsense that causes problems when the debugging directory does not match the preconfigured value for the data directory. I'm running Windows 7; I thought perhaps it might be an access issue, so I tried running VS 2010 in Administrator mode. No luck. I also installed SQL Server Compact SP2, thinking this might be a bug. No luck. Anyway, I'm ready to pull my hair out. I'm on a tight deadline for this thing and didn't expect to spend the day trying to figure out what is going on.

    Read the article

  • The usage of Cassandra's internal keyspace "system"

    - by knorv
    The default Cassandra system keyspace, system, is present in all Cassandra installations. Judging from the output of the describe keyspace command, the keyspace is used partly for "persistent metadata for the local node" (LocationInfo) and partly for "hinted handoff data". What persistent metadata for the local node is stored in system/LocationInfo? What is the definition of hinted handoff in Cassandra terminology? What hinted handoff data is stored in the system keyspace?

    Read the article

  • What is the fastest way to move 1 petabyte from one storage system to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something I should solve by myself, but as you will see it's a bit difficult. A small description:

    Now:
    - Storage = 1 PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10 GigE network (Lustre 1.6)
    - 100 compute nodes with 2x InfiniBand
    - 1 InfiniBand switch with 36 ports

    After:
    - Storage = previous storage + another 1 PB using DDN S2A 990 or LSI E5400 (still to decide) (Lustre 2.0), 8 OSS, 10 GigE network
    - 100 compute nodes with 2x InfiniBand

    Previous experience: transferred 120 TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, big problem here: using big mathematical equations I conclude that we are going to need 1 month to transfer the data from one side to the new one. During this time the researchers will need to step back, and I'm personally not happy with this. I mention that we have InfiniBand connections because I think there may be a chance to use them to transfer the data, using 18 compute nodes (18 * 2 IB = 36 ports) to copy from one storage to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it just burns up, it will go faster than using 10GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well, so there is no need to go through 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks.

    Note 1: Zoredache, we can divide it into two blocks, (A) 600 TB and (B) 400 TB. The idea is to move (A) to the new storage, which is Lustre 2.0 formatted, then format where (A) was with Lustre 2.0, move (B) to this Lustre 2.0 block, and extend it with the space where (B) was. This way we will end up with (A) and (B) on separate filesystems, with 1 PB each.

    Read the article

  • Something like INotifyCollectionChanged that fires on XML file changes

    - by netmajor
    Is it possible to implement INotifyCollectionChanged, or another interface like IObservable, so that filtered data bound from an XML file is updated when the file changes? I see examples with properties or collections, but what about file changes? I have this code to filter and bind the XML data to a list box:

        XmlDocument channelsDoc = new XmlDocument();
        channelsDoc.Load("RssChannels.xml");
        XmlNodeList channelsList = channelsDoc.GetElementsByTagName("channel");
        this.RssChannelsListBox.DataContext = channelsList;

    Read the article

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a unix socket file. To start the fcgiwrap daemon, I run (as a daemontools daemon):

        setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    The problem is that my web server runs as the user www-data and not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git, and read-only for the non-owner. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to set the permissions of unix socket files, which is quite annoying. Moreover, if I manage to make the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error:

        Failed to bind: Address already in use

    The only solution I found is to start the server the following way:

        rm -f fastcgi.sock   # Ensure that the socket doesn't already exist
        (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    which is far from the most elegant solution. Can you think of anything better? Thanks

    Read the article

  • Help with iPhone dev - beyond bounds error

    - by dusk
    I'm just learning iPhone development, so please forgive me for what is probably a beginner error. I've searched around and haven't found anything specific to my problem, so hopefully it is an easy fix. The book has examples of building a table from data hard-coded via an array. Unfortunately it never really tells you how to get data from a URL, so I've had to look that up, and that is where my problems show up. When I debug in Xcode, it seems like the count is correct, so I don't understand why it is going out of bounds. This is the error message:

        2010-05-03 12:50:42.705 Simple Table[3310:20b] *** Terminating app due to uncaught exception
        'NSRangeException', reason: '*** -[NSCFArray objectAtIndex:]: index (1) beyond bounds (0)'

    My URL returns the following string:

        first,second,third,fourth

    And here is the iPhone code, with the book's working example commented out:

        #import "Simple_TableViewController.h"

        @implementation Simple_TableViewController
        @synthesize listData;

        - (void)viewDidLoad {
            /*
            //working code from book
            NSArray *array = [[NSArray alloc] initWithObjects:@"Sleepy", @"Sneezy", @"Bashful", @"Bashful",
                @"Happy", @"Doc", @"Grumpy", @"Thorin", @"Dorin", @"Norin", @"Ori", @"Balin", @"Dwalin",
                @"Fili", @"Kili", @"Oin", @"Gloin", @"Bifur", @"Bofur", @"Bombur", nil];
            self.listData = array;
            [array release];
            */

            //code from interwebz that crashes
            NSString *urlstr = [[NSString alloc] initWithFormat:@"http://www.mysite.com/folder/iphone-test.php"];
            NSURL *url = [[NSURL alloc] initWithString:urlstr];
            NSString *ans = [NSString stringWithContentsOfURL:url];
            NSArray *listItems = [ans componentsSeparatedByString:@","];
            self.listData = listItems;
            [urlstr release];
            [url release];
            [ans release];
            [listItems release];
            [super viewDidLoad];
        }

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
            self.listData = nil;
            [super viewDidUnload];
        }

        - (void)dealloc {
            [listData release];
            [super dealloc];
        }

        #pragma mark -
        #pragma mark Table View Data Source Methods

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return [self.listData count];
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *SimpleTableIdentifier = @"SimpleTableIdentifier";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:SimpleTableIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:SimpleTableIdentifier] autorelease];
            }
            NSUInteger row = [indexPath row];
            cell.textLabel.text = [listData objectAtIndex:row];
            return cell;
        }

        @end

    Read the article

  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL Server for a while, and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have normally been able to correlate this spike with SUSPENDED processes in SQL Server Management Studio when I execute "exec sp_who2". The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

    - System drive (Windows) on RAID 1 with two 280 GB drives
    - SQL is on a RAID 10 (2 mirrored drives of 280 GB in two different spans)

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13 GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying around with:

    1. Checking for indexes and reindexing some tables
    2. Adding an additional RAID 1 (with 2 new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID

    For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off of the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction logs before writing to the database. But on the flip side, we'll be reducing both the size of the disks as well as the amount of data written to the RAID 10, which is where all of the "meat" is, thereby increasing that RAID's performance for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this thing wrong. Finally, are we just wasting our time? Should we instead be focusing our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?

    Read the article

  • When using delegates, need better way to do sequential processing

    - by Padawan
    I have a class WebServiceCaller that uses NSURLConnection to make asynchronous calls to a web service. The class provides a delegate property, and when the web service call is done it calls a method webServiceDoneWithXXX on the delegate. There are several web service methods that can be called, two of which are, say, GetSummary and GetList. The classes that use WebServiceCaller initially need both the summary and the list, so they are written like this:

        -(void)getAllData {
            [webServiceCaller getSummary];
        }

        -(void)webServiceDoneWithGetSummary {
            [webServiceCaller getList];
        }

        -(void)webServiceDoneWithGetList {
            ...
        }

    This works, but there are at least two problems:

    1. The calls are split across delegate methods, so it's hard to see the sequence at a glance, but more importantly it's hard to control or modify the sequence.
    2. Sometimes I want to call just GetSummary and not also GetList, so I would have to use an ugly class-level state variable that tells webServiceDoneWithGetSummary whether to call GetList or not.

    Assume that GetList cannot be done until GetSummary completes and returns some data which is used as input to GetList. Is there a better way to handle this and still get asynchronous calls?

    Update based on Matt Long's answer: Using notifications instead of a delegate, it looks like I can solve problem #2 by setting a different selector depending on whether I want the full sequence (GetSummary+GetList) or just GetSummary. Both observers would still use the same notification name when calling GetSummary. I would have to write two separate methods to handle GetSummaryDone instead of using a single delegate method (where I would have needed some class-level variable to tell whether to then call GetList).

        -(void)getAllData {
            [[NSNotificationCenter defaultCenter] addObserver:self
                     selector:@selector(getSummaryDoneAndCallGetList:)
                         name:kGetSummaryDidFinish object:nil];
            [webServiceCaller getSummary];
        }

        -(void)getSummaryDoneAndCallGetList {
            [NSNotificationCenter removeObserver]
            //process summary data
            [[NSNotificationCenter defaultCenter] addObserver:self
                     selector:@selector(getListDone:)
                         name:kGetListDidFinish object:nil];
            [webServiceCaller getList];
        }

        -(void)getListDone {
            [NSNotificationCenter removeObserver]
            //process list data
        }

        -(void)getJustSummaryData {
            [[NSNotificationCenter defaultCenter] addObserver:self
                     selector:@selector(getJustSummaryDone:) //different selector but
                         name:kGetSummaryDidFinish object:nil]; //same notification name
            [webServiceCaller getSummary];
        }

        -(void)getJustSummaryDone {
            [NSNotificationCenter removeObserver]
            //process summary data
        }

    I haven't actually tried this yet. It seems better than having state variables and if-then statements, but you have to write more methods. I still don't see a solution for problem 1.

    Read the article
