Search Results

Search found 5295 results on 212 pages for 'transaction scope'.


  • How to set Atomikos to not write to console logs?

    - by peter
    Atomikos is quite verbose when used. There seem to be lots of INFO messages (mostly irrelevant for me) that the transaction manager writes out to the console. The setting in transaction.properties that is supposed to control the level of messaging, com.atomikos.icatch.console_log_level, does not seem to have any effect: even when it is set to WARN (or ERROR) the INFO messages are still logged. The log4j settings for com.atomikos and atomikos also seem to be ignored. Has anyone managed to turn off the INFO logs on the console with Atomikos? How? Thanks, Peter
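
    One thing worth trying, sketched below on the assumption that Atomikos routes its console output through Log4j 1.x in this particular setup, is to raise the threshold of the com.atomikos logger hierarchy programmatically before the transaction manager starts:

        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;

        public class AtomikosLogSilencer {
            // Call this early, before the transaction manager initializes.
            public static void silence() {
                // Raises the threshold for every logger under these hierarchies,
                // so INFO messages are dropped and only WARN/ERROR pass through.
                Logger.getLogger("com.atomikos").setLevel(Level.WARN);
                Logger.getLogger("atomikos").setLevel(Level.WARN);
            }
        }

    If Atomikos is writing to the console directly rather than through Log4j, this will have no effect, which would explain why the log4j settings appear to be ignored.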

    Read the article

  • Clojure / HBase: How to Import HBaseTestingUtility in v0.94.6.1

    - by David Williams
    In Clojure, if I want to start a test cluster using the HBase testing utility, I have to annotate my dependencies with:

        [org.apache.hbase/hbase "0.92.2" :classifier "tests" :scope "test"]

    First of all, I have no idea what this means. According to Leiningen's sample project.clj:

        ;; Dependencies are listed as [group-id/name version]; in addition
        ;; to keywords supported by Pomegranate, you can use :native-prefix
        ;; to specify a prefix. This prefix is used to extract natives in
        ;; jars that don't adhere to the default "<os>/<arch>/" layout that
        ;; Leiningen expects.

    Question 1: What does that mean? Question 2: If I upgrade the version:

        [org.apache.hbase/hbase "0.94.6.1" :classifier "tests" :scope "test"]

    then I receive a ClassNotFoundException:

        Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration

    What's going on here and how do I fix it?

    Read the article

  • Restore partitioned database into multiple filegroups

    - by Renju
    Does anyone have a query to restore a partitioned DB that has multiple filegroups? In the restore options in SSMS I need to manually edit the path of every filegroup under the "Restore As" option, which is a little tedious since the database has more than 150 filegroups. For example:

        USE master
        GO
        -- First determine the number and names of the files in the backup.
        RESTORE FILELISTONLY
        FROM MyNwind_1
        -- Restore the files for MyNwind.
        RESTORE DATABASE MyNwind
        FROM MyNwind_1
        WITH NORECOVERY,
        MOVE 'MyNwind_data_1' TO 'D:\MyData\MyNwind_data_1.mdf',
        MOVE 'MyNwind_data_2' TO 'D:\MyData\MyNwind_data_2.ndf'
        GO
        -- Apply the first transaction log backup.
        RESTORE LOG MyNwind
        FROM MyNwind_log1
        WITH NORECOVERY
        GO
        -- Apply the last transaction log backup.
        RESTORE LOG MyNwind
        FROM MyNwind_log2
        WITH RECOVERY
        GO

    Here I need to specify a MOVE clause for every filegroup, which is a tedious task when there are more than a hundred of them:

        MOVE 'MyNwind_data_1' TO 'D:\MyData\MyNwind_data_1.mdf',
        MOVE 'MyNwind_data_2' TO 'D:\MyData\MyNwind_data_2.ndf'

    I need to move the files into the path I provide as a parameter. Please help. Regards, Renju http://blog.renjucool.com

    Read the article

  • How to efficiently SELECT rows from a database table based on a selected set of values

    - by Chau Chee Yang
    I have a transaction table of 1 million rows. The table has a field named "Code" that holds the customer's ID, and there are about 10,000 different customer codes. I have a GUI that lets the user render a report from the transaction table; the user may select an arbitrary number of customers. I tried the IN operator first, and it works for a few customers:

        SELECT * FROM TRANS_TABLE WHERE CODE IN ('...', '...', '...')

    I quickly run into problems if I select a few thousand customers, because there is a limit on the IN operator. An alternative is to create a temporary table with a single CODE field, insert the selected customer codes into it with INSERT statements, and then join:

        SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE=B.CODE)

    This works nicely for huge selections. However, there is performance overhead in creating the temporary table, inserting the codes, and dropping the temporary table. Are you aware of a better solution to handle this situation?
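
    For what it's worth, a minimal sketch of the temporary-table approach from a Java/JDBC client, assuming the table TEMP already exists (for example as a session-scoped temporary table) and batching the inserts to limit round trips; the table and column names are only illustrative:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.List;

        public class CustomerReportQuery {
            public void run(Connection conn, List<String> selectedCodes) throws SQLException {
                // Push the selected codes into the temporary table in one batch.
                try (PreparedStatement insert =
                        conn.prepareStatement("INSERT INTO TEMP (CODE) VALUES (?)")) {
                    for (String code : selectedCodes) {
                        insert.setString(1, code);
                        insert.addBatch();
                    }
                    insert.executeBatch();
                }
                // Join against the temporary table instead of a giant IN list.
                try (PreparedStatement query = conn.prepareStatement(
                        "SELECT A.* FROM TRANS_TABLE A INNER JOIN TEMP B ON (A.CODE = B.CODE)");
                     ResultSet rs = query.executeQuery()) {
                    while (rs.next()) {
                        // ... consume the report rows
                    }
                }
            }
        }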

    Read the article

  • Can't insert a record into an Oracle database using C#

    - by Gya
    try
    {
        int val4 = Convert.ToInt32(tbGrupa.Text);
        string MyConString = "Data Source=**;User ID=******;Password=*****";
        OracleConnection conexiune = new OracleConnection(MyConString);
        OracleCommand comanda = new OracleCommand();
        comanda.Connection = conexiune;
        conexiune.Open();
        comanda.Transaction = conexiune.BeginTransaction();
        int id_stud = Convert.ToInt16(tbCodStud.Text);
        string nume = tbNume.Text;
        string prenume = tbPrenume.Text;
        string initiala_tatalui = tbInitiala.Text;
        string email = tbEmail.Text;
        string facultate = tbFac.Text;
        int grupa = Convert.ToInt16(tbGrupa.Text);
        string serie = tbSeria.Text;
        string forma_de_inv = tbFormaInvatamant.Text;
        DateTime data_acceptare_coordonare = dateTimePicker1.Value;
        DateTime data_sustinere_licenta = dateTimePicker2.Value;
        string sustinere = tbSustinereLicenta.Text;
        string parola_acces = tbParola.Text;
        try
        {
            comanda.Parameters.AddWithValue("id_stud", id_stud);
            comanda.Parameters.AddWithValue("nume", nume);
            comanda.Parameters.AddWithValue("prenume", prenume);
            comanda.Parameters.AddWithValue("initiala_tatalui", initiala_tatalui);
            comanda.Parameters.AddWithValue("facultate", facultate);
            comanda.Parameters.AddWithValue("email", email);
            comanda.Parameters.AddWithValue("seria", serie);
            comanda.Parameters.AddWithValue("grupa", grupa);
            comanda.Parameters.AddWithValue("forma_de_inv", forma_de_inv);
            comanda.Parameters.AddWithValue("data_acceptare_coordonare", data_acceptare_coordonare);
            comanda.Parameters.AddWithValue("data_sustinere_licenta", data_sustinere_licenta);
            comanda.Parameters.AddWithValue("sustinere_licenta", sustinere);
            comanda.Parameters.AddWithValue("parola_acces", parola_acces);
            comanda.Transaction.Commit();
            MessageBox.Show("Studentul " + tbNume.Text + " " + tbPrenume.Text + " a fost adaugat în baza de date!");
        }
        catch (Exception er)
        {
            comanda.Transaction.Rollback();
            MessageBox.Show("ER1.1:" + er.Message);
            MessageBox.Show("ER1.2:" + er.StackTrace);
        }
        finally
        {
            conexiune.Close();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show("ER2.1:" + ex.Message);
        MessageBox.Show("ER2.2:" + ex.StackTrace);
    }

    Read the article

  • Java looping - declaration of an object outside / inside the loop

    - by lisak
    When looping, for instance for (int j = 0; j < 1000; j++) {}, and I need to instantiate 1000 objects, how does declaring the object inside the loop differ from declaring it outside the loop?

        for (int j = 0; j < 1000; j++) { Object obj; obj = ... }

    vs

        Object obj;
        for (int j = 0; j < 1000; j++) { obj = ... }

    It's obvious that the object is accessible either only from the loop scope or from the surrounding scope. But I don't understand the performance question, garbage collection, etc. What is the best practice? Thank you
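
    A small sketch of the two placements, assuming the loop body really does create a fresh object on each iteration; in both versions 1000 instances are allocated, and the only difference is the scope of the reference variable:

        public class LoopDeclarationDemo {
            public static void main(String[] args) {
                // Declared inside the loop: the reference is scoped to one iteration,
                // and each instance becomes unreachable as soon as that iteration ends.
                for (int j = 0; j < 1000; j++) {
                    Object inside = new Object();
                }

                // Declared outside the loop: the reference outlives the loop, but the
                // loop still allocates 1000 instances; only the last one stays reachable.
                Object outside = null;
                for (int j = 0; j < 1000; j++) {
                    outside = new Object();
                }
            }
        }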

    Read the article

  • Does Ivy's url resolver support transitive retrieval?

    - by Sean
    For some reason I can't seem to resolve the dependencies of my dependencies when using a url resolver to specify a repository's location. However, when using the ibiblio resolver, I am able to retrieve them. For example: <!-- Ivy File --> <ivy-module version="1.0"> <info organisation="org.apache" module="chained-resolvers"/> <dependencies> <dependency org="commons-lang" name="commons-lang" rev="2.0" conf="default"/> <dependency org="checkstyle" name="checkstyle" rev="5.0"/> </dependencies> </ivy-module> <!-- ivysettings file --> <ivysettings> <settings defaultResolver="chained"/> <resolvers> <chain name="chained"> <url name="custom-repo"> <ivy pattern="http://my.internal.domain.name/ivy/[organisation]/[module]/[revision]/ivy-[revision].xml"/> <artifact pattern="http://my.internal.domain.name/ivy/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/> </url> <url name="ibiblio-mirror" m2compatible="true"> <artifact pattern="http://mirrors.ibiblio.org/pub/mirrors/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <ibiblio name="ibiblio" m2compatible="true"/> </chain> </resolvers> </ivysettings> <!-- checkstyle ivy.xml file generated from pom via ivy:install task --> <?xml version="1.0" encoding="UTF-8"?> <ivy-module version="1.0" xmlns:m="http://ant.apache.org/ivy/maven"> <info organisation="checkstyle" module="checkstyle" revision="5.0" status="release" publication="20090509202448" namespace="maven2" > <license name="GNU Lesser General Public License" url="http://www.gnu.org/licenses/lgpl.txt" /> <description homepage="http://checkstyle.sourceforge.net/"> Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard </description> </info> <configurations> <conf name="default" visibility="public" description="runtime dependencies and master artifact can be used with this conf" extends="runtime,master"/> <conf name="master" visibility="public" description="contains only the artifact published by this module itself, with no transitive dependencies"/> <conf name="compile" visibility="public" description="this is the default scope, used if none is specified. Compile dependencies are available in all classpaths."/> <conf name="provided" visibility="public" description="this is much like compile, but indicates you expect the JDK or a container to provide it. It is only available on the compilation classpath, and is not transitive."/> <conf name="runtime" visibility="public" description="this scope indicates that the dependency is not required for compilation, but is for execution. It is in the runtime and test classpaths, but not the compile classpath." extends="compile"/> <conf name="test" visibility="private" description="this scope indicates that the dependency is not required for normal use of the application, and is only available for the test compilation and execution phases." extends="runtime"/> <conf name="system" visibility="public" description="this scope is similar to provided except that you have to provide the JAR which contains it explicitly. 
The artifact is always available and is not looked up in a repository."/> <conf name="sources" visibility="public" description="this configuration contains the source artifact of this module, if any."/> <conf name="javadoc" visibility="public" description="this configuration contains the javadoc artifact of this module, if any."/> <conf name="optional" visibility="public" description="contains all optional dependencies"/> </configurations> <publications> <artifact name="checkstyle" type="jar" ext="jar" conf="master"/> </publications> <dependencies> <dependency org="antlr" name="antlr" rev="2.7.6" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/> <dependency org="apache" name="commons-beanutils-core" rev="1.7.0" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/> <dependency org="apache" name="commons-cli" rev="1.0" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/> <dependency org="apache" name="commons-logging" rev="1.0.3" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/> <dependency org="com.google.collections" name="google-collections" rev="0.9" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/> </dependencies> </ivy-module> Using the "ibiblio" resolver I have no problem resolving my project's two dependencies (commons-lang 2.0 and checkstyle 5.0) and checkstyle's dependencies. However, when attempting to exclusively use the "custom-repo" or "ibiblio-mirror" resolvers, I am able to resolve my project's two explicitly defined dependencies, but not checkstyle's dependencies. Is this possible? Any help would be greatly appreciated.

    Read the article

  • How to migrate large amounts of data from old database to new

    - by adam0101
    I need to move a huge amount of data from a couple of tables in an old database to a couple of different tables in a new database. The databases are SQL Server 2005 and are on the same box and SQL Server instance. I was told that if I try to do it all in one shot, the transaction log will fill up. Is there a way to disable the transaction log per table? If not, what is a good method for doing this? Would a cursor do it? This is just a one-time conversion.

    Read the article

  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by a user in the database. Although I have written some code that works well for now, I believe that things will get messy when the project goes live. Here's my domain class that carries the image bytes:

        @Entity
        public class Picture implements java.io.Serializable {
            long id;
            byte[] data;
            ... // getters and setters
        }

    And here's my controller that saves the file on submit:

        public class PictureUploadFormController extends AbstractBaseFormController {
            ...
            protected ModelAndView onSubmit(HttpServletRequest request,
                    HttpServletResponse response, Object command,
                    BindException errors) throws Exception {
                MultipartFile file;
                // getting MultipartFile from the command object
                ...
                // beginning hibernate transaction
                ...
                Picture p = new Picture();
                p.setData(file.getBytes());
                pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p)
                // committing hibernate transaction
                ...
            }
            ...
        }

    Obviously a bad piece of code. Is there any way I could use an InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database? I did some research on Hibernate's support for Blob, and found this in the Hibernate in Action book:

        java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you’ll be restricted in how instances of the class may be used. In particular, you won’t be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don’t feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn’t a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms.

    Now, apparently Blob cannot be used, as it is not a good idea anyway; what else could be used to improve the performance? I couldn't find any up-to-date design patterns or useful information on the Hibernate website. Any help/recommendations from Stack Overflowers will be much appreciated. Thanks
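
    One commonly suggested option, sketched below on the assumption that the project can use Hibernate's JPA annotations, is to keep the byte[] mapping but mark it as a lazily fetched @Lob, so the image bytes are not dragged into memory every time a Picture is loaded; the upload itself would still pass through file.getBytes():

        import javax.persistence.Basic;
        import javax.persistence.Entity;
        import javax.persistence.FetchType;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.Lob;

        @Entity
        public class Picture implements java.io.Serializable {
            @Id
            @GeneratedValue
            private long id;

            // Mapped as a large object and fetched lazily, so listing Picture rows
            // does not pull every image into memory.
            @Lob
            @Basic(fetch = FetchType.LAZY)
            private byte[] data;

            public long getId() { return id; }
            public byte[] getData() { return data; }
            public void setData(byte[] data) { this.data = data; }
        }

    Note that lazy loading of simple properties in Hibernate 3 typically requires build-time bytecode instrumentation, so whether the LAZY hint is honoured depends on the build setup.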

    Read the article

  • How does SQL Server treat statements inside stored procedures with respect to transactions?

    - by Sleepless
    Hi all! Say I have a stored procedure consisting of several separate SELECT, INSERT, UPDATE and DELETE statements. There is no explicit BEGIN TRANS / COMMIT TRANS / ROLLBACK TRANS logic. How will SQL Server handle this stored procedure transaction-wise? Will there be an implicit transaction for each statement, or will there be one transaction for the whole stored procedure? Also, how could I have found this out on my own using T-SQL and/or SQL Server Management Studio? Thanks!

    Read the article

  • Generate non-identity primary key

    - by MikeWyatt
    My workplace doesn't use identity columns or GUIDs for primary keys. Instead, we retrieve "next IDs" from a table as needed, and increment the value for each insert. Unfortunately for me, LINQ-to-SQL appears to be optimized around using identity columns, so I need to query and update the "NextId" table whenever I perform an insert. For simplicity, I do this immediately after creating the new object. Since all operations between the creation of the data context and the call to SubmitChanges are part of one transaction, do I need to create a separate data context for retrieving next IDs? Each time I need an ID, I need to query and update a table inside a transaction to prevent multiple apps from grabbing the same value. Is a separate data context the only way, or is there something better I could try?

    Read the article

  • With EJB 2.1, is declaring references to resources in ejb-jar.xml required?

    - by zwerd328
    I'm using WebLogic 9.2 with a lot of MDBs. These MDBs access JDBC DataSources and write to both locally and externally managed JMS destinations using local and foreign XAConnectionFactorys, respectively. Each MDB demarcates a container-managed JTA transaction that should be distributed amongst all of these resources. Below is an excerpt from my ejb-jar.xml for an MDB that consumes from a local Queue called "MyDestination" and produces to an IBM WebSphere MQ Queue called "MyOtherDestination". These logical names are linked to physical objects in my weblogic-ejb-jar.xml file. Is it required to use the <resource-ref> and <message-destination-ref> tags to expose the ConnectionFactory and Queue to the MDB? If so, is it required by WebLogic or by the J2EE spec? And for what purpose? For example, is it required to support XA transactionality? I'm already aware of the benefit of decoupling the administered objects from my MDB using names exposed to the naming context of the MDB. Is this the only value added when specifying these tags? In other words, is it acceptable to just reference these objects from my MDB using the InitialContext and the objects' fully-qualified names?

        <enterprise-beans>
          <message-driven>
            <ejb-name>MyMDB</ejb-name>
            <ejb-class>com.mycompany.MyMessageDrivenBean</ejb-class>
            <transaction-type>Container</transaction-type>
            <message-destination-type>javax.jms.Queue</message-destination-type>
            <message-destination-link>MyDestination</message-destination-link>
            <resource-ref>
              <res-ref-name>jms/myQCF</res-ref-name>
              <res-type>javax.jms.XAConnectionFactory</res-type>
              <res-auth>Container</res-auth>
            </resource-ref>
            <message-destination-ref>
              <message-destination-ref-name>jms/myOtherDestination</message-destination-ref-name>
              <message-destination-type>javax.jms.Queue</message-destination-type>
              <message-destination-usage>Produces</message-destination-usage>
              <message-destination-link>MyOtherDestination</message-destination-link>
            </message-destination-ref>
          </message-driven>
        </enterprise-beans>
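
    For reference, the resource-ref/message-destination-ref entries are what populate the bean's java:comp/env naming context; a rough sketch of resolving the factory and destination through that context inside the MDB (assuming the logical names above) might look like:

        import javax.jms.Queue;
        import javax.jms.XAConnectionFactory;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        public class MdbJndiLookup {
            public XAConnectionFactory lookupFactory() throws NamingException {
                InitialContext ctx = new InitialContext();
                // Resolved against the bean's environment naming context, which is
                // populated from the resource-ref declared in ejb-jar.xml.
                return (XAConnectionFactory) ctx.lookup("java:comp/env/jms/myQCF");
            }

            public Queue lookupDestination() throws NamingException {
                InitialContext ctx = new InitialContext();
                return (Queue) ctx.lookup("java:comp/env/jms/myOtherDestination");
            }
        }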

    Read the article

  • Does a C# using statement perform try/finally?

    - by Lirik
    Suppose that I have the following code:

        private void UpdateDB(QuoteDataSet dataSet, string tableName)
        {
            using (SQLiteConnection conn = new SQLiteConnection(_connectionString))
            {
                conn.Open();
                using (SQLiteTransaction transaction = conn.BeginTransaction())
                {
                    using (SQLiteCommand cmd = new SQLiteCommand("SELECT * FROM " + tableName, conn))
                    {
                        using (SQLiteDataAdapter sqliteAdapter = new SQLiteDataAdapter())
                        {
                            sqliteAdapter.Update(dataSet, tableName);
                        }
                    }
                    transaction.Commit();
                }
            }
        }

    The C# documentation states that with a using statement the object within the scope will be disposed, and I've seen several places where it's suggested that we don't need a try/finally clause. I usually surround my connections with try/finally, and I always close the connection in the finally clause. Given the above code, is it reasonable to assume that the connection will be closed if there is an exception?

    Read the article

  • iPhone app crashed: Assertion failed function evict_glyph_entry_from_strike, file Fonts/CGFontCache.

    - by Ross
    This happened quite randomly. I didn't delete any table view cell. The backtrace information:

        Assertion failed: (d->entry[identifier.glyph] == g), function evict_glyph_entry_from_strike, file Fonts/CGFontCache.c, line 810.
        Program received signal: “SIGABRT”.
        (gdb) bt
        #0  0x97da5972 in __kill ()
        #1  0x97da5964 in kill$UNIX2003 ()
        #2  0x97e38ba5 in raise ()
        #3  0x97e4ec5c in abort ()
        #4  0x97e3b804 in __assert_rtn ()
        #5  0x0037fe0e in evict_glyph_entry_from_cache ()
        #6  0x003226aa in expire_glyphs_nl ()
        #7  0x00322645 in CGFontCacheUnlock ()
        #8  0x00321fef in CGGlyphLockUnlock ()
        #9  0x0240f9b7 in ripc_DrawGlyphs ()
        #10 0x0031b0d4 in draw_glyphs ()
        #11 0x0031a91f in CGContextShowGlyphsWithAdvances ()
        #12 0x35814178 in WebCore::Font::drawGlyphs ()
        #13 0x35813da5 in WebCore::Font::drawGlyphBuffer ()
        #14 0x35813aca in WebCore::Font::drawSimpleText ()
        #15 0x35813760 in drawAtPoint ()
        #16 0x3581307e in -[NSString(WebStringDrawing) _web_drawAtPoint:forWidth:withFont:ellipsis:letterSpacing:includeEmoji:] ()
        #17 0x3090d2e9 in -[NSString(UIStringDrawing) drawAtPoint:forWidth:withFont:lineBreakMode:letterSpacing:includeEmoji:] ()
        #18 0x3090cfe3 in -[NSString(UIStringDrawing) drawAtPoint:forWidth:withFont:lineBreakMode:] ()
        #19 0x3093d853 in -[UINavigationItemView drawText:inRect:] ()
        #20 0x3093a96b in -[UINavigationItemButtonView drawRect:] ()
        #21 0x3091ff61 in -[UIView(CALayerDelegate) drawLayer:inContext:] ()
        #22 0x0060daeb in -[CALayer drawInContext:] ()
        #23 0x0060d8f9 in backing_callback ()
        #24 0x0060d1b4 in CABackingStoreUpdate ()
        #25 0x0060c3cc in -[CALayer _display] ()
        #26 0x0060bf56 in CALayerDisplayIfNeeded ()
        #27 0x0060b3bd in CA::Context::commit_transaction ()
        #28 0x0060b022 in CA::Transaction::commit ()
        #29 0x006132e0 in CA::Transaction::observer_callback ()
        #30 0x30245c32 in __CFRunLoopDoObservers ()
        #31 0x3024503f in CFRunLoopRunSpecific ()
        #32 0x30244628 in CFRunLoopRunInMode ()
        #33 0x32044c31 in GSEventRunModal ()
        #34 0x32044cf6 in GSEventRun ()
        #35 0x309021ee in UIApplicationMain ()

    Read the article

  • Websphere Scheduler

    - by Dileep81
    The WebSphere Scheduler is using a scheduler datasource with an XA driver. When a task is executed by the scheduler it starts a global transaction, but in our application we create a new connection to another database, explicitly commit the data and close the connection. That data source is configured with a non-XA driver. For the application we have also enabled "Accept heuristic hazard" (the last-participant-support extension). Now, while running the scheduler, we are getting the exception DSRA9350E: Operation Connection.commit is not allowed during a global transaction. Can anyone help me out with this?

    Read the article

  • JPA Entity Manager resource handling

    - by chiragshahkapadia
    Every time I call a JPA method it is creating the entity bindings and binding the queries all over again. My persistence properties are:

        <property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect"/>
        <property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.SingletonEhCacheProvider"/>
        <property name="hibernate.cache.use_second_level_cache" value="true"/>
        <property name="hibernate.cache.use_query_cache" value="true"/>

    And I am creating the entity manager the way shown below:

        emf = Persistence.createEntityManagerFactory("pu");
        em = emf.createEntityManager();
        em = Persistence.createEntityManagerFactory("pu").createEntityManager();

    Is there a nice way to manage the entity manager resource instead of creating a new one every time, or a property that can be set in the persistence configuration? Remember, it's JPA. See the binding log below, produced every time:

        15:35:15,527 INFO [AnnotationBinder] Binding entity from annotated class: *
        15:35:15,527 INFO [QueryBinder] Binding Named query: * = *
        15:35:15,527 INFO [QueryBinder] Binding Named query: * = *
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [QueryBinder] Binding Named query:
        15:35:15,527 INFO [EntityBinder] Bind entity com.* on table *
        15:35:15,542 INFO [HibernateSearchEventListenerRegister] Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled.
        15:35:15,542 INFO [NamingHelper] JNDI InitialContext properties:{}
        15:35:15,542 INFO [DatasourceConnectionProvider] Using datasource:
        15:35:15,542 INFO [SettingsFactory] RDBMS: and Real Application Testing options
        15:35:15,542 INFO [SettingsFactory] JDBC driver: Oracle JDBC driver, version: 9.2.0.1.0
        15:35:15,542 INFO [Dialect] Using dialect: org.hibernate.dialect.Oracle10gDialect
        15:35:15,542 INFO [TransactionFactoryFactory] Transaction strategy: org.hibernate.transaction.JDBCTransactionFactory
        15:35:15,542 INFO [TransactionManagerLookupFactory] No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        15:35:15,542 INFO [SettingsFactory] Automatic flush during beforeCompletion(): disabled
        15:35:15,542 INFO [SettingsFactory] Automatic session close at end of transaction: disabled
        15:35:15,542 INFO [SettingsFactory] JDBC batch size: 15
        15:35:15,542 INFO [SettingsFactory] JDBC batch updates for versioned data: disabled
        15:35:15,542 INFO [SettingsFactory] Scrollable result sets: enabled
        15:35:15,542 INFO [SettingsFactory] JDBC3 getGeneratedKeys(): disabled
        15:35:15,542 INFO [SettingsFactory] Connection release mode: auto
        15:35:15,542 INFO [SettingsFactory] Default batch fetch size: 1
        15:35:15,542 INFO [SettingsFactory] Generate SQL with comments: disabled
        15:35:15,542 INFO [SettingsFactory] Order SQL updates by primary key: disabled
        15:35:15,542 INFO [SettingsFactory] Order SQL inserts for batching: disabled
        15:35:15,542 INFO [SettingsFactory] Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        15:35:15,542 INFO [ASTQueryTranslatorFactory] Using ASTQueryTranslatorFactory
        15:35:15,542 INFO [SettingsFactory] Query language substitutions: {}
        15:35:15,542 INFO [SettingsFactory] JPA-QL strict compliance: enabled
        15:35:15,542 INFO [SettingsFactory] Second-level cache: enabled
        15:35:15,542 INFO [SettingsFactory] Query cache: enabled
        15:35:15,542 INFO [SettingsFactory] Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        15:35:15,542 INFO [RegionFactoryCacheProviderBridge] Cache provider: net.sf.ehcache.hibernate.SingletonEhCacheProvider
        15:35:15,542 INFO [SettingsFactory] Optimize cache for minimal puts: disabled
        15:35:15,542 INFO [SettingsFactory] Structured second-level cache entries: disabled
        15:35:15,542 INFO [SettingsFactory] Query cache factory: org.hibernate.cache.StandardQueryCacheFactory
        15:35:15,542 INFO [SettingsFactory] Statistics: disabled
        15:35:15,542 INFO [SettingsFactory] Deleted entity synthetic identifier rollback: disabled
        15:35:15,542 INFO [SettingsFactory] Default entity-mode: pojo
        15:35:15,542 INFO [SettingsFactory] Named query checking : enabled
        15:35:15,542 INFO [SessionFactoryImpl] building session factory
        15:35:15,542 INFO [SessionFactoryObjectFactory] Not binding factory to JNDI, no JNDI name configured
        15:35:15,542 INFO [UpdateTimestampsCache] starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache
        15:35:15,542 INFO [StandardQueryCache] starting query cache at region: org.hibernate.cache.StandardQueryCache
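
    A minimal sketch of the usual pattern, assuming a plain Java SE/JPA setup with no container-managed injection: build the heavyweight EntityManagerFactory once (that is what triggers all the binding above) and hand out short-lived EntityManagers per unit of work:

        import javax.persistence.EntityManager;
        import javax.persistence.EntityManagerFactory;
        import javax.persistence.Persistence;

        public final class PersistenceUtil {
            // Built once per application; creating this is the expensive step.
            private static final EntityManagerFactory EMF =
                    Persistence.createEntityManagerFactory("pu");

            private PersistenceUtil() {}

            // Cheap to create; obtain one per unit of work and close it when done.
            public static EntityManager createEntityManager() {
                return EMF.createEntityManager();
            }

            public static void close() {
                EMF.close();
            }
        }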

    Read the article

  • add method to reflection-object and named-scopes

    - by toy
    I'd like to add a method to my has_many relation in such a way that it is applied to the relation object. I have an Order which has_many :line_items. I'd like to write things like:

        order.line_items.calculate_total # returns the sum of line_items

    This I could do with:

        has_many :line_items do
          def calculate_total
            ...
          end
        end

    but this would not be applied to named_scopes like payables_only:

        order.line_items.payables_only.calculate_total

    Here calculate_total would receive all line_items of the order, not the scoped ones from the payables_only scope. My log tells me that the payables_only scope is not even applied to the SQL.

    Read the article

  • in app purchases question

    - by bdt
    I am looking into the iPhone in-app purchase models and need to implement a subscription, e.g. content that will be available for 24 hours. The most important thing is that it needs to be available on all the user's other devices, so content bought on the iPhone should be viewable on the iPad. I'm not sure how this works. I need to store some information on the developer server, but is this the transaction ID and the current date/time? Then, when launching the app on the iPad, you would attempt to buy the content again; Apple will see that this user already bought it and hopefully return the transaction ID. At that moment I can verify whether the time limit is still valid or not. Can anyone confirm this method of working? If this is correct, is there a 'renew'?

    Read the article

  • Facebook redirect after login not working when url is dynamic

    - by dythffvrb
    This is my code:

        $loginUrl = $facebook->getLoginUrl(array(
            'scope' => 'publish_actions',
            'redirect_uri' => 'http://mysite.com/',
        ));

    It works, but if I remove redirect_uri it doesn't work anymore:

        $loginUrl = $facebook->getLoginUrl(array(
            'scope' => 'publish_actions'
        ));

    According to the Facebook documentation, redirect_uri is optional: https://developers.facebook.com/docs/reference/php/facebook-getLoginUrl/ I'm trying to redirect the users to the same URL they were on before logging in. Update: This problem occurs when the URL is mysite.com/post23, but when the URL is mysite.com/staticpage or mysite.com there are no problems. Any workarounds? EDIT: It looks like a bug; it doesn't work with certain URLs on the same site. I will try to report it to Facebook.

    Read the article

  • Boost signals and passing class method

    - by Ockonal
    Hello, I've defined a signal:

        typedef boost::signals2::signal<void (int temp)> SomeSig;
        typedef SomeSig::slot_type SomeSigType;

    I have a class:

        class SomeClass
        {
            SomeClass()
            {
                SomeSig.connect(&SomeClass::doMethod);
            }
            void doMethod(const SomeSig &slot);
        };

    And I got a lot of errors:

        error: ‘BOOST_PP_ENUM_SHIFTED_PARAMS_M’ was not declared in this scope
        error: ‘T’ was not declared in this scope
        error: a function call cannot appear in a constant-expression
        error: a function call cannot appear in a constant-expression
        error: template argument 1 is invalid
        error: ‘BOOST_SIGNALS2_MISC_STATEMENT’ has not been declared
        error: expected identifier before ‘~’ token
        error: expected ‘)’ before ‘~’ token
        error: expected ‘;’ before ‘~’ token

    Read the article

  • session state of checkbox list

    - by xrx215
    Can you please help me store the checkbox list items in session? I have a checkbox list as follows:

        <asp:CheckBoxList ID="cblScope" runat="server" onselectedindexchanged="cblScope_SelectedIndexChanged">
            <asp:ListItem ID="liInScope" runat="server" Value="true">In Scope (Monitored)</asp:ListItem>
            <asp:ListItem ID="liOutOfScope" runat="server" Value="true">Out of Scope (Unmonitored)</asp:ListItem>
        </asp:CheckBoxList>

    I have to store the values of the checkboxes in session when they are checked.

    Read the article

  • Besides EAR and EJB, what do I get from a J2EE app server that I don't get in a servlet container like Tomcat?

    - by dacracot
    We use Tomcat to host our WAR-based applications. They are servlet-container-compliant J2EE applications, with the exception of org.apache.catalina.authenticator.SingleSignOn. We are being asked to move to a commercial J2EE application server. The first downside to changing that I see is the cost: no matter what the charges for the application server, Tomcat is free. Second is the complexity: we use neither EJB nor EAR features (of course not, we can't) and have not missed them. What, then, are the benefits I'm not seeing? What are the drawbacks that I haven't mentioned? Mentioned so far were: JTA (Java Transaction API) - we control transactions via database stored procedures; JPA (Java Persistence API) - we use JDBC and, again, stored procedures to persist; JMS (Java Message Service) - we use XML over HTTP for messaging. This is good, please more!
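
    As one concrete illustration of the kind of facility a full application server adds beyond a servlet container, the sketch below uses the standard JTA API: a UserTransaction looked up from JNDI that can span multiple XA resources. This is only an assumption-laden example of the feature being weighed, not something a stock Tomcat install provides out of the box:

        import javax.naming.InitialContext;
        import javax.transaction.UserTransaction;

        public class JtaExample {
            public void transfer() throws Exception {
                // "java:comp/UserTransaction" is the standard JNDI location that a
                // Java EE application server is required to populate.
                UserTransaction utx = (UserTransaction)
                        new InitialContext().lookup("java:comp/UserTransaction");
                utx.begin();
                try {
                    // ... enlist two XA resources here, e.g. a JDBC DataSource
                    // and a JMS session, and do the work.
                    utx.commit();
                } catch (Exception e) {
                    utx.rollback();
                    throw e;
                }
            }
        }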

    Read the article

  • How to change order in ordered+persisted collection?

    - by Jaroslav Záruba
    I just need to change the order of items in a (previously persisted) ordered collection... I tried simply passing the re-arranged collection to a setter: after committing the transaction the collection is gone. Then I tried to clear() the existing collection and addAll() afterwards: clear() makes the persistence manager mark all the elements as deleted. (But obviously I would like to be able to keep working with the collection items in that very transaction.) (The collection is not in the default fetch group, so I tried the above also with the named fetch group added to the fetch plan. No luck.) This must be the most stupid question, but I ran out of ideas and I'm stuck here two days already. I swear I googled. :(

    Read the article

  • How do I keep my DB and Lucene in sync?

    - by acidzombie24
    So I can have a transaction in SQL, but I am sure it's not a good idea to wait in the middle of a transaction for Lucene to finish; I am also unsure whether Lucene's changes are saved permanently until I do something there. What's the best way to keep my DB and Lucene in sync? I am thinking of adding a lucene_queue table to my SQL DB: every time I make a change I add an entry to the queue (removing any older entry for the same item) and delete it once it is done. Is this the best way? Also, I am unsure how to make Lucene permanently keep the changes I made and how frequently I can/should do it.
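
    A rough sketch of the queue idea, assuming a lucene_queue table with columns (id, entity_id, action) that is populated inside the normal SQL transaction; indexInLucene(...) is a hypothetical helper that wraps a Lucene IndexWriter and commits its own changes:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import javax.sql.DataSource;

        public class LuceneQueueWorker {
            private final DataSource dataSource;

            public LuceneQueueWorker(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            // Run periodically, outside the original DB transaction.
            public void drainQueue() throws SQLException {
                try (Connection conn = dataSource.getConnection();
                     PreparedStatement select = conn.prepareStatement(
                             "SELECT id, entity_id, action FROM lucene_queue ORDER BY id");
                     ResultSet rs = select.executeQuery()) {
                    while (rs.next()) {
                        long queueId = rs.getLong("id");
                        indexInLucene(rs.getLong("entity_id"), rs.getString("action"));
                        // Remove the entry only after Lucene has accepted the change,
                        // so a crash simply leaves it to be retried.
                        try (PreparedStatement delete = conn.prepareStatement(
                                "DELETE FROM lucene_queue WHERE id = ?")) {
                            delete.setLong(1, queueId);
                            delete.executeUpdate();
                        }
                    }
                }
            }

            private void indexInLucene(long entityId, String action) {
                // Hypothetical: add/update/delete the Lucene document for this entity
                // with an IndexWriter and commit the index here.
            }
        }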

    Read the article

  • Which programming language to choose? (for a specific problem/domain, details inside)

    - by Bijan
    I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high-frequency trading portfolios (dealing with 1-min or 3-min bars of data, not tick data). I plan on employing Amazon Web Services to take on the entire load of the application. I have four choices that I am considering as the language: a) Java, b) C++, c) C#, d) Python. Here are the extremes of the project scope. This isn't how it will be, maybe ever, but it's within the scope of the requirements:

    - Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration.)
    - Real-time production of a portfolio with 100,000 trading strategies.
    - Taking in 1-min or 3-min data from every stock/futures market around the globe (approx. 100,000).
    - Portfolio optimization of portfolios with up to 100,000 strategies (a rather intensive algorithm).

    Speed is a concern, but I believe that Java can handle the load; I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on the list is that I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read some things on PyPy and Psyco that claim Python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides the fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk. To sum up:

    - real-time production
    - weekly simulations of a large number of systems
    - weekly/monthly optimizations of portfolios
    - large numbers of connections to collect data from

    There is no dealing with millisecond or even second-based trades. The only consideration is whether Java can deal with this kind of load when spread out over the necessary number of EC2 servers. Thank you guys so much for your wisdom.

    Read the article
