Search Results

Search found 415 results on 17 pages for 'transactional'.

Page 9/17 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Tips about how to spread Object Oriented practices

    - by Augusto
    I work for a medium-sized company with around 250 developers. Unfortunately, many of them are stuck in a procedural way of thinking, and some teams constantly deliver big Transactional Script applications even when the application contains rich logic. They also fail to manage design dependencies and end up with services that depend on a large number of other services (a clear example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know the surface of the problem is that these applications have poor architecture and design. Another issue is that some developers are against writing any kind of test. A few things I'm doing to change this (but either I'm failing or the change is too small): running presentations about design principles (SOLID, clean code, etc.); workshops about TDD and BDD; coaching teams (this includes using Sonar, FindBugs, JDepend and other tools); IDE and refactoring talks. A few things I'm thinking of doing in the future (but I'm concerned they might not be good): forming a team of OO evangelists who disseminate an OO way of thinking across different teams (these people would need to change teams every few months); running design review sessions to critique the design and suggest improvements (even if the improvements aren't made because of time constraints, I think this might be useful). Something I've found with the teams I coach is that as soon as I leave them, they revert to the old practices. I know I don't spend a lot of time with them, usually just one month, so whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing it was to hit my head on the wall until I pass out.

    Read the article

  • Inserting static current time in Excel

    - by Mike Cole
    I have a time log spreadsheet. I have a new sheet for each day. In each sheet, I have a transactional record of how my time was spent. When I start or end a task, I usually type in the time ("11:00 AM" for example). Is there a shortcut to inserting the current time into a field? I'm sure it can be done with a macro, but I'm not very knowledgeable about macros. I'd like to simply highlight a field and hit some sort of shortcut key to insert a static value of the current time. Thanks for any help!

    Read the article

  • Uses of persistent data structures in non-functional languages

    - by Ray Toal
    Languages that are purely functional or near-purely functional benefit from persistent data structures because they are immutable and fit well with the stateless style of functional programming. But from time to time we see libraries of persistent data structures for (state-based, OOP) languages like Java. A claim often heard in favor of persistent data structures is that because they are immutable, they are thread-safe. However, the reason that persistent data structures are thread-safe is that if one thread were to "add" an element to a persistent collection, the operation returns a new collection like the original but with the element added. Other threads therefore see the original collection. The two collections share a lot of internal state, of course -- that's why these persistent structures are efficient. But since different threads see different states of data, it would seem that persistent data structures are not in themselves sufficient to handle scenarios where one thread makes a change that is visible to other threads. For this, it seems we must use devices such as atoms, references, software transactional memory, or even classic locks and synchronization mechanisms. Why then, is the immutability of PDSs touted as something beneficial for "thread safety"? Are there any real examples where PDSs help in synchronization, or solving concurrency problems? Or are PDSs simply a way to provide a stateless interface to an object in support of a functional programming style?
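    To make the concern concrete, here is a minimal Java sketch (the PersistentStack class and its API are invented for illustration, not taken from any particular library): the structure itself is immutable and safe to read from any thread, but publishing one thread's "add" to other threads still needs a coordination point such as an AtomicReference.

        import java.util.concurrent.atomic.AtomicReference;

        // A toy persistent (immutable) stack: push returns a new stack that shares the old nodes.
        final class PersistentStack<T> {
            final T head;
            final PersistentStack<T> tail;

            private PersistentStack(T head, PersistentStack<T> tail) {
                this.head = head;
                this.tail = tail;
            }

            static <T> PersistentStack<T> empty() {
                return new PersistentStack<>(null, null);
            }

            PersistentStack<T> push(T value) {
                return new PersistentStack<>(value, this); // the old stack is untouched
            }
        }

        class SharedState {
            // The persistent structure is thread-safe to read, but making a new version
            // visible to other threads still requires a shared mutable reference.
            private final AtomicReference<PersistentStack<String>> current =
                    new AtomicReference<>(PersistentStack.<String>empty());

            void add(String value) {
                // Retry loop under the hood: swap in the new immutable version atomically.
                current.updateAndGet(stack -> stack.push(value));
            }

            PersistentStack<String> snapshot() {
                return current.get(); // a consistent, immutable view
            }
        }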

    Read the article

  • Which method of SQL Server 2005 or 2008 Replication is best for ease of field changes?

    - by Rick
    We need 15-minute warm updates from one SQL Server to another. Log shipping looks good and appears easy to set up. We are also looking into transactional replication. The data only needs to be copied one way. We have two main requirements: 1) The destination database needs to be at most a 15-minute-old copy of the source, and it needs to retry and catch up if a network cable is unplugged for a while. 2) We would really like table changes at the source (fields added or modified) to be handled as easily as possible. Thanks in advance for all suggestions.

    Read the article

  • What relational database system should I learn? [closed]

    - by acidzombie24
    At the moment I know SQLite (my favorite) and MySQL (it's OK, but it annoys me), and I do not want to learn MS SQL / T-SQL (it only allows one null row if the column has a unique constraint). I am thinking about learning a new database system. My requirements for it are: Must allow multiple connections at once (read and write). All data, or the data I choose, must be ACID compliant. Performance should be good: I have a 17 GB table in one project, and it should perform well on reads and transactional writes. With MySQL it took hours to restore it, and there were no foreign keys on that specific table; it only finished within a workday because I found a suggestion to adjust a setting (I think it was key_buffer), and it still took hours. Unique columns that allow more than one row to be null (I shouldn't have to say it, but dammit MS). Allows ongoing backups, something like binary logs: relatively small amounts of data I can grab and apply to my local DB to keep it in sync with the one on the server. Table joins; I'd rather not write a bunch of queries to simulate a join. What I would like but is not required: Foreign keys (this may be a requirement later). Open source. Fair tool support, so I can measure queries, easily back up/restore, etc. A .NET and C (or C++) interface (I've seen one that uses raw TCP with JSON, which was OK-ish). Good subquery support: once I was working with an older version of MySQL (I believe <5.1, but it could have been 5.1) and I had to write many queries to do one query because it couldn't do subqueries, or maybe it couldn't do them efficiently and died because of memory limitations with a huge dataset. What DB system should I learn?

    Read the article

  • What applications is NTFS preferable for? [closed]

    - by javano
    When building a new server I prefer to deploy Linux as my OS of choice. This gives me the luxury of choosing from various file systems (among other aspects), and I will choose a different FS for different servers, depending on what they will be used for. With Windows OS variants you can only use NTFS. Have any benchmarks or tests been performed that show NTFS to be the preferable choice for a given scenario or application (apart from just "running Windows" because it has to be on NTFS)? To clarify what I mean: I might use filesystem X for large transactional storage volumes, but filesystem Y for front-end web app servers. If I had a multi-platform application to deploy that (let's pretend) was available on Mac/Win/Lin, is there any type of application or scenario that would benefit from being on NTFS?

    Read the article

  • TFS SQL Deployment Data Script

    - by Greg
    We are using TFS and SQL 2005 (looking to upgrade to SQL 2012 if that makes a difference). We store our database schema in a Visual Studio Database project (VS 2010). When code is released to live we currently use the Visual Studio Database Project to build a script for all our schema changes. The problem we have been getting is having to alter or add to that script to add/fix data for the deployment. For example if we add a new non-nullable column to an existing table we need to populate that column with data during the insert. Other times we may want to create new records in transactional tables (e.g. assign specific users to a new security access). Do Visual Studio Database Projects have a way to store these scripts that only need to be run once and somehow include them in the build? Does it know which scripts need to be run (for example if we are inserting default data we don't want to do that again a second time)? OR Is there a better way to manage these scripts?

    Read the article

  • Could SQL Server 2008 replication be used with NLB to allow unlimited scaling of reporting servers?

    - by John Keranos
    We are currently using transactional replication in SQL Server 2008 to keep a secondary reporting server synchronized with a primary database server. This has been working well and keeps some of the load off the primary server. Would it be possible to scale this solution out to multiple reporting servers? We're expecting an increased load of read-only queries, and it would be nice to be able to add reporting servers as needed. The general idea is the following: each reporting server would use a "pull" subscription to get the data from the primary database publication; these reporting databases could be a couple of minutes behind the primary server without it being an issue. The reporting servers would be NLB'd together, and all read-only queries would be directed at the NLB, which should spread the load across the servers.

    Read the article

  • Which SQL Server edition?

    - by StaringSkyward
    We need a new install of Windows Server and SQL Server to replicate a couple of databases to a geographically separate location from an existing application (over a site-to-site VPN). The source database is SQL Server 2005. However, this is a temporary solution, since the client is aiming to implement a different system entirely, so we are looking for the minimum specification of both Windows Server and SQL Server to do this. We are finding the SQL Server features per edition and the licensing a little difficult to understand, hence the question. Am I correct in thinking that we can replicate data using transactional replication from SQL Server 2005 to 2008 Web Edition, and that we can install SQL Server Web Edition on Windows Server 2008 Web Edition as well? Thanks.

    Read the article

  • MSSQL 2005 Snapshot Agent Uses 100% CPU

    - by matth1jd
    When setting up a new subscription to a publication (transactional replication) from 64-bit SQL Server 2005 to 64-bit SQL Server 2005, the Snapshot Agent on the publisher consumes 100% of the CPU. I am using SSMS to create the new subscription. My initial impression is that this could be caused by row locking occurring during the generation of the snapshot, but I have read that a concurrent snapshot is generated by default in SQL Server 2005 and that row locking shouldn't be a concern. As this is a production server, I would like to be able to initialize replication without bringing the box to its knees. Any suggestions would be helpful and appreciated.

    Read the article

  • Using Spring as a JPA Container

    - by sdoca
    Hi, I found this article which talks about using Spring as a JPA container: http://java.sys-con.com/node/366275 I have never used Spring before this and am trying to make this work, and I hope someone can help me. The article states that you need to annotate a Spring bean with @Transactional and methods/fields with @PersistenceContext in order to provide transaction support and to inject an entity manager. Is there something that defines a bean as a "Spring bean"? I have a bean class which implements CRUD operations on entities using generics:

        @Transactional
        public class GenericCrudServiceBean implements GenericCrudService {

            @PersistenceContext(unitName="MyData")
            private EntityManager em;

            @Override
            @PersistenceContext
            public <T> T create(T t) {
                em.persist(t);
                return t;
            }

            @Override
            @PersistenceContext
            public <T> void delete(T t) {
                t = em.merge(t);
                em.remove(t);
            }

            ...
            ...
            ...

            @Override
            @PersistenceContext
            public List<?> findWithNamedQuery(String queryName) {
                return em.createNamedQuery(queryName).getResultList();
            }
        }

    Originally I only had this persistence context annotation:

        @PersistenceContext(unitName="MyData")
        private EntityManager em;

    but had a null em when findWithNamedQuery was invoked. Then I annotated the methods as well, but em is still null (no injection?). I was wondering if this had something to do with my bean not being recognized as "Spring". I have done the configuration as best I could, following the directions in the article, including setting the following in my context.xml file:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns:tx="http://www.springframework.org/schema/tx"
               tx:schemaLocation="http://www.springframework.org/schema/tx
                                  http://www.springframework.org/schema/tx/spring-tx-2.0.xsd">

            <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
                <property name="persistenceUnitName" value="MyData" />
                <property name="dataSource" ref="dataSource" />
                <property name="loadTimeWeaver" class="org.springframework.classloading.ReflectiveLoadTimeWeaver" />
                <property name="jpaVendorAdapter" ref="jpaAdapter" />
            </bean>

            <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
                <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
                <property name="url" value="jdbc:oracle:thin:@localhost:1521:MySID" />
                <property name="username" value="user" />
                <property name="password" value="password" />
                <property name="initialSize" value="3" />
                <property name="maxActive" value="10" />
            </bean>

            <bean id="jpaAdapter" class="org.springframework.orm.jpa.vendor.EclipseLinkJpaVendorAdapter">
                <property name="databasePlatform" value="org.eclipse.persistence.platform.database.OraclePlatform" />
                <property name="showSql" value="true" />
            </bean>

            <bean class="org.springframework.ormmjpa.support.PersistenceAnnotationBeanPostProcessor" />

            <tx:annotation-driven />
        </beans>

    I guessed that these belonged in the context.xml file because the article never specifically said which file is the "application context" file. If this is wrong, please let me know.
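    A point the excerpt leaves implicit is that @PersistenceContext injection generally only happens for beans the Spring container itself creates; if the class is instantiated with new, the field stays null. The sketch below is illustrative rather than taken from the article, and assumes a hypothetical bean id of genericCrudService registered in the same context.xml:

        import org.springframework.context.ApplicationContext;
        import org.springframework.context.support.ClassPathXmlApplicationContext;

        public class CrudServiceSmokeTest {
            public static void main(String[] args) {
                // Bootstrapping the container lets PersistenceAnnotationBeanPostProcessor
                // process @PersistenceContext and inject the EntityManager.
                ApplicationContext ctx = new ClassPathXmlApplicationContext("context.xml");

                // The bean must come from the container; "genericCrudService" is an assumed id.
                GenericCrudService service = (GenericCrudService) ctx.getBean("genericCrudService");

                // If the service had been created with "new GenericCrudServiceBean()",
                // the EntityManager field would remain null and calls like this would fail.
                // "SomeEntity.findAll" is a hypothetical named query, used only for illustration.
                System.out.println(service.findWithNamedQuery("SomeEntity.findAll"));
            }
        }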

    Read the article

  • How do I get spring to inject my EntityManager?

    - by Trampas Kirk
    I'm following the guide here, but when the DAO executes, the EntityManager is null. I've tried a number of fixes I found in the comments on the guide, on various forums, and here (including this), to no avail. No matter what I seem to do the EntityManager remains null. Here are the relevant files, with packages etc changed to protect the innocent.

    spring-context.xml

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xmlns:context="http://www.springframework.org/schema/context"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd
                                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd"
               xmlns:p="http://www.springframework.org/schema/p">

            <context:component-scan base-package="com.group.server"/>
            <context:annotation-config/>
            <tx:annotation-driven/>

            <bean id="propertyPlaceholderConfigurer" class="com.group.DecryptingPropertyPlaceholderConfigurer"
                  p:systemPropertiesModeName="SYSTEM_PROPERTIES_MODE_OVERRIDE">
                <property name="locations">
                    <list>
                        <value>classpath*:spring-*.properties</value>
                        <value>classpath*:${application.environment}.properties</value>
                    </list>
                </property>
            </bean>

            <bean id="orderDao" class="com.package.service.OrderDaoImpl"/>

            <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/>

            <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
                <property name="persistenceUnitName" value="MyServer"/>
                <property name="loadTimeWeaver">
                    <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/>
                </property>
                <property name="dataSource" ref="dataSource"/>
                <property name="jpaVendorAdapter">
                    <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                        <property name="showSql" value="${com.group.server.vendoradapter.showsql}"/>
                        <property name="generateDdl" value="${com.group.server.vendoradapter.generateDdl}"/>
                        <property name="database" value="${com.group.server.vendoradapter.database}"/>
                    </bean>
                </property>
            </bean>

            <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
                <property name="entityManagerFactory" ref="entityManagerFactory"/>
                <property name="dataSource" ref="dataSource"/>
            </bean>

            <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
                <property name="driverClassName" value="${com.group.server.datasource.driverClassName}"/>
                <property name="url" value="${com.group.server.datasource.url}"/>
                <property name="username" value="${com.group.server.datasource.username}"/>
                <property name="password" value="${com.group.server.datasource.password}"/>
            </bean>

            <bean id="executorService" class="java.util.concurrent.Executors" factory-method="newCachedThreadPool"/>
        </beans>

    persistence.xml

        <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
            <persistence-unit name="MyServer" transaction-type="RESOURCE_LOCAL"/>
        </persistence>

    OrderDaoImpl

        package com.group.service;

        import com.group.model.Order;
        import org.springframework.stereotype.Repository;
        import org.springframework.transaction.annotation.Transactional;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import javax.persistence.Query;
        import java.util.List;

        @Repository
        @Transactional
        public class OrderDaoImpl implements OrderDao {

            private EntityManager entityManager;

            @PersistenceContext
            public void setEntityManager(EntityManager entityManager) {
                this.entityManager = entityManager;
            }

            @Override
            public Order find(Integer id) {
                Order order = entityManager.find(Order.class, id);
                return order;
            }

            @Override
            public List<Order> findAll() {
                Query query = entityManager.createQuery("select o from Order o");
                return query.getResultList();
            }

            @Override
            public List<Order> findBySymbol(String symbol) {
                Query query = entityManager.createQuery("select o from Order o where o.symbol = :symbol");
                return query.setParameter("symbol", symbol).getResultList();
            }
        }

    Read the article

  • sqlite - any improvements for this attach code (running multiple sql commands transactionally in sql

    - by Greg
    Hi, Is this code solid? I've tried to use "using" etc. Basically it's a method that takes a sequenced list of SQL commands to be run against a SQLite database. I assume it is true that in SQLite, by default, all commands run on a single connection are handled transactionally? Is this true? i.e. I should not have to call (and haven't got in the code at the moment) BeginTransaction or CommitTransaction. It's using http://sqlite.phxsoftware.com/ as the SQLite ADO.NET database provider.

        private int ExecuteNonQueryTransactionally(List<string> sqlList)
        {
            int totalRowsUpdated = 0;

            using (var conn = new SQLiteConnection(_connectionString))
            {
                // Open connection (one connection so should be transactional - confirm)
                conn.Open();

                // Apply each SQL statement passed in to sqlList
                foreach (string s in sqlList)
                {
                    using (var cmd = new SQLiteCommand(conn))
                    {
                        cmd.CommandText = s;
                        totalRowsUpdated = totalRowsUpdated + cmd.ExecuteNonQuery();
                    }
                }
            }
            return totalRowsUpdated;
        }

    Read the article

  • Problem using Hibernate-Search

    - by KCore
    Hi, I am using Hibernate Search in my application. It was configured correctly and running perfectly until some time back, when it suddenly stopped working. The reason, as far as I can tell, is the number of my model (bean) classes: I have some 90 classes which I add to my configuration while building my Hibernate Configuration. When I disable Hibernate Search (remove the search annotations and use Configuration instead of AnnotationConfiguration) and start my application, it works fine. But when I enable search, the same app just hangs. I tried debugging and found the exact place where it hangs: after adding all the classes to my AnnotationConfiguration object, the call to cfg.buildSessionFactory() never returns (I have waited for hours!). Also, when I decrease the number of my model classes (say to half, i.e. 50), it comes out of that statement and the application works fine. Can someone tell me why this is happening? My Hibernate versions are: hibernate-core-3.3.1.GA.jar, hibernate-annotations-3.4.0.GA.jar, hibernate-commons-annotations-3.1.0.GA.jar, hibernate-search-3.1.0.GA.jar. Also, if I need to avoid using AnnotationConfiguration, I have read that I need to configure the search event listeners explicitly; can anyone list all the necessary listeners and their respective classes? (I tried the standard ones given in Hibernate Search books, but they give me a ClassNotFound exception, even though I have all the necessary libs on the classpath.) Here are the last few lines of the Hibernate trace I managed to pull:

        16:09:32,814 INFO AnnotationConfiguration:369 - Hibernate Validator not found: ignoring
        16:09:32,892 INFO ConnectionProviderFactory:95 - Initializing connection provider: org.hibernate.connection.C3P0ConnectionProvider
        16:09:32,895 INFO C3P0ConnectionProvider:103 - C3P0 using driver: com.mysql.jdbc.Driver at URL: jdbc:mysql://localhost:3306/autolinkcrmcom_data
        16:09:32,898 INFO C3P0ConnectionProvider:104 - Connection properties: {user=root, password=****}
        16:09:32,900 INFO C3P0ConnectionProvider:107 - autocommit mode: false
        16:09:33,694 INFO SettingsFactory:116 - RDBMS: MySQL, version: 5.1.37-1ubuntu5.1
        16:09:33,696 INFO SettingsFactory:117 - JDBC driver: MySQL-AB JDBC Driver, version: mysql-connector-java-3.1.10 ( $Date: 2005/05/19 15:52:23 $, $Revision: 1.1.2.2 $ )
        16:09:33,701 INFO Dialect:175 - Using dialect: org.hibernate.dialect.MySQLDialect
        16:09:33,707 INFO TransactionFactoryFactory:59 - Using default transaction strategy (direct JDBC transactions)
        16:09:33,709 INFO TransactionManagerLookupFactory:80 - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended)
        16:09:33,711 INFO SettingsFactory:170 - Automatic flush during beforeCompletion(): disabled
        16:09:33,714 INFO SettingsFactory:174 - Automatic session close at end of transaction: disabled
        16:09:33,716 INFO SettingsFactory:181 - JDBC batch size: 15
        16:09:33,719 INFO SettingsFactory:184 - JDBC batch updates for versioned data: disabled
        16:09:33,721 INFO SettingsFactory:189 - Scrollable result sets: enabled
        16:09:33,723 DEBUG SettingsFactory:193 - Wrap result sets: disabled
        16:09:33,725 INFO SettingsFactory:197 - JDBC3 getGeneratedKeys(): enabled
        16:09:33,727 INFO SettingsFactory:205 - Connection release mode: auto
        16:09:33,730 INFO SettingsFactory:229 - Maximum outer join fetch depth: 2
        16:09:33,732 INFO SettingsFactory:232 - Default batch fetch size: 1000
        16:09:33,735 INFO SettingsFactory:236 - Generate SQL with comments: disabled
        16:09:33,737 INFO SettingsFactory:240 - Order SQL updates by primary key: disabled
        16:09:33,740 INFO SettingsFactory:244 - Order SQL inserts for batching: disabled
        16:09:33,742 INFO SettingsFactory:420 - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory
        16:09:33,744 INFO ASTQueryTranslatorFactory:47 - Using ASTQueryTranslatorFactory
        16:09:33,747 INFO SettingsFactory:252 - Query language substitutions: {}
        16:09:33,750 INFO SettingsFactory:257 - JPA-QL strict compliance: disabled
        16:09:33,752 INFO SettingsFactory:262 - Second-level cache: enabled
        16:09:33,754 INFO SettingsFactory:266 - Query cache: disabled
        16:09:33,757 INFO SettingsFactory:405 - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge
        16:09:33,759 INFO RegionFactoryCacheProviderBridge:61 - Cache provider: net.sf.ehcache.hibernate.EhCacheProvider
        16:09:33,762 INFO SettingsFactory:276 - Optimize cache for minimal puts: disabled
        16:09:33,764 INFO SettingsFactory:285 - Structured second-level cache entries: disabled
        16:09:33,766 INFO SettingsFactory:314 - Statistics: disabled
        16:09:33,769 INFO SettingsFactory:318 - Deleted entity synthetic identifier rollback: disabled
        16:09:33,771 INFO SettingsFactory:333 - Default entity-mode: pojo
        16:09:33,774 INFO SettingsFactory:337 - Named query checking : enabled
        16:09:33,869 INFO Version:20 - Hibernate Search 3.1.0.GA
        16:09:35,134 DEBUG DocumentBuilderIndexedEntity:157 - Field selection in projections is set to false for entity com.xyz.abc.
        recognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernaterecognized hibernateDocumentBuilderIndexedEntity

    I don't know what the last line indicates (the repeated "recognized hibernate" followed by "DocumentBuilderIndexedEntity"). After that line it doesn't do anything (no trace either) and just hangs.

    Read the article

  • Using a DataSet instead of custom business entities in soa and n-tier architecture

    - by kathy
    I'm working on a large, high-volume transactional enterprise application which has been designed using an n-tier architecture. It was developed on the .NET platform utilizing C#, VB.NET, Framework 3.5, ObjectDataSources, DataSet, WCF, ASP.NET UpdatePanel, JavaScript, JSON, and third-party tools. The application is supposed to be really scalable, easily maintained, and robust, to support integrations, and to make sure that my services are created using a format that can be understood by other systems. The problem is that this application is about 70% complete, but now I am wondering whether the following will cause us future issues: I'm using a DataSet and a DataTable to get/set the data from/to the stored procedures in the database via the ObjectDataSources, and I wonder if this will prevent my application from achieving the above goals. To be clear, I am not anti-OO; I write lots of classes for different purposes. But I didn't use entity objects (custom business entities) instead of the above approach because I have a large database that may contain 50 tables, and I was afraid that if I create entities for each table and then need to change the database schema in the future, it might have a huge effect on the application.

    Read the article

  • Consolidate data from many different databases into one with minimum latency

    - by NTDLS
    I have 12 databases totaling roughly 1.0 TB, each on a different physical server running SQL 2005 Enterprise, all with the exact same schema. I need to offload this data into a separate single database so that we can use it for other purposes (reporting, web services, etc.) with a maximum of 1 hour of latency. It should also be noted that these servers are all in the same rack, connected by gigabit connections, and that the inserts to the databases are minimal (avg. 2500 records/hour). The current method is very flaky: the data is currently being replicated (SQL Server transactional replication) from each of the 12 servers to a database on another server (yes, 12 different employee tables from 12 different servers into a single employee table on a different server). Every table has a primary key, and the rows are unique across all tables (there is a FacilityID in each table). What are my options? There has to be a simple way to do this.

    Read the article

  • inject a mockups to a bean that has @autowired annotations

    - by santiagozky
    I have a bean that has a couple of beans injected with the @Autowired annotation (no qualifier). Now, for testing purposes, I want to inject some mocks into the bean instead of the ones being autowired (some DAOs). Is there any way I can change which bean is injected without modifying my bean? I don't like the idea of adding annotations to my code just to test it and then removing them for production. I am using Spring 2.5. The bean looks like this:

        @Transactional
        @Service("validaBusinesService")
        public class ValidaBusinesServiceImpl implements ValidaBusinesService {

            @Autowired OperationDAO operationDAO;
            @Autowired BinDAO binDAO;
            @Autowired CardDAO cardDAO;
            @Autowired UserDAO userDAO;
            ...
            ...
        }
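    One commonly used option that leaves the production bean untouched is to set the autowired fields reflectively from the test. A rough sketch using Spring's ReflectionTestUtils is below; it assumes Mockito is on the test classpath (any other mocking approach or hand-written stubs would work the same way), and it reuses the DAO types and field names from the snippet above.

        import org.mockito.Mockito;
        import org.springframework.test.util.ReflectionTestUtils;

        public class ValidaBusinesServiceImplTest {

            public void testWithMockedDaos() {
                // Real service class from production code, created directly for the test.
                ValidaBusinesServiceImpl service = new ValidaBusinesServiceImpl();

                // Mocks created with Mockito (an assumption; not required by the approach).
                OperationDAO operationDao = Mockito.mock(OperationDAO.class);
                BinDAO binDao = Mockito.mock(BinDAO.class);

                // Overwrite the @Autowired fields reflectively; no production-code changes needed.
                ReflectionTestUtils.setField(service, "operationDAO", operationDao);
                ReflectionTestUtils.setField(service, "binDAO", binDao);

                // ... exercise the service and verify interactions on the mocks ...
            }
        }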

    Read the article

  • DAO, Spring and Hibernate

    - by EugeneP
    Correct me if anything here is wrong. When we use Spring DAO for ORM templates with the @Transactional annotation, we do not have control over the transaction and/or session: they are managed when the method is called externally, not within the method. Lazy loading saves resources: fewer queries to the DB and less memory needed to keep all the fetched collections in application memory. So if lazy=false, everything is fetched, including all associated collections, which is not efficient if there are 10,000 records in a linked set. Now, I have a method in a DAO class that is supposed to return a User object. It has collections that represent linked tables in the database. I need to get an object by id and then query its collections, but a Hibernate "failed to lazily initialize a collection" exception occurs when I try to access the linked collection that this DAO method returns. Please explain: what is the workaround here?
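    A common workaround is to touch (or explicitly fetch) the lazy collection while the transaction, and therefore the Hibernate session, is still open, for example inside a @Transactional service method rather than in the caller. A rough sketch follows, with the entity, DAO, and collection names (User, UserDao, getOrders) assumed purely for illustration:

        import org.hibernate.Hibernate;
        import org.springframework.transaction.annotation.Transactional;

        public class UserService {

            private final UserDao userDao; // assumed DAO returning attached entities

            public UserService(UserDao userDao) {
                this.userDao = userDao;
            }

            @Transactional(readOnly = true)
            public User loadUserWithOrders(Long id) {
                User user = userDao.findById(id);

                // Force the lazy collection to load while the session is still open;
                // an alternative is a fetch-join query in the DAO itself, e.g.
                // "select u from User u left join fetch u.orders where u.id = :id".
                Hibernate.initialize(user.getOrders());

                return user; // the collection is now safe to read after the transaction ends
            }
        }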

    Read the article

  • Dependency Injection with Spring/Junit/JPA

    - by Steve
    I'm trying to create JUnit tests for my JPA DAO classes, using Spring 2.5.6 and JUnit 4.8.1. My test case looks like this:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations={"classpath:config/jpaDaoTestsConfig.xml"} )
        public class MenuItem_Junit4_JPATest extends BaseJPATestCase {

            private ApplicationContext context;
            private InputStream dataInputStream;
            private IDataSet dataSet;

            @Resource
            private IMenuItemDao menuItemDao;

            @Test
            public void testFindAll() throws Exception {
                assertEquals(272, menuItemDao.findAll().size());
            }

            ... Other test methods omitted for brevity ...
        }

    I have the following in my jpaDaoTestsConfig.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">

            <!-- uses the persistence unit defined in the META-INF/persistence.xml JPA configuration file -->
            <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalEntityManagerFactoryBean">
                <property name="persistenceUnitName" value="CONOPS_PU" />
            </bean>

            <bean id="groupDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao" lazy-init="true" />
            <bean id="permissionDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.PermissionDao" lazy-init="true" />
            <bean id="applicationUserDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.ApplicationUserDao" lazy-init="true" />
            <bean id="conopsUserDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.ConopsUserDao" lazy-init="true" />
            <bean id="menuItemDao" class="mil.navy.ndms.conops.common.dao.impl.jpa.MenuItemDao" lazy-init="true" />

            <!-- enables interpretation of the @Required annotation to ensure that dependency injection actually occurs -->
            <bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor"/>

            <!-- enables interpretation of the @PersistenceUnit/@PersistenceContext annotations providing convenient access to EntityManagerFactory/EntityManager -->
            <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/>

            <!-- transaction manager for use with a single JPA EntityManagerFactory for transactional data access to a single datasource -->
            <bean id="jpaTransactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
                <property name="entityManagerFactory" ref="entityManagerFactory"/>
            </bean>

            <!-- enables interpretation of the @Transactional annotation for declarative transaction management using the specified JpaTransactionManager -->
            <tx:annotation-driven transaction-manager="jpaTransactionManager" proxy-target-class="false"/>
        </beans>

    Now, when I try to run this, I get the following: SEVERE: Caught exception while allowing TestExecutionListener [org.springframework.test.context.support.DependencyInjectionTestExecutionListener@fa60fa6] to prepare test instance [null(mil.navy.ndms.conops.common.dao.impl.MenuItem_Junit4_JPATest)] org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'mil.navy.ndms.conops.common.dao.impl.MenuItem_Junit4_JPATest': Injection of resource fields failed; nested exception is java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible
with resource type [javax.persistence.EntityManager] at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessAfterInstantiation(CommonAnnotationBeanPostProcessor.java:292) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:959) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireBeanProperties(AbstractAutowireCapableBeanFactory.java:329) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:110) at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:75) at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:255) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:93) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:130) at org.junit.internal.runners.JUnit4ClassRunner.runMethods(JUnit4ClassRunner.java:61) at org.junit.internal.runners.JUnit4ClassRunner$1.run(JUnit4ClassRunner.java:54) at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at org.junit.internal.runners.JUnit4ClassRunner.run(JUnit4ClassRunner.java:52) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) Caused by: java.lang.IllegalStateException: Specified field type [interface javax.persistence.EntityManagerFactory] is incompatible with resource type [javax.persistence.EntityManager] at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.checkResourceType(InjectionMetadata.java:159) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$PersistenceElement.(PersistenceAnnotationBeanPostProcessor.java:559) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor$1.doWith(PersistenceAnnotationBeanPostProcessor.java:359) at org.springframework.util.ReflectionUtils.doWithFields(ReflectionUtils.java:492) at org.springframework.util.ReflectionUtils.doWithFields(ReflectionUtils.java:469) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.findPersistenceMetadata(PersistenceAnnotationBeanPostProcessor.java:351) at org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor.postProcessMergedBeanDefinition(PersistenceAnnotationBeanPostProcessor.java:296) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyMergedBeanDefinitionPostProcessors(AbstractAutowireCapableBeanFactory.java:745) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:448) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory$1.run(AbstractAutowireCapableBeanFactory.java:409) at java.security.AccessController.doPrivileged(AccessController.java:219) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:380) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:264) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:221) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:261) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:185) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:168) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.autowireResource(CommonAnnotationBeanPostProcessor.java:435) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.getResource(CommonAnnotationBeanPostProcessor.java:409) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor$ResourceElement.getResourceToInject(CommonAnnotationBeanPostProcessor.java:537) at org.springframework.beans.factory.annotation.InjectionMetadata$InjectedElement.inject(InjectionMetadata.java:180) at org.springframework.beans.factory.annotation.InjectionMetadata.injectFields(InjectionMetadata.java:105) at org.springframework.context.annotation.CommonAnnotationBeanPostProcessor.postProcessAfterInstantiation(CommonAnnotationBeanPostProcessor.java:289) ... 18 more It seems to be telling me that its attempting to store an EntityManager object into an EntityManagerFactory field, but I don't understand how or why. My DAO classes accept both an EntityManager and EntityManagerFactory via the @PersistenceContext attribute, and they work find if I load them up and run them without the @ContextConfiguration attribute (i.e. if I just use the XmlApplcationContext to load the DAO and the EntityManagerFactory directly in setUp ()). Any insights would be appreciated. Thanks. --Steve

    Read the article

  • EF and design pattern

    - by kathy
    Hello, I'm working on a high-volume transactional enterprise application (ASP.NET, Windows app, and Oracle app as clients) which has been designed using an n-tier, SOA architecture. The application was developed on the .NET platform utilizing C#, VB.NET, Framework 3.5 (I'm planning to upgrade to Framework 4.0), EF (in the data layer) and WCF (services in the service layer). Since this is my first project using EF, and having read about using EF in n-tier and SOA applications and the features available in EF, I have the following questions: Which design pattern should I use with EF in the data layer (Simple Entities, Change Set, Self-Tracking Entities or DTOs)? And which design pattern should I use in the other tiers and layers to follow EF best practices? Thanks

    Read the article

  • Question about spring transaction propagation

    - by Yousui
    Hi guys, I have a question about Spring transaction propagation. If I use @Transactional(propagation = Propagation.REQUIRED) to annotate a method m1, then when execution enters m1 and there is already a transaction, m1 will use that one. What about the transaction after m1 returns: does it end, or is it still open? (If I call m1 from another method, there may still be other things to do after the invocation.) In summary, I want to know whether the transaction ends or stays open when an annotated method exits. Thanks a lot.
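    The behaviour can be made concrete with a small sketch (class and method names are invented for illustration): with Propagation.REQUIRED, m1 joins the caller's transaction if one exists, and that transaction commits only when the outermost transactional method returns; if m1 is itself the outermost one, it commits when m1 returns.

        import org.springframework.transaction.annotation.Propagation;
        import org.springframework.transaction.annotation.Transactional;

        public class OrderFacade {

            private final OrderService orderService; // hypothetical collaborator owning m1

            public OrderFacade(OrderService orderService) {
                this.orderService = orderService;
            }

            @Transactional // outer transaction starts here
            public void placeOrder(long orderId) {
                orderService.m1(orderId); // joins the existing transaction (REQUIRED)
                // ... work here still runs inside the same, still-open transaction ...
            }                             // transaction commits (or rolls back) when this method exits
        }

        class OrderService {

            @Transactional(propagation = Propagation.REQUIRED)
            public void m1(long orderId) {
                // Called from placeOrder(): participates in the caller's transaction,
                // which stays open after m1 returns.
                // Called with no transaction active: a new transaction is started and
                // committed when m1 itself returns.
            }
        }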

    Read the article

  • SQLServer Replication, dealing with updating data in a subscriber without merging with the publisher

    - by FlySwat
    To simplify my problem, picture two databases, one is a publisher, one is a replicated subscriber. Most of the time, changes are configured in the publisher, and pushed to the subscriber once per day using transactional replication. (The subscriber is the main production database). However, on a rare occasion, we need to make a change directly to the subscriber database. This does not break replication, however to ensure consistency, we also want to replicate this single change back to the publisher database. My first thought was to make both databases publishers to each other, but that does not work. Merge replication also does not work as that merges the data together, and we really only want to push that single change from the subscriber back to the publisher. Is there some other replication scheme I can use here, or do I need to invent my own thing?

    Read the article

  • Are these tables too big for SQL Server or Oracle

    - by Jeffrey Cameron
    Hey all, I'm not much of a database guru so I would like some advice. Background We have 4 tables that are currently stored in Sybase IQ. We don't currently have any choice over this, we're basically stuck with what someone else decided for us. Sybase IQ is a column-oriented database that is perfect for a data warehouse. Unfortunately, my project needs to do a lot of transactional updating (we're more of an operational database) so I'm looking for more mainstream alternatives. Question Given these tables' dimensions, would anyone consider SQL Server or Oracle to be a viable alternative? Table 1 : 172 columns * 32 million rows Table 2 : 453 columns * 7 million rows Table 3 : 112 columns * 13 million rows Table 4 : 147 columns * 2.5 million rows Given the size of data what are the things I should be concerned about in terms of database choice, server configuration, memory, platform, etc.?

    Read the article

  • GAE Entity Groups/Transaction

    - by bach
    Hi, Say you have a client "buying card" object and a product object. When the client chooses the buy option, you create the card object and then add a product to it. This should be transactional, but the card is not in the same entity group as the product, since they have already been persisted separately, isn't it? Is there any way to overcome this simple scenario safely and easily? Here's a code sample:

        Transaction tx = pm.currentTransaction();
        tx.begin();

        Product prod = pm.getObjectById(Product.class, "TV");
        prod.setReserved(true);
        pm.makePersistent(prod);

        Card card = pm.getObjectById(Card.class, "user123"); // <-- will throw an exception, as card and prod aren't in the same entity group
        card.setProd(prod);
        pm.makePersistent(card);

        try {
            tx.commit();
            break;
        }
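    For reference, the usual way to let two entities be updated in one App Engine transaction is to put them in the same entity group by giving one of them a parent key. The sketch below uses the low-level datastore API rather than the JDO code above, reuses the same example kinds and key names, and assumes the Product can be modeled as a child of the Card; it is illustrative only.

        import com.google.appengine.api.datastore.DatastoreService;
        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.api.datastore.Entity;
        import com.google.appengine.api.datastore.EntityNotFoundException;
        import com.google.appengine.api.datastore.Key;
        import com.google.appengine.api.datastore.KeyFactory;
        import com.google.appengine.api.datastore.Transaction;

        public class CardCheckout {

            public void reserveForCard() throws EntityNotFoundException {
                DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

                // The product key is created with the card key as parent,
                // so both entities live in one entity group.
                Key cardKey = KeyFactory.createKey("Card", "user123");
                Key prodKey = KeyFactory.createKey(cardKey, "Product", "TV");

                Transaction tx = ds.beginTransaction();
                try {
                    Entity prod = ds.get(tx, prodKey);
                    prod.setProperty("reserved", true);
                    ds.put(tx, prod);

                    Entity card = ds.get(tx, cardKey);
                    card.setProperty("product", prodKey);
                    ds.put(tx, card);

                    tx.commit(); // both writes succeed or fail together
                } finally {
                    if (tx.isActive()) {
                        tx.rollback();
                    }
                }
            }
        }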

    Read the article

  • MySQL: Transactions across multiple threads

    - by Zombies
    Preliminary: I have an application which maintains a thread pool of about 100 threads. Each thread can last about 1-30 seconds before a new task replaces it. When a thread ends, it almost always results in inserting 1-3 records into a table that is shared by all of the threads. Right now there is no transactional support, but I am trying to add it now. So, the goal: I want to implement a transaction for this. The rules for whether the transaction commits or rolls back reside in the main thread; basically there is a simple function that returns a boolean. Can I implement a transaction across multiple connections? If not, can multiple threads share the same connection? (Note: there are a LOT of inserts going on here, and that is a requirement.)
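    One pattern that sidesteps both problems (a transaction cannot span connections, and a single JDBC connection should not be used concurrently from many threads) is to have the worker threads hand their rows to the coordinating thread over a queue, and let that one thread own the single connection and the single transaction. A rough sketch follows; the JDBC URL, table, and column names are invented for illustration:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class TransactionalCollector {

            // Worker threads drop their records here instead of writing to MySQL themselves.
            private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

            public void submit(String record) {
                pending.add(record);
            }

            // Runs on the main/coordinating thread, which owns the only connection.
            public void flush(boolean shouldCommit) throws SQLException {
                try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "user", "pass")) {
                    conn.setAutoCommit(false); // one transaction for the whole batch
                    try (PreparedStatement ps = conn.prepareStatement("INSERT INTO results (payload) VALUES (?)")) {
                        String record;
                        while ((record = pending.poll()) != null) {
                            ps.setString(1, record);
                            ps.addBatch();
                        }
                        ps.executeBatch();
                    }
                    if (shouldCommit) {
                        conn.commit();   // decision made in the main thread
                    } else {
                        conn.rollback();
                    }
                }
            }
        }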

    Read the article
