Search Results

Search found 347 results on 14 pages for 'timestamps'.

Page 11 of 14

  • Oracle Solaris Crash Analysis Tool 5.3 now available

    - by user12609056
    Oracle Solaris Crash Analysis Tool 5.3

    The Oracle Solaris Crash Analysis Tool Team is happy to announce the availability of release 5.3. This release addresses bugs discovered since the release of 5.2, plus enhancements to support Oracle Solaris 11 and updates to Oracle Solaris versions 7 through 10. The packages are available on My Oracle Support - simply search for Patch 13365310 to find the downloadable packages.

    Release Notes

    General - blast support: The blast GUI has been removed and is no longer supported.

    Oracle Solaris 2.6 Support: As of Oracle Solaris Crash Analysis Tool 5.3, support for Oracle Solaris 2.6 has been dropped. If you have systems running Solaris 2.6, you will need to use Oracle Solaris Crash Analysis Tool 5.2 or earlier to read their crash dumps.

    New Commands - Sanity Command: The sanity checks that are run at tool start-up can be re-run using the coreinfo command, but many users were unaware of this. Those checks can still be run using that command, and a new command, namely sanity, can now be used to re-run the checks at any time.

    Interface Changes - scat_explore -r and -t options: The -r option has been added to scat_explore so that a base directory can be specified, and the -t option was added to enable color tagging of the output. The scat_explore sub-command now accepts new options. Usage is:

        scat --scat_explore [-atv] [-r base_dir] [-d dest] [unix.N] [vmcore.N]

    Where:

    -v  Verbose Mode: the command prints messages highlighting what it is doing.
    -a  Auto Mode: the command does not prompt for input from the user as it runs.
    -d dest  Instructs scat_explore to save its output in the directory dest instead of the present working directory.
    -r base_dir  Instructs scat_explore to save its output under the directory base_dir instead of the present working directory. If a destination is not specified using the -d option, scat_explore names its output file "scat_explore_system_name_hostid_lbolt_value_corefile_name".
    -t  Enable color tags. When enabled, scat_explore tags important text with colors that match the level of importance. These colors correspond to the colors normally printed when running Oracle Solaris Crash Analysis Tool in interactive mode.

    Tag Name   Definition
    FATAL      An extremely important message which should be investigated.
    WARNING    A warning that may or may not have anything to do with the crash.
    ERROR      An error, usually printed with a suggested command.
    ALERT      Used to indicate something the tool discovered.
    INFO       A purely informational message.
    INFO2      A follow-up to an INFO-tagged message.
    REDZONE    Usually used when printing memory info showing something is in the kernel's REDZONE.

    N  The number of the crash dump. Specifying unix.N vmcore.N is optional and not required.

    Example:

        $ scat --scat_explore -a -v -r /tmp vmcore.0
        #Output directory: /tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0
        #Tar filename: scat_explore_oomph_833a2959_0x28800_vmcore.0.tar
        #Extracting crash data...
        #Gathering standard crash data collections...
        #Panic string indicates a possible hang...
        #Gathering Hang Related data...
        #Creating tar file...
        #Compressing tar file...
        #Successful extraction
        SCAT_EXPLORE_DATA_DIR=/tmp/scat_explore_oomph_833a2959_0x28800_vmcore.0

    Sending scat_explore results: The .tar.gz file that results from a scat_explore run may be sent using Oracle Secure File Transfer. The Oracle Secure File Transfer User Guide describes how to use it to send a file. The send_scat_explore script now has a -t option for specifying a "to" address for sending the results. This option is mandatory.
    Known Issues: There are a couple of known issues that we are addressing in release 5.4, which you should expect to see soon: display of timestamps in threads and clock information is incorrect in some cases, and there are alignment issues with some of the tables produced by the tool.

    Read the article

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and I've spent the last 10 years working on the Oracle GoldenGate product. So most of my posting will probably be focused on OGG.

    One question that comes up all the time is around active-active replication with Oracle GoldenGate. How do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment in a later blog, but here is a simple architecture: [architecture diagram]

    The two most common resolution routines are host-based resolution and timestamp-based resolution.

    Host-based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host-based resolution will work just fine.

    Timestamp-based resolution, on the other hand, is a little trickier. In this case, you can decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record? Or vice versa? This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated on the table. Most homegrown applications can always be customized to include these requirements, but it's a little more difficult with 3rd party applications, and might even be impossible for large ERP type applications. If your database has these features - whether it's primary keys for host-based resolution, or primary keys and timestamp columns for timestamp-based resolution - then your application could be a great candidate for active-active replication.

    But table structure is not the only requirement. The other consideration applies when there is a conflict; i.e., do I need to perform any notification or track down the user that had their data overwritten? In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exceptions table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user.

    Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that can reduce, or in some cases eliminate, the potential for conflicts. This makes the whole implementation that much easier and foolproof. And I'll cover these in my next blog.
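
    To make the timestamp-based routine concrete, here is a minimal sketch of the "newer record wins" rule, shown in Java purely for illustration - the Row class and column names are hypothetical, and this is not GoldenGate API or configuration syntax:

        import java.sql.Timestamp;

        // Hypothetical row holding the primary key and the required timestamp column.
        class Row {
            final String primaryKey;
            final Timestamp lastUpdated;
            Row(String pk, Timestamp ts) { primaryKey = pk; lastUpdated = ts; }
        }

        public class ConflictResolver {
            // "Newer wins": flip the comparison for an "older wins" policy.
            static Row resolve(Row existing, Row incoming) {
                return incoming.lastUpdated.after(existing.lastUpdated) ? incoming : existing;
            }

            public static void main(String[] args) {
                Row local  = new Row("42", Timestamp.valueOf("2013-06-01 10:00:00"));
                Row remote = new Row("42", Timestamp.valueOf("2013-06-01 10:00:05"));
                System.out.println(resolve(local, remote).lastUpdated); // 2013-06-01 10:00:05.0
            }
        }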

    Read the article

  • Hopping/Tumbling Windows Could Introduce Latency.

    This is a pre-article to one I am going to be writing on adjusting an event's time and duration to satisfy business process requirements, but it is one that I think is really useful when understanding the way that Hopping/Tumbling windows work within StreamInsight. A Tumbling window is just a special shortcut version of a Hopping window where the width of the window is equal to the size of the hop.

    Here is the simplest and most often used definition for a Hopping Window (you can find them all here):

        public static CepWindowStream<CepWindow<TPayload>> HoppingWindow<TPayload>(
            this CepStream<TPayload> source,
            TimeSpan windowSize,
            TimeSpan hopSize,
            WindowInputPolicy inputPolicy,
            HoppingWindowOutputPolicy outputPolicy
        )

    And here is the definition for a Tumbling Window:

        public static CepWindowStream<CepWindow<TPayload>> TumblingWindow<TPayload>(
            this CepStream<TPayload> source,
            TimeSpan windowSize,
            WindowInputPolicy inputPolicy,
            HoppingWindowOutputPolicy outputPolicy
        )

    These methods allow you to group events into windows of a temporal size. It is a really useful and simple feature in StreamInsight. One of the downsides, though, is that the windows cannot be flushed until an event in a following window occurs. This means that you will potentially never see some events, or see them with a delay. Let me explain.

    Remember that a stream is a potentially unbounded sequence of events. Events in StreamInsight are given a StartTime. It is this StartTime that is used to calculate into which temporal window an event falls. It is best practice to assign a timestamp from the source system and not one from the system clock on the processing server. StreamInsight cannot know when a window is over. It cannot tell whether you have received all events in the window or whether some events have been delayed, which means that StreamInsight cannot flush the stream for you.

    Imagine you have events with the following timestamps:

        12:10:10 PM
        12:10:20 PM
        12:10:35 PM
        12:10:45 PM
        11:59:59 PM

    And imagine that you have defined a 1 minute Tumbling Window over this stream using the following syntax:

        var HoppingStream = from shift in inputStream.TumblingWindow(TimeSpan.FromMinutes(1), HoppingWindowOutputPolicy.ClipToWindowEnd)
                            select new WindowCountPayload { CountInWindow = (Int32)shift.Count() };

    The events between 12:10:10 PM and 12:10:45 PM will not be seen until the event at 11:59:59 PM arrives. This could be a real problem if you need to react to windows promptly. This can always be worked around by using a different design pattern, but a lot of the examples I see assume there is a constant, very frequent stream of events, resulting in windows always being flushed.

    Further examples of using windowing in StreamInsight can be found here.
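
    To see why the first four events wait on the 11:59:59 PM straggler, here is a tiny event-time simulation (written in Java purely for illustration - it is not StreamInsight API). Each event is bucketed into a 1-minute window by its own timestamp, and a window's count can only be emitted once an event belonging to a later window shows up:

        public class TumblingWindowDemo {
            public static void main(String[] args) {
                // 12:10:10 PM .. 12:10:45 PM and 11:59:59 PM, as seconds since midnight
                long[] eventSecs = {43810, 43820, 43835, 43845, 86399};
                long windowSize = 60;   // 1-minute tumbling windows
                long current = -1;
                int count = 0;
                for (long t : eventSecs) {
                    long bucket = t / windowSize;   // window the event falls into
                    if (current >= 0 && bucket != current) {
                        // Only now, on the first event of a later window, can the
                        // previous window be emitted. Without it, the window waits forever.
                        System.out.println("window " + current + " count=" + count);
                        count = 0;
                    }
                    current = bucket;
                    count++;
                }
                // The final window (count=1) is never printed: no later event arrives.
            }
        }

    Running it prints "window 730 count=4" only when the 11:59:59 PM event is processed, which is exactly the latency described above.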

    Read the article

  • Wicket, Spring and Hibernate - Testing with Unitils - Error: Table not found in statement [select relname from pg_class]

    - by John
    Hi there. I've been following a tutorial and a sample application, namely 5 Days of Wicket - Writing the tests: http://www.mysticcoders.com/blog/2009/03/10/5-days-of-wicket-writing-the-tests/ I've set up my own little project with a simple shoutbox that saves messages to a database. I then wanted to set up a couple of tests that would make sure that if a message is stored in the database, the retrieved object would contain the exact same data. Upon running mvn test all my tests fail. The exception has been pasted in the first code box underneath. I've noticed that even though my unitils.properties says to use the 'hsqldb' dialect, this message is still output in the console window when starting the tests: INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect. I've added the entire dump from the console as well at the bottom of this post (which goes on for miles and miles :-)). The exception is:

        Caused by: java.sql.SQLException: Table not found in statement [select relname from pg_class]
        at org.hsqldb.jdbc.Util.sqlException(Unknown Source)
        at org.hsqldb.jdbc.jdbcStatement.fetchResult(Unknown Source)
        at org.hsqldb.jdbc.jdbcStatement.executeQuery(Unknown Source)
        at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:188)
        at org.hibernate.tool.hbm2ddl.DatabaseMetadata.initSequences(DatabaseMetadata.java:151)
        at org.hibernate.tool.hbm2ddl.DatabaseMetadata.(DatabaseMetadata.java:69)
        at org.hibernate.tool.hbm2ddl.DatabaseMetadata.(DatabaseMetadata.java:62)
        at org.springframework.orm.hibernate3.LocalSessionFactoryBean$3.doInHibernate(LocalSessionFactoryBean.java:958)
        at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:419)
        ... 49 more

    I've set up my unitils.properties file like so:

        database.driverClassName=org.hsqldb.jdbcDriver
        database.url=jdbc:hsqldb:mem:PUBLIC
        database.userName=sa
        database.password=
        database.dialect=hsqldb
        database.schemaNames=PUBLIC

    My abstract IntegrationTest class:

        @SpringApplicationContext({"/com/upbeat/shoutbox/spring/applicationContext.xml", "applicationContext-test.xml"})
        public abstract class AbstractIntegrationTest extends UnitilsJUnit4 {
            private ApplicationContext applicationContext;
        }

    applicationContext-test.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-2.5.xsd">
            <bean id="dataSource" class="org.unitils.database.UnitilsDataSourceFactoryBean"/>
        </beans>

    and finally, one of the test classes:

        package com.upbeat.shoutbox.web;

        import org.apache.wicket.spring.injection.annot.test.AnnotApplicationContextMock;
        import org.apache.wicket.util.tester.WicketTester;
        import org.junit.Before;
        import org.junit.Test;
        import org.unitils.spring.annotation.SpringBeanByType;

        import com.upbeat.shoutbox.HomePage;
        import com.upbeat.shoutbox.integrations.AbstractIntegrationTest;
        import com.upbeat.shoutbox.persistence.ShoutItemDao;
        import com.upbeat.shoutbox.services.ShoutService;

        public class TestHomePage extends AbstractIntegrationTest {
            @SpringBeanByType
            private ShoutService svc;

            @SpringBeanByType
            private ShoutItemDao dao;

            protected WicketTester tester;

            @Before
            public void setUp() {
                AnnotApplicationContextMock appctx = new AnnotApplicationContextMock();
                appctx.putBean("shoutItemDao", dao);
                appctx.putBean("shoutService", svc);
                tester = new WicketTester();
            }

            @Test
            public void testRenderMyPage() {
                // start and render the test page
                tester.startPage(HomePage.class);
                // assert rendered page class
                tester.assertRenderedPage(HomePage.class);
                // assert rendered label component
                tester.assertLabel("message", "If you see this message wicket is properly configured and running");
            }
        }

    Dump from console when running mvn test:

        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [INFO] Building shoutbox
        [INFO] task-segment: [test]
        [INFO] ------------------------------------------------------------------------
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 3 resources
        [INFO] Copying 4 resources
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [resources:testResources {execution: default-testResources}]
        [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 2 resources
        [INFO] [compiler:testCompile {execution: default-testCompile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [surefire:test {execution: default-test}]
        [INFO] Surefire report directory: F:\Projects\shoutbox\target\surefire-reports
        INFO - ConfigurationLoader - Loaded main configuration file unitils-default.properties from classpath.
        INFO - ConfigurationLoader - Loaded custom configuration file unitils.properties from classpath.
        INFO - ConfigurationLoader - No local configuration file unitils-local.properties found.
------------------------------------------------------- T E S T S ------------------------------------------------------- Running com.upbeat.shoutbox.web.TestViewShoutsPage Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.02 sec INFO - Version - Hibernate Annotations 3.4.0.GA INFO - Environment - Hibernate 3.3.0.SP1 INFO - Environment - hibernate.properties not found INFO - Environment - Bytecode provider name : javassist INFO - Environment - using JDK 1.4 java.sql.Timestamp handling INFO - Version - Hibernate Commons Annotations 3.1.0.GA INFO - AnnotationBinder - Binding entity from annotated class: com.upbeat.shoutbox.models.ShoutItem INFO - QueryBinder - Binding Named query: item.getById = from ShoutItem item where item.id = :id INFO - QueryBinder - Binding Named query: item.find = from ShoutItem item order by item.timestamp desc INFO - QueryBinder - Binding Named query: item.count = select count(item) from ShoutItem item INFO - EntityBinder - Bind entity com.upbeat.shoutbox.models.ShoutItem on table SHOUT_ITEMS INFO - AnnotationConfiguration - Hibernate Validator not found: ignoring INFO - notationSessionFactoryBean - Building new Hibernate SessionFactory INFO - earchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled. INFO - ConnectionProviderFactory - Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider INFO - SettingsFactory - RDBMS: HSQL Database Engine, version: 1.8.0 INFO - SettingsFactory - JDBC driver: HSQL Database Engine Driver, version: 1.8.0 INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect INFO - TransactionFactoryFactory - Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory INFO - actionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) INFO - SettingsFactory - Automatic flush during beforeCompletion(): disabled INFO - SettingsFactory - Automatic session close at end of transaction: disabled INFO - SettingsFactory - JDBC batch size: 1000 INFO - SettingsFactory - JDBC batch updates for versioned data: disabled INFO - SettingsFactory - Scrollable result sets: enabled INFO - SettingsFactory - JDBC3 getGeneratedKeys(): disabled INFO - SettingsFactory - Connection release mode: auto INFO - SettingsFactory - Default batch fetch size: 1 INFO - SettingsFactory - Generate SQL with comments: disabled INFO - SettingsFactory - Order SQL updates by primary key: disabled INFO - SettingsFactory - Order SQL inserts for batching: disabled INFO - SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory INFO - ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory INFO - SettingsFactory - Query language substitutions: {} INFO - SettingsFactory - JPA-QL strict compliance: disabled INFO - SettingsFactory - Second-level cache: enabled INFO - SettingsFactory - Query cache: enabled INFO - SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge INFO - FactoryCacheProviderBridge - Cache provider: org.hibernate.cache.HashtableCacheProvider INFO - SettingsFactory - Optimize cache for minimal puts: disabled INFO - SettingsFactory - Structured second-level cache entries: disabled INFO - SettingsFactory - Query cache factory: org.hibernate.cache.StandardQueryCacheFactory INFO - SettingsFactory - Echoing all SQL 
to stdout INFO - SettingsFactory - Statistics: disabled INFO - SettingsFactory - Deleted entity synthetic identifier rollback: disabled INFO - SettingsFactory - Default entity-mode: pojo INFO - SettingsFactory - Named query checking : enabled INFO - SessionFactoryImpl - building session factory INFO - essionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured INFO - UpdateTimestampsCache - starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache INFO - StandardQueryCache - starting query cache at region: org.hibernate.cache.StandardQueryCache INFO - notationSessionFactoryBean - Updating database schema for Hibernate SessionFactory INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [org/springframework/jdbc/support/sql-error-codes.xml] INFO - SQLErrorCodesFactory - SQLErrorCodes loaded: [DB2, Derby, H2, HSQL, Informix, MS-SQL, MySQL, Oracle, PostgreSQL, Sybase] INFO - DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@3e0ebb: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy INFO - sPathXmlApplicationContext - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@a8e586: display name [org.springframework.context.support.ClassPathXmlApplicationContext@a8e586]; startup date [Tue May 04 18:19:58 CEST 2010]; root of context hierarchy INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [com/upbeat/shoutbox/spring/applicationContext.xml] INFO - XmlBeanDefinitionReader - Loading XML bean definitions from class path resource [applicationContext-test.xml] INFO - DefaultListableBeanFactory - Overriding bean definition for bean 'dataSource': replacing [Generic bean: class [org.apache.commons.dbcp.BasicDataSource]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=close; defined in class path resource [com/upbeat/shoutbox/spring/applicationContext.xml]] with [Generic bean: class [org.unitils.database.UnitilsDataSourceFactoryBean]; scope=singleton; abstract=false; lazyInit=false; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null; defined in class path resource [applicationContext-test.xml]] INFO - sPathXmlApplicationContext - Bean factory for application context [org.springframework.context.support.ClassPathXmlApplicationContext@a8e586]: org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1 INFO - pertyPlaceholderConfigurer - Loading properties file from class path resource [application.properties] INFO - DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1: defining beans 
[propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy INFO - AnnotationBinder - Binding entity from annotated class: com.upbeat.shoutbox.models.ShoutItem INFO - QueryBinder - Binding Named query: item.getById = from ShoutItem item where item.id = :id INFO - QueryBinder - Binding Named query: item.find = from ShoutItem item order by item.timestamp desc INFO - QueryBinder - Binding Named query: item.count = select count(item) from ShoutItem item INFO - EntityBinder - Bind entity com.upbeat.shoutbox.models.ShoutItem on table SHOUT_ITEMS INFO - AnnotationConfiguration - Hibernate Validator not found: ignoring INFO - notationSessionFactoryBean - Building new Hibernate SessionFactory INFO - earchEventListenerRegister - Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled. INFO - ConnectionProviderFactory - Initializing connection provider: org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider INFO - SettingsFactory - RDBMS: HSQL Database Engine, version: 1.8.0 INFO - SettingsFactory - JDBC driver: HSQL Database Engine Driver, version: 1.8.0 INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect INFO - TransactionFactoryFactory - Transaction strategy: org.springframework.orm.hibernate3.SpringTransactionFactory INFO - actionManagerLookupFactory - No TransactionManagerLookup configured (in JTA environment, use of read-write or transactional second-level cache is not recommended) INFO - SettingsFactory - Automatic flush during beforeCompletion(): disabled INFO - SettingsFactory - Automatic session close at end of transaction: disabled INFO - SettingsFactory - JDBC batch size: 1000 INFO - SettingsFactory - JDBC batch updates for versioned data: disabled INFO - SettingsFactory - Scrollable result sets: enabled INFO - SettingsFactory - JDBC3 getGeneratedKeys(): disabled INFO - SettingsFactory - Connection release mode: auto INFO - SettingsFactory - Default batch fetch size: 1 INFO - SettingsFactory - Generate SQL with comments: disabled INFO - SettingsFactory - Order SQL updates by primary key: disabled INFO - SettingsFactory - Order SQL inserts for batching: disabled INFO - SettingsFactory - Query translator: org.hibernate.hql.ast.ASTQueryTranslatorFactory INFO - ASTQueryTranslatorFactory - Using ASTQueryTranslatorFactory INFO - SettingsFactory - Query language substitutions: {} INFO - SettingsFactory - JPA-QL strict compliance: disabled INFO - SettingsFactory - Second-level cache: enabled INFO - SettingsFactory - Query cache: enabled INFO - SettingsFactory - Cache region factory : org.hibernate.cache.impl.bridge.RegionFactoryCacheProviderBridge INFO - FactoryCacheProviderBridge - Cache provider: org.hibernate.cache.HashtableCacheProvider INFO - SettingsFactory - Optimize cache for minimal puts: disabled INFO - SettingsFactory - Structured second-level cache entries: disabled INFO - SettingsFactory - Query cache factory: org.hibernate.cache.StandardQueryCacheFactory INFO - SettingsFactory - Echoing all SQL to stdout INFO - SettingsFactory - Statistics: disabled INFO - SettingsFactory - Deleted entity synthetic identifier rollback: disabled INFO - 
SettingsFactory - Default entity-mode: pojo INFO - SettingsFactory - Named query checking : enabled INFO - SessionFactoryImpl - building session factory INFO - essionFactoryObjectFactory - Not binding factory to JNDI, no JNDI name configured INFO - UpdateTimestampsCache - starting update timestamps cache at region: org.hibernate.cache.UpdateTimestampsCache INFO - StandardQueryCache - starting query cache at region: org.hibernate.cache.StandardQueryCache INFO - notationSessionFactoryBean - Updating database schema for Hibernate SessionFactory INFO - Dialect - Using dialect: org.hibernate.dialect.PostgreSQLDialect INFO - DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@5dfaf1: defining beans [propertyConfigurer,dataSource,sessionFactory,shoutService,shoutItemDao,wicketApplication,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager]; root of factory hierarchy Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.34 sec <<< FAILURE! Running com.upbeat.shoutbox.integrations.ShoutItemIntegrationTest Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0 sec <<< FAILURE! Running com.upbeat.shoutbox.mocks.ShoutServiceTest Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.01 sec <<< FAILURE! Results : Tests in error: initializationError(com.upbeat.shoutbox.web.TestViewShoutsPage) testRenderMyPage(com.upbeat.shoutbox.web.TestHomePage) initializationError(com.upbeat.shoutbox.integrations.ShoutItemIntegrationTest) initializationError(com.upbeat.shoutbox.mocks.ShoutServiceTest) Tests run: 4, Failures: 0, Errors: 4, Skipped: 0 [INFO] ------------------------------------------------------------------------ [ERROR] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] There are test failures. Please refer to F:\Projects\shoutbox\target\surefire-reports for the individual test results. [INFO] ------------------------------------------------------------------------ [INFO] For more information, run Maven with the -e switch [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Tue May 04 18:19:58 CEST 2010 [INFO] Final Memory: 13M/31M [INFO] ------------------------------------------------------------------------ Any help is greatly appreciated.

    Read the article

  • ASP.Net FormsAuthentication Redirect Loses the cookie between Redirect and Application_AuthenticateRequest

    - by Joel Etherton
    I have a FormsAuthentication cookie that is persistent and works independently in a development, test, and production environment. I have a user that can authenticate, the user object is created, and the authentication cookie is added to the response:

        'Custom object to grab the TLD from the url
        authCookie.Domain = myTicketModule.GetTopLevelDomain(Request.ServerVariables("HTTP_HOST"))
        FormsAuthentication.SetAuthCookie(authTicket.Name, False)
        Response.SetCookie(authCookie)

    The user gets processed a little bit to check for a first-time login, security questions, etc., and is then redirected with the following tidbit:

        Session.Add("ForceRedirect", "/FirstTimeLogin.aspx")
        Response.Redirect("~/FirstTimeLogin.aspx", True)

    With a debug break, I can verify that the cookie collection holds both a cookie not related to authentication that I set for a different purpose and the FormsAuthentication cookie. Then the next step in the process occurs at Application_AuthenticateRequest in the global.asax:

        Sub Application_AuthenticateRequest(ByVal sender As Object, ByVal e As EventArgs)
            Dim formsCookieName As String = myConfigurationManager.AppSettings("FormsCookieName")
            Dim authCookie As HttpCookie = Request.Cookies(formsCookieName)

    At this point, for this ONE user, authCookie is Nothing. I have 15,000 other users who are not impacted in this manner. However, for one user the cookie just vanishes without a trace. I've seen this before with w3wp.exe exceptions, state server exceptions and other IIS process related exceptions, but I'm getting no exceptions in the event log. w3wp.exe is not crashing, the state server has some timeouts but they appear unrelated (as verified by timestamps), and it only happens to this one user on this one domain (this code is used across 2 different TLDs with approximately 10 other subdomains). One avenue I'm investigating is that the cookie might just be too large. I would think that there would be a check for the size of the cookie going into the response, and I wouldn't think it would impact it this way. Any ideas why the request might be dumping the cookie?

    NOTE: The secondary cookie I mentioned that I set also gets dumped (and it's very tiny).

    EDIT-NOTE: The session token is NOT lost when this happens. However, since the authentication cookie is lost, it is ignored and replaced on a subsequent login.

    Read the article

  • Silverlight and Encryption, how to store/generate the key/iv pair?

    - by cmaduro
    I have a Silverlight app that connects to a php webservice. I want to encrypt the communication between the webservice and the Silverlight client. I'm not relying on SSL. I'm encrypting/decrypting the POST string myself using a 256-bit AES key and IV. The big questions then are: How do I generate a random, unique key/iv pair in PHP? How do I share this key/iv pair between the web service and the Silverlight client in a secure way? It seems impossible without having some kind of hard-coded key or iv on the client, which would compromise security. This is a public website; there are no logins, just the requirement of secure communication. I can hard-code the seed for the key/iv (which is hashed with SHA256 with a timestamp salt and then assigned as the key or iv) in the PHP source code; that's on the server, so that is pretty safe. However, on the client the seed for the key/iv pair would be visible if it is hard-coded. Furthermore, using a timestamp as the basis for uniqueness/randomness is definitely not OK, since timestamps are predictable. It does however provide a common factor between the C# code and the PHP code. The only other option that I can think of would be to have a 3rd service involved that provides the key/iv to the Silverlight client, as well as the php webservice. This of course starts the cycle anew, with the question of how to store the credentials for accessing the key/iv distribution service on the Silverlight client. Sounds like the solution is then asymmetric encryption, since sensitive data will be viewed only on the administrative back end of the website. Unfortunately Silverlight has no asymmetric encryption classes. The solution? Roll my own Diffie-Hellman key exchange! Plug that key into AES256!
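
    For reference, here is what the Diffie-Hellman-then-AES idea looks like as a minimal sketch, written against Java's JCE purely for illustration (neither Silverlight nor PHP API). Note that unauthenticated DH is still open to man-in-the-middle attacks, so this only solves key distribution, not endpoint trust:

        import javax.crypto.KeyAgreement;
        import javax.crypto.interfaces.DHPublicKey;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.KeyPair;
        import java.security.KeyPairGenerator;
        import java.security.MessageDigest;

        public class DhSketch {
            public static void main(String[] args) throws Exception {
                // The client generates DH parameters and a key pair.
                KeyPairGenerator clientGen = KeyPairGenerator.getInstance("DH");
                clientGen.initialize(2048);
                KeyPair client = clientGen.generateKeyPair();

                // The server builds its pair from the same parameters
                // (in a real exchange the parameters travel with the public key).
                KeyPairGenerator serverGen = KeyPairGenerator.getInstance("DH");
                serverGen.initialize(((DHPublicKey) client.getPublic()).getParams());
                KeyPair server = serverGen.generateKeyPair();

                // Each side combines its own private key with the other's public
                // key; both derive the same secret without ever transmitting it.
                KeyAgreement agree = KeyAgreement.getInstance("DH");
                agree.init(client.getPrivate());
                agree.doPhase(server.getPublic(), true);
                byte[] shared = agree.generateSecret();

                // Hash the shared secret down to a 256-bit AES key.
                byte[] keyBytes = MessageDigest.getInstance("SHA-256").digest(shared);
                SecretKeySpec aesKey = new SecretKeySpec(keyBytes, "AES");
                System.out.println("Derived AES key: " + aesKey.getEncoded().length * 8 + " bits");
            }
        }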

    Read the article

  • Using @Context, @Provider and ContextResolver in JAX-RS

    - by Tamás
    I'm just getting acquainted with implementing REST web services in Java using JAX-RS and I ran into the following problem. One of my resource classes requires access to a storage backend, which is abstracted away behind a StorageEngine interface. I would like to inject the current StorageEngine instance into the resource class serving the REST requests, and I thought a nice way of doing this would be by using the @Context annotation and an appropriate ContextResolver class. This is what I have so far.

    In MyResource.java:

        class MyResource {
            @Context
            StorageEngine storage;

            [...]
        }

    In StorageEngineProvider.java:

        @Provider
        class StorageEngineProvider implements ContextResolver<StorageEngine> {
            private StorageEngine storage = new InMemoryStorageEngine();

            public StorageEngine getContext(Class<?> type) {
                if (type.equals(StorageEngine.class))
                    return storage;
                return null;
            }
        }

    I'm using com.sun.jersey.api.core.PackagesResourceConfig to discover the providers and the resource classes automatically, and according to the logs, it picks up the StorageEngineProvider class nicely (timestamps and unnecessary stuff left out intentionally):

        INFO: Root resource classes found: class MyResource
        INFO: Provider classes found: class StorageEngineProvider

    However, the value of storage in my resource class is always null - neither the constructor of StorageEngineProvider nor its getContext method is called by Jersey, ever. What am I doing wrong here?

    Read the article

  • FullCalendar and events: function

    - by Ernest
    I am trying to display my events from a MySQL database. I'm using the events function. My XML file being returned is pretty basic. I've looked at all of the FullCalendar questions, and most of them talk about JSON and point to the documentation for JSON. I can't use JSON; I have to go XML. Can you tell me where I'm off? Here is a sample of what my XML looks like:

        Grow Your Business on the Web
        2010-06-05T9:30
        2010-06-05T12:30
        O

    The whole file is prefaced with a tag and closed with a tag. My jQuery is as follows:

        $(document).ready(function() {
            $('#calendar').fullCalendar({
                height: 550,
                theme: true,
                header: {
                    left: 'prev,next today',
                    center: 'title',
                    right: 'month,agendaWeek,agendaDay'
                },
                editable: true,
                events: function(start, end, callback) {
                    $.ajax({
                        url: 'ncludeFiles/sbdp-cal-xml.php',
                        dataType: 'xml',
                        data: {
                            // our hypothetical feed requires UNIX timestamps
                            start: Math.round(start.getTime() / 1000),
                            end: Math.round(end.getTime() / 1000)
                        },
                        success: function(doc) {
                            var events = [];
                            $(doc).find('event').each(function() {
                                events.push({
                                    title: $(this).attr('title'),
                                    start: $(this).attr('start'),
                                    end: $(this).attr('end'),
                                    className: $(this).attr('className'),
                                    url: $(this).attr('url')
                                });
                            });
                            callback(events);
                        }
                    });
                }
            });
        });

    I'd appreciate any help you could give me. Thanks!

    Read the article

  • jqgrid not updating data on reload

    - by meepmeep
    I have a jqgrid with data loading from an xml stream (handled by django 1.1.1):

        jQuery(document).ready(function(){
            jQuery("#list").jqGrid({
                url:'/downtime/list_xml/',
                datatype: 'xml',
                mtype: 'GET',
                postData:{site:1,date_start:document.getElementById('datepicker_start').value,date_end:document.getElementById('datepicker_end').value},
                colNames:[...],
                colModel :[...],
                pager: '#pager',
                rowNum: 25,
                rowList:[10,25,50],
                viewrecords: true,
                height: 500,
                caption: 'Click on column headers to reorder'
            });
            $("#grid_reload").click(function(){
                $("#list").trigger("reloadGrid");
            });
            $("#tabs").tabs();
            $("#datepicker_start").datepicker({dateFormat: 'yy-mm-dd'});
            $("#datepicker_end").datepicker({dateFormat: 'yy-mm-dd'});
            ...

    And the html elements:

        <th>Start Date:</th>
        <td><input id="datepicker_start" type="text" value="2009-12-01"></input></td>
        <th>End Date:</th>
        <td><input id="datepicker_end" type="text" value="2009-12-03"></input></td>
        <td><input id="grid_reload" type="submit" value="load" /></td>

    When I click the grid_reload button, the grid reloads, but when it has done so it shows exactly the same data as before, even though the xml is tested to return different data for different timestamps. I have checked using alert(document.getElementById('datepicker_start').value) that the values in the date inputs are passed correctly when the reload event is triggered. Any ideas why the data doesn't update? A caching or browser issue perhaps?

    Read the article

  • Date query with Hibernate on Timestamp Column in PostgreSQL

    - by Shashikant Kore
    A table has a timestamp column. A sample value in that could be 2010-03-30 13:42:42. With Hibernate, I am doing a range query Restrictions.between("column-name", fromDate, toDate). The Hibernate mapping for this column is as follows:

        <property name="orderTimestamp" column="order_timestamp" type="java.util.Date" />

    Let's say I want to find out all the records that have the date 30th March 2010 and 31st March 2010. A range query on this field is done as follows:

        Date fromDate = new SimpleDateFormat("yyyy-MM-dd").parse("2010-03-30");
        Date toDate = new SimpleDateFormat("yyyy-MM-dd").parse("2010-03-31");
        Expression.between("orderTimestamp", fromDate, toDate);

    This doesn't work. The query is converted to the respective timestamps "2010-03-30 00:00:00" and "2010-03-31 00:00:00". So all the records for 31st March 2010 are ignored. A simple solution to this problem could be to have the end date as "2010-03-31 23:59:59." But I would like to know if there is a way to match only the date part of the timestamp column. Also, is Expression.between() inclusive of both limits? The documentation doesn't throw any light on this.
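
    Two notes that may help here. SQL's BETWEEN, which Restrictions.between() translates to, is inclusive of both endpoints, so the truncation to midnight is the real problem. A common workaround is a half-open range [start of first day, start of the day after the last day) built with ge()/lt(). A sketch, assuming a hypothetical mapped Order entity with the orderTimestamp property from the question:

        import java.text.SimpleDateFormat;
        import java.util.Calendar;
        import java.util.Date;
        import org.hibernate.Criteria;
        import org.hibernate.Session;
        import org.hibernate.criterion.Restrictions;

        public class DayRangeQuery {
            // Matches every timestamp on 2010-03-30 and 2010-03-31, with no
            // 23:59:59 hack and no dependence on the column's time precision.
            static Criteria ordersForBothDays(Session session) throws Exception {
                SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd");
                Date from = fmt.parse("2010-03-30");
                Calendar cal = Calendar.getInstance();
                cal.setTime(fmt.parse("2010-03-31"));
                cal.add(Calendar.DAY_OF_MONTH, 1); // first instant after the last day
                Date upperBound = cal.getTime();
                return session.createCriteria(Order.class)
                        .add(Restrictions.ge("orderTimestamp", from))
                        .add(Restrictions.lt("orderTimestamp", upperBound));
            }
        }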

    Read the article

  • Replication: SQL Server 2008 Publisher with SQL Server Express 2005 Subscriber

    - by Jeremy
    Here is the setup: SQL Server 2008 Enterprise Server with a Merge Publication. SQL Server 2005 Express with pull subscription. There is no web or ftp setup. This is direct merge replication. Using the RMO objects from C#, I get a "class cannot be found" COM error when accessing the MergePullSubscription.SynchronizationAgent property. I've tried with both the 2008 RMO dlls (version 10) and the 2005 RMO dlls (version 9). When trying to use replmerge.exe, I get the following:

        2010-04-10 04:12:05.263 Microsoft SQL Server Merge Agent 9.00.1399.06
        2010-04-10 04:12:05.294 Copyright (c) 2000 Microsoft Corporation
        2010-04-10 04:12:05.294
        2010-04-10 04:12:05.294 The timestamps prepended to the output lines are expressed in terms of UTC time.
        2010-04-10 04:12:05.294 User-specified agent parameter values: -Publisher SUN -PublisherDB PRIMROSE -PublisherSecurityMode 1 -Publication PRIMROSE -Distributor SUN -DistributorSecurityMode 1 -Subscriber PVILLE\SQLEXPRESS -SubscriberSecurityMode 1 -SubscriberDB PRIMROSE -SubscriptionType 1 -DistributorLogin sa -DistributorPassword ********** -DistributorSecurityMode 0 -PublisherLogin sa -PublisherPassword ********** -PublisherSecurityMode 0 -SubscriberLogin sa -SubscriberPassword ********** -SubscriberSecurityMode 0
        2010-04-10 04:12:05.325 Connecting to Subscriber 'PVILLE\SQLEXPRESS'
        2010-04-10 04:12:05.481 Connecting to Distributor 'SUN'
        2010-04-10 04:12:05.513 The version of SQL Server running at the Distributor (10.0.2531.????) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399.????).
        2010-04-10 04:12:05.513 Category:NULL Source: Merge Process Number: -2147200979 Message: The version of SQL Server running at the Distributor (10.0.2531.????) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399.????).

    Any ideas?

    Read the article

  • Parsing a file in C

    - by sfactor
    I need to parse through a file and do some processing on it. The file is a text file and the data is variable-length data of the form "PP1004181350D001002003..........". There are timestamps where there is PP, so 1004181350 is 2010-04-18 13:50. The ones where there is a D are the data points: three separate values, each three digits long, so D001002003 has the three coordinates 001, 002 and 003. Now I need to parse this data from the file, storing each timestamp in an array and the corresponding data in arrays that have as many rows as the number of data points, with one array for each coordinate. The end arrays might be like:

        TimeStamp[1] = "135000", low[1] = "001", medium[1] = "002", high[1] = "003"
        TimeStamp[2] = "135015", low[2] = "010", medium[2] = "012", high[2] = "013"
        TimeStamp[3] = "135030", low[3] = "051", medium[3] = "052", high[3] = "043"
        ....

    The question is how do I go about doing this in C? How do I go through this string looking for these patterns?

    Note: The seconds value in the timestamp is added on our own, as it is known that each data point comes 15 seconds after the previous one.
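
    Since the records are fixed-width once the leading marker is seen, a single pass with an index works. Here is the scanning logic sketched in Java for compactness (my own illustration, not from the question); the same fixed-width indexing ports directly to C with a char pointer and strncpy/sscanf:

        public class RecordScan {
            public static void main(String[] args) {
                String data = "PP1004181350D001002003D010012013PP1004181405D051052043";
                String ts = null;
                int i = 0;
                while (i < data.length()) {
                    if (data.startsWith("PP", i)) {
                        ts = data.substring(i + 2, i + 12);  // yymmddHHMM timestamp
                        i += 12;
                    } else if (data.charAt(i) == 'D' && i + 10 <= data.length()) {
                        String low = data.substring(i + 1, i + 4);
                        String medium = data.substring(i + 4, i + 7);
                        String high = data.substring(i + 7, i + 10);
                        System.out.println(ts + " " + low + " " + medium + " " + high);
                        i += 10;
                    } else {
                        i++;  // skip anything unrecognized (e.g. padding dots)
                    }
                }
            }
        }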

    Read the article

  • java regex: capture multiline sequence between tokens

    - by Guillaume
    I'm struggling with a regex for splitting log files into log sequences in order to match patterns inside these sequences. The log format is:

        timestamp fieldA fieldB fieldn log message1
        timestamp fieldA fieldB fieldn log message2
        log message2bis
        timestamp fieldA fieldB fieldn log message3

    The timestamp regex is known. I want to extract every log sequence (potentially multiline) between timestamps, and I want to keep the timestamp. At the same time, I want to keep the exact count of lines. What I need is how to decorate the timestamp pattern to make it split my log file into log sequences. I cannot split the whole file as a String, since the file content is provided in a CharBuffer. Here is a sample method that will be using this log sequence matcher:

        private void matches(File f, CharBuffer cb) {
            Matcher sequenceBreak = sequencePattern.matcher(cb); // sequence matcher
            int lines = 1;
            int sequences = 0;
            while (sequenceBreak.find()) {
                sequences++;
                String sequence = sequenceBreak.group();
                if (filter.accept(sequence)) {
                    System.out.println(f + ":" + lines + ":" + sequence);
                }
                // count lines
                Matcher lineBreak = LINE_PATTERN.matcher(sequence);
                while (lineBreak.find()) {
                    lines++;
                }
                if (sequenceBreak.end() == cb.limit()) {
                    break;
                }
            }
        }
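
    One way to build such a pattern (a sketch under my own assumptions, with a placeholder timestamp regex) is to match from one timestamp up to, but not including, the next one: a reluctant .*? in DOTALL mode lets the match span line breaks, and a lookahead stops it at the next timestamp without consuming it:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class SequenceSplit {
            public static void main(String[] args) {
                // Placeholder; substitute the known timestamp regex here.
                String tsRe = "\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}";
                // Each match: a timestamp plus everything up to the next timestamp
                // (or end of input), so multiline sequences stay in one group.
                Pattern seq = Pattern.compile(tsRe + ".*?(?=" + tsRe + "|\\z)", Pattern.DOTALL);

                String log = "2010-05-04 10:00:01 A B C log message1\n"
                           + "2010-05-04 10:00:02 A B C log message2\nlog message2bis\n"
                           + "2010-05-04 10:00:03 A B C log message3\n";
                Matcher m = seq.matcher(log);
                while (m.find()) {
                    System.out.print("sequence: " + m.group());
                }
            }
        }

    Because the lookahead consumes nothing, this also works when the matcher runs directly over a CharBuffer, as in the method above.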

    Read the article

  • MySQL & PHP - select/option lists and showing data to users that still allows me to generate queries

    - by Andrew Heath
    Sorry for the unclear title; an example will clear things up:

        TABLE: Scenario_victories
        ID  scenid   timestamp            userid  side  playdate
        1   RtBr001  2010-03-15 17:13:36  7       1     2010-03-10
        2   RtBr001  2010-03-15 17:13:36  7       1     2010-03-10
        3   RtBr001  2010-03-15 17:13:51  7       2     2010-03-10

    ID and timestamp are auto-insertions by the database when the other 4 fields are added. The first thing to note is that a user can record multiple playings of the same scenario (scenid) on the same date (playdate), possibly with the same outcome (side = winner). Hence the need for the unique ID and timestamps for good measure.

    Now, on their user page, I'm displaying their recorded play history in a <select><option>... list form with 2 buttons at the end - Delete Record and Go to Scenario.

    My script takes the scenid and, after hitting a few other tables, returns with something more user-friendly like:

          (playdate)  (from scenid)            (from side)
        #########################################################
        # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won #
        # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won #
        # 2010-03-10 Road to Berlin #1 -- Soviet Union won      #
        #########################################################
        [Delete Record] [Go To Scenario]

    in HTML:

        <select name="history" size=3>
            <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option>
            <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option>
            <option>2010-03-10 Road to Berlin #1 -- Soviet Union won</option>
        </select>

    Now, if you were to highlight the first record and click Go to Scenario, there is enough information there for me to parse it and produce the exact scenario you want to see. However, if you were to select Delete Record, there is not - I have the playdate, and I can parse the scenid and side from what's listed, but in this example all three records would have the same result. I appear to have painted myself into a corner. Does anyone have a suggestion as to how I can get some unique identifying data (ID and/or timestamp) to ride along on this form without showing it to the user? PHP-only please, I must be NoScript compliant!

    Read the article

  • Building simple Reddit scraper

    - by Bazant Fundator
    Let's say that I would like to make a collection of images from reddit for my own amusement. I have run the code in my development environment and it hasn't gone past the first page of posts (anything beyond requires the after string from the JSON). Additionally, when I turn on the validation, the whole loop breaks if an item doesn't pass it, not just the current iteration. I would be glad if you helped me understand the mistakes I made.

        class Link
          include Mongoid::Document
          include Mongoid::Timestamps
          field :author, type: String
          field :url, type: String
          validates_uniqueness_of :url, # no duplicates
          validates :url, uniqueness :true
        end

        def fetch (count, after)
          count_s = count.to_s # convert count to string
          link = "http://reddit.com/r/aww/.json?count="+count_s+"&after="+after # so it can be used there
          res = HTTParty.get(link) # GET req. to the reddit server
          json = JSON.parse(res.body) # Parse the response
          if json['kind'] == "Listing" then # check if the retrieved item is a Listing
            for i in 1...(count) do # for each list item
              datum = json['data']['children'][i]['data'] # i-th element properties
              if datum['domain'].in?(["imgur.com", "i.imgur.com"]) then # fetch only imgur links
                Link.create!(author: datum['author'], url: datum['url']) # save to db
              end
            end
            count += 25
            fetch(count, json['data']['after']) # if it retrieved the right kind of object, move on to the next page
          end
        end

        fetch(25," ") # run it

    Read the article

  • MongoMapper won't let me create an object

    - by Jade
    I'm just learning MongoDB and MongoMapper. This is on Rails 3. I created a blog in app/models/blog.rb:

        class Blog
          include MongoMapper::Document

          key :title, String, :required => true
          key :body, Text

          timestamps!
        end

    I go into the Rails console:

        rails c
        Loading development environment (Rails 3.0.0.beta)
        ruby-1.9.1-p378 > b = Blog.new
        NoMethodError: undefined method `from_mongo' for Text:Module
        from /Users/jade/.rvm/gems/ruby-1.9.1-p378/gems/mongo_mapper-0.7.2/lib/mongo_mapper/plugins/keys.rb:323:in `get'
        from /Users/jade/.rvm/gems/ruby-1.9.1-p378/gems/mongo_mapper-0.7.2/lib/mongo_mapper/plugins/keys.rb:269:in `read_key'
        from /Users/jade/.rvm/gems/ruby-1.9.1-p378/gems/mongo_mapper-0.7.2/lib/mongo_mapper/plugins/keys.rb:224:in `[]'
        from /Users/jade/.rvm/gems/ruby-1.9.1-p378/gems/mongo_mapper-0.7.2/lib/mongo_mapper/plugins/inspect.rb:7:in `block in inspect'
        from /Users/jade/.rvm/gems/ruby-1.9.1-p378/gems/mongo_mapper-0.7.2/lib/mongo_mapper/plugins/inspect.rb:6:in `collect'
        from /Users/jade/.rvm/gems/ruby-1.9.1-p378/gems/mongo_mapper-0.7.2/lib/mongo_mapper/plugins/inspect.rb:6:in `inspect'
        from /Users/jade/.rvm/gems/ruby-1.9.1-p378/gems/railties-3.0.0.beta/lib/rails/commands/console.rb:47:in `start'
        from /Users/jade/.rvm/gems/ruby-1.9.1-p378/gems/railties-3.0.0.beta/lib/rails/commands/console.rb:8:in `start'
        from /Users/jade/.rvm/gems/ruby-1.9.1-p378/gems/railties-3.0.0.beta/lib/rails/commands.rb:34:in `<top (required)>'
        from /Users/jade/code/farmerjade/script/rails:10:in `require'
        from /Users/jade/code/farmerjade/script/rails:10:in `<main>'

    Am I overlooking something really dumb, or is this something in my setup? I'm using the mongo_mapper version you get by adding it to your Gemfile, so I'm wondering if it might be that. I'd appreciate any suggestions!

    Read the article

  • log4j vs. System.out.println - logger advantages?

    - by wishi_
    Hi! I'm newly using log4j in a project. A fellow programmer told me that using System.out.println is considered bad style and that log4j is something like the standard for logging matters nowadays. We do lots of JUnit testing, and System.out stuff turns out to be harder to test. Therefore I began utilizing log4j for a Console controller class that just handles command-line parameters:

        // some logger config
        org.apache.log4j.BasicConfigurator.configure();
        Logger logger = LoggerFactory.getLogger(Console.class);
        Category cat = Category.getRoot();

    Seems to work:

        logger.debug("String");

    Produces:

        1 [main] DEBUG project.prototype.controller.Console - String

    I have two questions regarding this:

    From my basic understanding, using this logger should provide me comfortable options to write a logfile with timestamps - instead of spamming the console - if debug mode is enabled at the logger?

    Why is System.out.println harder to test? I searched stackoverflow and found a testing recipe. So I wonder what kind of advantage I really get by using log4j.
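
    On the first question: yes, that is exactly what appenders and layouts are for. As a minimal sketch (log4j 1.x programmatic configuration; the file name and pattern here are my own choices):

        import org.apache.log4j.FileAppender;
        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;
        import org.apache.log4j.PatternLayout;

        public class LogSetup {
            public static void main(String[] args) throws Exception {
                // %d prints an ISO-8601 timestamp on every line - the part println can't do.
                PatternLayout layout = new PatternLayout("%d{ISO8601} [%t] %-5p %c - %m%n");
                Logger root = Logger.getRootLogger();
                root.addAppender(new FileAppender(layout, "console.log", true)); // append to file
                root.setLevel(Level.DEBUG); // raise to INFO/WARN in production to mute debug output

                Logger logger = Logger.getLogger(LogSetup.class);
                logger.debug("String");
                // console.log now contains something like:
                // 2010-05-04 18:19:58,263 [main] DEBUG LogSetup - String
            }
        }

    The same setup is usually done declaratively in a log4j.properties file instead, which lets you change destinations and levels without recompiling.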

    Read the article

  • Implementing the tree with reference to the root for each leaf

    - by AntonAL
    Hi, I'm implementing a products catalog that looks like this:

        group 1
          subgroup 1
          subgroup 2
            item 1
            item 2
            ...
            item n
          ...
          subgroup n
        group 2
          subgroup 1
          ...
          subgroup n
        group 3
        ...
        group n

    The models:

        class CatalogGroup < ActiveRecord::Base
          has_many :catalog_items
          has_many :catalog_items_all, :class_name => "CatalogItem", :foreign_key => "catalog_root_group_id"
        end

        class CatalogItem < ActiveRecord::Base
          belongs_to :catalog_group
          belongs_to :catalog_root_group, :class_name => "CatalogGroup"
        end

    Migrations:

        class CreateCatalogItems < ActiveRecord::Migration
          def self.up
            create_table :catalog_items do |t|
              t.integer :catalog_group_id
              t.integer :catalog_root_group_id
              t.string :code
              t.timestamps
            end
          end

    For convenience, I referenced each CatalogItem to its top-most CatalogGroup and named this association "catalog_root_group". This will give us a simple implementation of search requests like "show me all items in group 1": we only have to deal with CatalogItem.catalog_root_group. The problem is - this association doesn't work. I always get "catalog_root_group" equal to nil. Also, I have tried to avoid using the reference to the root group ("catalog_root_group"), but I cannot construct an appropriate search request in Ruby. Do you know how to do it?

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in php projects. There are a number of Rails-style migration projects available for php. http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches.

    General problem with this kind of system: when development branch A is checked out and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. If branch A has newer migration files, you would need to migrate downwards to the nearest shared patch. If branch A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch. This must be done from A since the actual applied patches are not available in B. Then check out branch B, and migrate to the newest B patch. The process reverses again when going from B to A.

    Proposed system: when migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use, though I'd probably only need the down() method. When changing branches, compare patches that have been run to patches that are available in the destination branch. Determine the nearest shared patch (or oldest difference, maybe) between the db table of run patches and the patches in the destination branch, by ID or hash. Could also look for new or missing patches that are buried under a number of shared patches between the two branches. Automatically migrate down to the nearest shared patch, using the down() methods stored in the db table, and then migrate up to the branch's latest patch. A sketch of this procedure follows below.

    My question is: Is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
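
    To make the proposed branch switch concrete, here is a toy model of the algorithm (all types and names are my own illustration, shown in Java rather than PHP for compactness): roll back applied patches using their stored down() bodies until the top of the applied stack is shared with the target branch, then apply whatever the target branch has that the database has not recorded:

        import java.util.Deque;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;
        import java.util.TreeMap;

        public class BranchSwitch {
            // A patch: timestamp-style id plus up/down SQL; down is what the
            // proposal serializes into the database at migrate-time.
            record Patch(String id, String up, String down) {}

            static void switchBranch(Deque<Patch> appliedNewestFirst, Map<String, Patch> target) {
                // 1. Migrate down until the newest applied patch is shared.
                while (!appliedNewestFirst.isEmpty()
                        && !target.containsKey(appliedNewestFirst.peek().id())) {
                    Patch p = appliedNewestFirst.pop();
                    System.out.println("down: " + p.down()); // run the stored down() SQL
                }
                // 2. Migrate up through target patches not yet applied, oldest first;
                //    this also catches patches buried under shared ones.
                Set<String> applied = new HashSet<>();
                for (Patch p : appliedNewestFirst) applied.add(p.id());
                for (Patch p : new TreeMap<>(target).values())
                    if (!applied.contains(p.id()))
                        System.out.println("up: " + p.up()); // run the up() SQL
            }
        }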

    Read the article

  • Some regular expression help?

    - by Rohan
    Hey there. I'm trying to create a JavaScript regex split, but I'm totally stuck. Here's my input:

        9:30 pm The user did action A.
        10:30 pm Welcome, user John Doe.
        ***This is a comment
        11:30 am This is some more input.

    I want the output array after the split() to be (I've removed the \n for readability):

        ["9:30 pm The user did action A.",
         "10:30 pm Welcome, user John Doe.",
         "***This is a comment",
         "11:30 am This is some more input." ];

    My current regular expression is:

        var split = text.split(/\s*(?=(\b\d+:\d+|\*\*\*))/);

    This works, but there is one problem: the timestamps get repeated in extra elements. So I get:

        ["9:30",
         "9:30 pm The user did action A.",
         "10:30",
         "10:30 pm Welcome, user John Doe.",
         "***This is a comment",
         "11:30",
         "11:30 am This is some more input." ];

    I can't split on the newlines \n because they aren't consistent, and sometimes there may be no newlines at all. Could you help me out with a Regex for this? Thanks so much!!

    Read the article

  • UUIDs in Rails3

    - by Rob Wilkerson
    I'm trying to set up my first Rails3 project and, early on, I'm running into problems with either uuidtools, my UUIDHelper, or perhaps callbacks. I'm obviously trying to use UUIDs and (I think) I've set things up as described in Ariejan de Vroom's article. I've tried using the UUID as a primary key and also as simply a supplemental field, but it seems like the UUIDHelper is never being called. I've read many mentions of callbacks and/or helpers changing in Rails3, but I can't find any specifics that would tell me how to adjust. Here's my setup as it stands at this moment (there have been a few iterations):

        # migration
        class CreateImages < ActiveRecord::Migration
          def self.up
            create_table :images do |t|
              t.string :uuid, :limit => 36
              t.string :title
              t.text :description

              t.timestamps
            end
          end
          ...
        end

        # lib/uuid_helper.rb
        require 'rubygems'
        require 'uuidtools'

        module UUIDHelper
          def before_create()
            self.uuid = UUID.timestamp_create.to_s
          end
        end

        # models/image.rb
        class Image < ActiveRecord::Base
          include UUIDHelper
          ...
        end

    Any insight would be much appreciated. Thanks.

    Read the article

  • Saving a record in Authlogic table

    - by denniss
    I am using Authlogic to do my authentication. The current model that serves as the authentication model is the User model. I want to add a "belongs to" relationship to User, which means that I need a foreign key in the users table. Say the foreign key is called car_id in the User model. However, for some reason, when I do

        u = User.find(1)
        u.car_id = 1
        u.save!

    I get

        ActiveRecord::RecordInvalid: Validation failed: Password can't be blank

    My guess is that this has something to do with Authlogic. I do not have a validation on password in the User model. This is the migration for the users table:

        def self.up
          create_table :users do |t|
            t.string :email
            t.string :first_name
            t.string :last_name
            t.string :crypted_password
            t.string :password_salt
            t.string :persistence_token
            t.string :single_access_token
            t.string :perishable_token
            t.integer :login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns
            t.integer :failed_login_count, :null => false, :default => 0 # optional, see Authlogic::Session::MagicColumns
            t.datetime :last_request_at # optional, see Authlogic::Session::MagicColumns
            t.datetime :current_login_at # optional, see Authlogic::Session::MagicColumns
            t.datetime :last_login_at # optional, see Authlogic::Session::MagicColumns
            t.string :current_login_ip # optional, see Authlogic::Session::MagicColumns
            t.string :last_login_ip # optional, see Authlogic::Session::MagicColumns

            t.timestamps
          end
        end

    And later I added the car_id column to it:

        def self.up
          add_column :users, :user_id, :integer
        end

    Is there any way for me to turn off this validation?

    Read the article

  • MySQL BinLog Statement Retrieval

    - by Jonathon
    I have seven 1G MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g., where the statement starts with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and using --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to then parse with any other program. Is there a way to just retrieve certain SQL statements from the log? I don't need any of the ancillary information (timestamps, autoincrement #s, etc.). I just need a list of SQL statements that match a certain string. Ideally, I would like to have a text file that just lists those SQL statements, such as:

        INSERT INTO table SET field1='a';
        INSERT INTO table SET field1='tommy';
        INSERT INTO table SET field1='2';

    I could get that by running mysqlbinlog to a text file and then parsing the results based upon a string, but the text file is way too big. It just times out any script I run and even makes it impossible to open in a text editor. Thanks for your help in advance.
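
    One approach that avoids the giant intermediate file is to stream mysqlbinlog's output through a filter instead of writing it to disk first. A sketch (the table and field names are taken from the question; a pipeline such as mysqlbinlog binlog.000001 | java BinlogFilter is assumed, as is each statement fitting on one line):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class BinlogFilter {
            public static void main(String[] args) throws Exception {
                // Read the decoded binlog line by line from stdin, keeping
                // memory use constant no matter how big the log is.
                BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                String line;
                while ((line = in.readLine()) != null) {
                    if (line.startsWith("INSERT INTO table SET field1=")) {
                        System.out.println(line);
                    }
                }
            }
        }

    The same one-pass idea works with grep on the pipeline, which may be all that is needed here.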

    Read the article

  • Discover periodic patterns in a large data-set

    - by Miner
    I have a large sequence of tuples on disk in the form (t1, k1) (t2, k2) ... (tn, kn), where ti is a monotonically increasing timestamp and ki is a key (assume a fixed-length string if needed). Neither ti nor ki is guaranteed to be unique. However, the number of unique tis and kis is huge (millions). n itself is very large (100 million+), and the size of k (approx 500 bytes) makes it impossible to store everything in memory. I would like to find out periodic occurrences of keys in this sequence. For example, if I have the sequence

        (1, a) (2, b) (3, c) (4, b) (5, a) (6, b) (7, d) (8, b) (9, a) (10, b)

    the algorithm should emit (a, 4) and (b, 2). That is, a occurs with a period of 4 and b occurs with a period of 2. If I build a hash of all keys and store, for each key, the average of the differences between consecutive timestamps and a standard deviation of the same, I might be able to make a single pass and report only the ones that have an acceptable standard deviation (ideally, 0). However, it requires one bucket per unique key, whereas in practice I might have very few really periodic patterns. Any better ways?
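
    For what the single-pass idea looks like, here is a minimal sketch (my own illustration) that keeps, per key, the last timestamp plus a running mean and variance of the gaps (Welford's online algorithm), then reports keys whose gap variance is zero:

        import java.util.HashMap;
        import java.util.Map;

        public class PeriodFinder {
            static class Stats { long last; long n; double mean; double m2; }

            public static void main(String[] args) {
                long[][] seq = {{1,'a'},{2,'b'},{3,'c'},{4,'b'},{5,'a'},
                                {6,'b'},{7,'d'},{8,'b'},{9,'a'},{10,'b'}};
                Map<Long, Stats> byKey = new HashMap<>();
                for (long[] e : seq) {
                    long t = e[0], k = e[1];
                    Stats s = byKey.get(k);
                    if (s == null) { s = new Stats(); s.last = t; byKey.put(k, s); continue; }
                    double gap = t - s.last;   // time since this key last appeared
                    s.last = t;
                    s.n++;                     // Welford update of mean/variance
                    double delta = gap - s.mean;
                    s.mean += delta / s.n;
                    s.m2 += delta * (gap - s.mean);
                }
                for (Map.Entry<Long, Stats> e : byKey.entrySet()) {
                    Stats s = e.getValue();
                    if (s.n >= 2 && s.m2 == 0.0)   // at least two gaps, zero variance
                        System.out.println("(" + (char) e.getKey().longValue()
                                + ", " + (long) s.mean + ")");
                }
            }
        }

    On the sample sequence this prints (a, 4) and (b, 2). The memory cost is still one small Stats per distinct key; trading accuracy for space (e.g. sampling or sketching the key set) would be the next step if that is too much.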

    Read the article

  • Perl Parallel::ForkManager wait_all_children() takes excessively long time

    - by zhang18
    I have a script that uses Parallel::ForkManager. However, the wait_all_children() call takes an incredibly long time even after all child processes are completed. The way I know is by printing out some timestamps (see below). Does anyone have any idea what might be causing this (I have 16 CPU cores on my machine)?

        my $pm = Parallel::ForkManager->new(16);

        for my $i (1..16) {
            $pm->start($i) and next;
            ... do something within the child process ...
            print (scalar localtime), " Process $i completed.\n";
            $pm->finish();
        }

        print (scalar localtime), " Waiting for some child process to finish.\n";
        $pm->wait_all_children();
        print (scalar localtime), " All processes finished.\n";

    Clearly, I'll get the "Waiting for some child process to finish" message first, with a timestamp of, say, 7:08:35. Then I'll get a list of "Process $i completed" messages, with the last one at 7:10:30. However, I do not receive the message "All processes finished" until 7:16:33(!). Why is there a 6-minute delay between 7:10:30 and 7:16:33? Thx!

    Read the article
