Search Results

Search found 38931 results on 1558 pages for 'database testing'.


  • Using Transaction Logging to Recover Post-Archived Essbase data

    - by Keith Rosenthal
    Data recovery is typically performed by restoring data from an archive. Data added or removed since the last archive took place can also be recovered by enabling transaction logging in Essbase. Transaction logging works by writing transactions to a log store. The information in the log store can then be recovered by replaying the log store entries in sequence since the last archive took place. The following information is recorded within a transaction log entry:

        Sequence ID
        Username
        Start Time
        End Time
        Request Type

    A request type can be one of the following categories:

        Calculations, including the default calculation as well as both server and client side calculations
        Data loads, including data imports as well as data loaded using a load rule
        Data clears as well as outline resets
        Locking and sending data from SmartView and the Spreadsheet Add-In. Changes from Planning web forms are also tracked, since a lock and send operation occurs during this process.

    You can use the Display Transactions command in the EAS console or the query database MAXL command to view the transaction log entries.

    Enabling Transaction Logging

    Transaction logging can be enabled at the Essbase server, application or database level by adding the TRANSACTIONLOGLOCATION setting to essbase.cfg. The TRANSACTIONLOGLOCATION syntax is:

        TRANSACTIONLOGLOCATION [appname [dbname]] LOGLOCATION NATIVE ENABLE | DISABLE

    Note that you can have multiple TRANSACTIONLOGLOCATION entries in the essbase.cfg file. For example:

        TRANSACTIONLOGLOCATION Hyperion/trlog NATIVE ENABLE
        TRANSACTIONLOGLOCATION Sample Hyperion/trlog NATIVE DISABLE

    The first statement enables transaction logging for all Essbase applications, and the second statement disables transaction logging for the Sample application. As a result, transaction logging will be enabled for all applications except the Sample application. A location on a physical disk other than the disk where ARBORPATH or the database files reside is recommended, to optimize overall Essbase performance.

    Configuring Transaction Log Replay

    Although transaction log entries are stored based on the LOGLOCATION parameter of the TRANSACTIONLOGLOCATION setting, copies of data load and rules files are stored in the ARBORPATH/app/appname/dbname/Replay directory to optimize the performance of replaying logged transactions. The default is to archive client data loads, but this configuration setting can be used to archive server data loads (including SQL server data loads) or both client and server data loads. To change the type of data to be archived, add the TRANSACTIONLOGDATALOADARCHIVE configuration setting to the essbase.cfg file. Note that you can have multiple TRANSACTIONLOGDATALOADARCHIVE entries in the essbase.cfg file to adjust settings for individual applications and databases.

    Replaying the Transaction Log and Transaction Log Security Considerations

    To replay the transactions, use either the Replay Transactions command in the EAS console or the alter database MAXL command with the replay transactions grammar. Transactions can be replayed either after a specified log time or using a range of transaction sequence IDs. The default when replaying transactions is to use the security settings of the user who originally performed the transaction. However, if that user no longer exists or that user's username was changed, the replay operation will fail. Instead of using the default security setting, add the REPLAYSECURITYOPTION setting to essbase.cfg to use the security settings of the administrator who performs the replay operation: REPLAYSECURITYOPTION 2 will explicitly use the security settings of the administrator performing the replay operation, while REPLAYSECURITYOPTION 3 will use the administrator's security settings only if the original user's security settings cannot be used.

    Removing Transaction Logs and Archived Replay Data Load and Rules Files

    Transaction logs and archived replay data load and rules files are not removed automatically; they can only be removed manually. Since these files can consume a considerable amount of space, they should be removed on a periodic basis. The transaction logs should be removed one database at a time, instead of all databases simultaneously. The data load and rules files associated with the replayed transactions should be removed in chronological order, from earliest to latest. In addition, do not remove any data load and rules files with a timestamp later than the timestamp of the most recent archive file.

    Partitioned Database Considerations

    For partitioned databases, partition commands such as synchronization commands cannot be replayed. When recovering data, the partition changes must be replayed manually and logged transactions must be replayed in the correct chronological order. If the partitioned database includes any @XREF commands in the calc script, the logged transactions must be selectively replayed in the correct chronological order between the source and target databases.

    References

    For additional information, please see the Oracle EPM System Backup and Recovery Guide.  For EPM 11.1.2.2, the link is http://docs.oracle.com/cd/E17236_01/epm.1112/epm_backup_recovery_1112200.pdf
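    To make the configuration settings and replay grammar above concrete, here is a minimal sketch (an illustration, not from the original post: the application/database names, timestamp and sequence range are placeholders, and the option values and MaxL grammar should be verified against the Essbase Technical Reference for your release). The essbase.cfg entry archives both server and client data loads for the Sample application, and the MaxL statements show the two replay styles mentioned above:

        TRANSACTIONLOGDATALOADARCHIVE Sample Basic SERVER_CLIENT

        alter database Sample.Basic replay transactions after '11_20_2012:12:20:00';
        alter database Sample.Basic replay transactions using sequence_id_range 1 to 30;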

  • Two free SQL Server events I'll be presenting at in UK. Come and say hi!

    - by Mladen Prajdic
    SQLBits: April 7th - April 9th 2011 in Brighton, UK. A free community event on Saturday (April 9th), with a paid conference day on Friday (April 8th) and a pre-conference day of day-long seminars (April 7th). It'll be a huge event with over 800 attendees and over 20 MVPs. I'll be presenting on Saturday, April 9th.     SQL in the City: July 15th 2011 in London, UK. One day of free SQL Server training sponsored by Redgate. Other MVPs who'll be presenting there are Steve Jones (website|twitter), Brad McGehee (blog|twitter) and Grant Fritchey (blog|twitter).   At both conferences I'll be presenting about database testing. In the sessions I'll cover a few things from my book The Red Gate Guide to SQL Server Team-based Development, like what we need for testing, how to go about it, and some of the obstacles we have to overcome. If you're around, come and say hi!

  • Two Sun Certification Exams To Retire August 1, 2010

    - by Paul Sorensen
    Effective August 1, 2010, Exam CX-310-400 ("Sun Certified Integrator for Identity Manager 7.1"), currently part of the "Sun Certified Integrator for Identity Manager 7.1" certification track, will be retired. We will also retire Exam CX-310-502 ("Sun Certified Java CAPS Integrator"), currently within the "Sun Certified Java CAPS Integrator" certification track. Both exams will remain available for registration and testing at Prometric Testing Centers through July 31, 2010.

    CREDENTIAL VALIDITY
    Please note that these credentials remain valid indefinitely for those holding the certifications. These retirements therefore have no effect on those who complete the certification requirements before August 1, 2010.

    QUICK LINKS
    Retiring Exams:
    Exam CX-310-400 "Sun Certified Integrator for Identity Manager 7.1"
    Exam CX-310-502 "Sun Certified Java CAPS Integrator"
    Certification Tracks:
    Sun Certified Integrator for Identity Manager 7.1
    Sun Certified Java CAPS Integrator
    Learn more: Oracle Certification Retirements

  • Can't Install php5-msql

    - by user210445
    Hello friends, I'm finishing the process of installing Apache/PHP/MySQL, but this shows up:

    # sudo apt-get install mysql-server php5-msql
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package php5-msql

    After some adjustments this happened:

    angel@Voix:~$ sudo apt-get install mysql-server php5-mysql
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    mysql-server is already the newest version.
    php5-mysql is already the newest version.
    The following packages were automatically installed and are no longer required:
      gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu
    Use 'apt-get autoremove' to remove them.
    The following extra packages will be installed:
      mysql-server-5.5
    Suggested packages:
      tinyca mailx
    The following packages will be upgraded:
      mysql-server-5.5
    1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    4 not fully installed or removed.
    Need to get 0 B/8,827 kB of archives.
    After this operation, 32.7 MB of additional disk space will be used.
    Do you want to continue [Y/n]?
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    (Reading database ...
    dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed.
    (Reading database ... 172971 files and directories currently installed.)
    Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ...
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack):
     subprocess new pre-installation script returned error exit status 1
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    dpkg: error while cleaning up:
     subprocess new post-removal script returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    angel@Voix:~$ sudo apt-get install mysql-server-5.5
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu
    Use 'apt-get autoremove' to remove them.
    Suggested packages:
      tinyca mailx
    The following packages will be upgraded:
      mysql-server-5.5
    1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    4 not fully installed or removed.
    Need to get 0 B/8,827 kB of archives.
    After this operation, 32.7 MB of additional disk space will be used.
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    (Reading database ...
    dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed.
    (Reading database ... 172971 files and directories currently installed.)
    Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ...
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack):
     subprocess new pre-installation script returned error exit status 1
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    dpkg: error while cleaning up:
     subprocess new post-removal script returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    angel@Voix:~$ sudo apt-get install mysql-server php5-mysql
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    mysql-server is already the newest version.
    php5-mysql is already the newest version.
    The following packages were automatically installed and are no longer required:
      gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu
    Use 'apt-get autoremove' to remove them.
    The following extra packages will be installed:
      mysql-server-5.5
    Suggested packages:
      tinyca mailx
    The following packages will be upgraded:
      mysql-server-5.5
    1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    4 not fully installed or removed.
    Need to get 0 B/8,827 kB of archives.
    After this operation, 32.7 MB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    (Reading database ...
    dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed.
    (Reading database ... 172971 files and directories currently installed.)
    Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ...
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack):
     subprocess new pre-installation script returned error exit status 1
    debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
    dpkg: error while cleaning up:
     subprocess new post-removal script returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

  • Is there a better approach in migrating SIT SVN to UAT SVN?

    - by huahsin68
    In web development, the same piece of source code is deployed to SIT (system integration testing) SVN/WAS and UAT (user acceptance testing) SVN/WAS. Please take note that I am using Jenkins to build everything. I have already ensured that SIT SVN and UAT SVN are in sync by doing a manual diff on the two directories. Usually I ensure the SIT WAS is working fine and only then deploy to UAT WAS. But now a problem has shown up in UAT WAS while everything works fine in SIT WAS, and I suspect a migration fault occurred between SIT SVN and UAT SVN. Given this scenario, is there a better approach to handle this problem?

  • ArchBeat Link-o-Rama for 2012-08-31

    - by Bob Rhubart
    SOA Suite 11g Asynchronous Testing with soapUI | Greg Mally
    Greg Mally walks you through testing asynchronous web services with the free edition of soapUI.

    The Role of Oracle VM Server for SPARC in a Virtualization Strategy | Matthias Pfutzner
    Matthias Pfutzner's overview of hardware and software virtualization basics, and the role that Oracle VM Server for SPARC plays in a virtualization strategy.

    Cloud Computing: Oracle RDS on AWS - Connecting with DB tools | Tom Laszewski
    Cloud expert and author Tom Laszewski shares brief comments about the tools he used to connect two Oracle RDS instances in AWS.

    Keystore Wallet File – cwallet.sso – Zum Teufel! | Christian Screen
    "One of the items that trips up a FMW implementation, if only for mere minutes, is the cwallet.sso file," says Oracle ACE Christian Screen. In this short post he offers information to help you avoid landing on your face.

    Thought for the Day
    "With good program architecture debugging is a breeze, because bugs will be where they should be." — David May
    Source: SoftwareQuotes.com

  • What label of tests are BizUnit tests?

    - by charlie.mott
    BizUnit is defined as a "Framework for Automated Testing of Distributed Systems". However, I've never seen a catchy label to describe the sort of tests we create using this framework. They are not really "Unit Tests", that's for sure. "Integration Tests" might be a good definition, but I want a label that clearly separates it from the manual "System Integration Testing" phase of a project, where real instances of the integrated systems are used. Among some colleagues, we brainstormed some suggestions:

    Automated Integration Tests
    Stubbed Integration Tests
    Sandbox Integration Tests
    Localised Integration Tests

    All give a good view of the sorts of tests that are being done. I think "Stubbed Integration Tests" is the most catchy and descriptive, so I will use that until someone comes up with a better idea.

  • 2011 - ALMs for your development team and the people they work with.

    - by David V. Corbin
    Welcome to 2011; it is already shaping up to be a very exciting year. The title of the post is not about charitable giving, although that is also a great topic. Application Lifecycle Management, and the systems that support the environment, is, and 2011 will be a year where I expect many teams to invest heavily in this area. For those not familiar with ALM, it can be simplified down to "a comprehensive view of all of the ideas, requirements, activities and artifacts that impact an application over the course of its lifecycle, from concept until decommissioning". Obviously, this encompasses a large number of different areas even for relatively small and medium sized projects. In recent years, many teams have adopted methodologies which address individual aspects of this; but the majority of this adoption has resulted in "islands of improvement" rather than the desired comprehensive outcome... until now! Last year Microsoft released Team Foundation Server 2010 along with Visual Studio 2010 Ultimate Edition, and with these two in combination the situation has drastically changed. At last there is a single environment that is capable of handling all aspects of ALM, and is also capable of dealing with migration and integration with existing systems to make the transition to a single solution much easier. These possibilities (and practicalities) are nothing short of amazing. Architecture thru Testing integration? YES. Being able to correlate specific requirement items (and their history) to actual code (and code history)? YES. Identification of which tests will be potentially impacted by a given code change? YES. Resilient Automated Testing of User Interfaces? YES. Automatic Deployment Management? YES. Integration Level testing as part of (designated) Builds? YES. I could easily double or triple the above list, but these items should be enough to get you thinking about the "pain points" your team and organization currently face and the fact that there IS a way to relieve the pain. Over the course of the year, I am hoping to bring together some of the "best of breed" information, along with hosting (and participating in) discussions with various experts in the field. There are already a number of groups (including many on LinkedIn) that have an ALM focus, and I encourage everyone to check them out. I will be posting a list of the ones I find most helpful in the not too distant future. As I said at the beginning, 2011 is shaping up to be a very interesting (and productive) year. Why wait to start investigating and adopting ALM? ps: For those interested in becoming an "Alms Giver" in the charitable sense, I highly recommend checking out GiveCamp. A group of developers, designers and others get together to create a solution for a charity in just under 48 hours. I will be attending the GiveCamp in New York City on Jan 14-16; more information is available at nycgivecamp.org/

  • Code Behaviour via Unit Tests

    - by Dewald Galjaard
    Some four months ago my car started acting up. Symptoms included a sputtering as my car's computer switched between gears intermittently. Imagine building up speed, then when you reach 80km/h the car magically and mysteriously decides to switch back to third or even second gear. Clearly it was confused! I managed to track down a technician, an expert in his field, to help me out. As he fitted his handheld computer to some hidden port under the dash, he started to explain: "These cars are quite intelligent, you know. When they sense something is wrong they run in a restrictive program which probably accounts for how you managed to drive here in the first place..."  I was surprised and thought this was certainly going to be an interesting test drive. The car ran smoothly down the first couple of stretches as the technician ran through routine checks. Then he said "Ok, all looking good. We need to start testing aspects of the gearbox. Inside the gearbox there are a couple of sensors. One of them is a speed sensor which talks to the computer, which in turn will decide which gear to switch to. The restrictive program avoids these sensors altogether and allows the computer to obtain its input from other [non-affected] sources". Then, as soon as he forced the speed sensor to come back online, the symptoms and ill behaviour re-emerged... What an incredible analogy for getting into a discussion on unit testing software! Besides, I should probably put my ill fortune to some good use, right? This example provides a lot of insight into how and why we should conduct unit tests when writing code. More importantly, it captures what is easily, and unfortunately often, the most overlooked goal of writing unit tests, by those new to the art and those who oppose it alike: the goal of writing unit tests is to test the behaviour of our code under predefined conditions. Although it is very possible to test the intrinsic workings of each and every component in your code, writing several tests for each method will in practice soon prove to be an exhausting and ultimately fruitless exercise, given the certain and ever changing nature of business requirements. Consequently, it is true and quite possible, whilst conducting proper unit tests, to call any single method several times as you examine and contemplate different scenarios. Let's write some code to demonstrate what I mean. In my example I make use of the Moq framework and NUnit to create my tests. Truly you can use whatever you're comfortable with. First we'll create an ISpeedSensor interface. This is to represent the speed sensor located in the gearbox.  Then we'll create a Gearbox class which we'll pass to a constructor when we instantiate an object of type Computer.
    All three are described below.

    ISpeedSensor.cs

        namespace AutomaticVehicle
        {
            public interface ISpeedSensor
            {
                int ReportCurrentSpeed();
            }
        }

    Gearbox.cs

        namespace AutomaticVehicle
        {
            public class Gearbox
            {
                private ISpeedSensor _speedSensor;

                public Gearbox( ISpeedSensor gearboxSpeedSensor )
                {
                    _speedSensor = gearboxSpeedSensor;
                }

                /// <summary>
                /// This method obtains its reading from the speed sensor.
                /// </summary>
                /// <returns></returns>
                public int ReportCurrentSpeed()
                {
                    return _speedSensor.ReportCurrentSpeed();
                }
            }
        }

    Computer.cs

        namespace AutomaticVehicle
        {
            public class Computer
            {
                private Gearbox _gearbox;

                public Computer( Gearbox gearbox )
                {
                    // Keep a reference to the gearbox so GetCurrentSpeed can delegate to it.
                    _gearbox = gearbox;
                }

                public int GetCurrentSpeed()
                {
                    return _gearbox.ReportCurrentSpeed( );
                }
            }
        }

    Since this post is about unit testing, that is exactly what we'll create next. Create a second project in your solution. I called mine AutomaticVehicleTests and I immediately referenced the respective nunit, moq and AutomaticVehicle dlls. We're going to write a test to examine what happens inside the Computer class.

    ComputerTests.cs

        namespace AutomaticVehicleTests
        {
            [TestFixture]
            public class ComputerTests
            {
                [Test]
                public void Computer_Gearbox_SpeedSensor_DoesThrow()
                {
                    // Mock ISpeedSensor in gearbox
                    Mock< ISpeedSensor > speedSensor = new Mock< ISpeedSensor >( );
                    speedSensor.Setup( n => n.ReportCurrentSpeed() ).Throws<Exception>();

                    Gearbox gearbox = new Gearbox( speedSensor.Object );

                    // Create Computer instance to test its behaviour towards an exception in gearbox
                    Computer carComputer = new Computer( gearbox );

                    // For simplicity let's assume for now the car only travels at 60 km/h.
                    Assert.AreEqual( 60, carComputer.GetCurrentSpeed( ) );
                }
            }
        }

    What is happening in this test? We have created a mocked object using the ISpeedSensor interface, which we've passed to our Gearbox object. Notice that I created the mocked object using an interface, not the implementation. I'll talk more about this in future posts, but in short I do this to accentuate the fact that I'm not really concerned with how SpeedSensor works internally at this particular point in time. Next I've gone ahead and created a scenario where I've declared the speed sensor in Gearbox to be faulty, by forcing it to throw an exception should we ask Gearbox to report on its current speed. Sneaky, sneaky. This test is a simulation of how things may behave in the real world. Inevitably things break, whether it's caused by mechanical failure, some logical error on your part, or a fellow developer who didn't consult the documentation (or the lack thereof) - whether you're calling a speed sensor, making a call to a database, calling a web service or just trying to write a file to disk. It's a scenario I've created, and this test is about how the code within the Computer instance will behave towards any such error as I've depicted. Now, if you've followed closely, in my final assert method you would have noticed I did something quite unexpected. I might be getting ahead of myself now, but I'm testing to see if the value returned is equal to what I expect it to be under perfect conditions – I'm not testing to see if an error has been thrown!
    Why is that? Well, in short, this is TDD. Test Driven Development is about first writing your test to define the result we want, then going back and changing the implementation within your class to obtain the desired output (I need to make sure I can drive back to the repair shop. Remember?). So let's go ahead and run our test as is. It fails miserably... Good! Let's go back to our Computer class and make a small change to the GetCurrentSpeed method.

    Computer.cs

        public int GetCurrentSpeed()
        {
            try
            {
                return _gearbox.ReportCurrentSpeed( );
            }
            catch
            {
                // The sensor input failed: fall back to the restrictive
                // program, which reports a safe default speed.
                return RunRestrictiveProgram( );
            }
        }

    This is a simple solution, I know, but it does provide a way to allow for different behaviour. You're more than welcome to provide an implementation for RunRestrictiveProgram should you feel the need to. It's not within the scope of this post or related to the point I'm trying to make. What is important is to notice how the focus has shifted in our approach, from how things can break - to how things behave when broken.

    Happy coding!
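    PS: As a quick sketch of my own (not from the original post), the same mocked sensor can pin down the happy path as well, confirming that Computer simply relays the sensor reading when nothing throws:

        [Test]
        public void Computer_Gearbox_SpeedSensor_ReportsReading()
        {
            // Healthy sensor this time: it reports 80 km/h instead of throwing.
            Mock< ISpeedSensor > speedSensor = new Mock< ISpeedSensor >( );
            speedSensor.Setup( n => n.ReportCurrentSpeed() ).Returns( 80 );

            Computer carComputer = new Computer( new Gearbox( speedSensor.Object ) );

            // No exception raised, so the restrictive program stays out of the way.
            Assert.AreEqual( 80, carComputer.GetCurrentSpeed( ) );
        }

    Together, the two tests describe the behaviour we care about: relay the sensor reading when it works, and fall back safely when it fails.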

  • What's the best version control/QA workflow for a legacy system?

    - by John Cromartie
    I am struggling to find a good balance with our development and testing process. We use Git right now, and I am convinced that ReinH's Git Workflow For Agile Teams is not just great for capital-A Agile, but for pretty much any team on a DVCS. That's what I've tried to implement, but it's just not catching on. We have a large legacy system with a complex environment, hundreds of outstanding and undiscovered defects, and no really good way to set up a test environment with realistic data. It's also hard to release updates without disrupting users. Most of all, it's hard to do thorough QA with this process... and we need thorough testing with this legacy system. I feel like we can't really pull off anything as slick as the Git workflow outlined in the link. What's the right way to do it?

  • Attachment handling for web application with Jackrabbit

    - by Andrea Girardi
    I need to manage attachments in my Spring web application, and I thought to use an open source repository. My app is a job approval system using the J2EE / Spring 3 Framework and a Postgres DB, which allows users to track jobs through every step of the approval process. It is a fully managed, collaborative system that operates from a central server and is accessed by a standard internet browser. A user should be able to add an attachment to a request or an approval step, so I thought to use Jackrabbit with the Postgres database persistence manager. I took a look at this post: http://onjava.com/pub/a/onjava/2006/10/04/what-is-java-content-repository.html?page=1 It's really interesting, but I have some questions about this kind of solution: I've seen that Jackrabbit standalone ships with an embedded Derby database for persistence. Is that enough for professional use of the repository, with more than 50 requests/day (with attachments)? Is there a reason why I should use another database manager for persistence instead of the default one?

  • Can Separation of Duties Deter Cybercrime? YES!

    - by roxana.bradescu
    According to the CERT 2010 CyberSecurity Watch Survey: The public may not be aware of the number of incidents because almost three-quarters (72%), on average, of the insider incidents are handled internally without legal action or the involvement of law enforcement. However, cybercrimes committed by insiders are often more costly and damaging than attacks from outside. When asked what security policies and procedures supported or played a role in the deterrence of a potential cybercriminal, 36% said technically-enforced segregation of duties. In fact, many data protection regulations call for separation of duties and enforcement of least privilege. Oracle Database Security solutions can help you meet these requirements and prevent insider threats by preventing privileged IT staff from accessing the data they are charged with managing, ensuring developers and testers don't have access to production data, making sure that all database activity is monitored and audited to prevent abuse, and more. All without changes to your existing applications or costly infrastructure investments. To learn more, watch our Oracle Database Management Separation of Duties for Security and Regulatory Compliance webcast.

  • CAMeditor v1.9 &ndash; thoughts and reflections

    - by david.webber(at)oracle.com
    We recently published the latest iteration of the CAMeditor tool on Sourceforge.net, including more enhancements to the NIEM capabilities. This release represents an incremental improvement over the prior version, with mostly bug fixes and patches. We're now working on the full v2.0 release, which will feature substantial improvements and new features in practically all areas.  Most importantly, we are improving the dictionary handling and providing the ability to visually design new exchange schemas directly from dictionary sets of components. In addition, we are doing some interim release work on 1.9.x, with patches and enhancements particularly to support running on Ubuntu and non-Windows platforms. And we are also providing an Ant-script-based deployment for the CAMV validation engine, so you can do unit testing of batches of templates and XML instance samples using command line scripts. More updates will be forthcoming as we make early release versions available for testing purposes.

  • No databases showing in phpMyAdmin

    - by Thein Hla Maw
    My website is hosted on a shared hosting service and is working fine, with updated news stored in a MySQL database. To manage the website's database, I installed phpMyAdmin in a sub-folder with the same username and password used by the website. When I log in to phpMyAdmin, I don't see my database; phpMyAdmin shows "No databases" in the left pane. Is there anything I need to configure in phpMyAdmin?

    Edited: These are the settings in config.inc.php. I can log in to phpMyAdmin successfully.

    $cfg['Servers'][$i]['host'] = 'hostname';
    $cfg['Servers'][$i]['port'] = '';
    $cfg['Servers'][$i]['socket'] = '';
    $cfg['Servers'][$i]['connect_type'] = 'tcp';
    $cfg['Servers'][$i]['extension'] = 'mysqli';
    $cfg['Servers'][$i]['auth_type'] = 'cookie';
    $cfg['Servers'][$i]['user'] = 'dbuser';
    $cfg['Servers'][$i]['password'] = 'password';

  • What are the downsides of leaving automation tags in production code?

    - by joshin4colours
    I've been setting up debug tags for automated testing of a GWT-based web application. This involves turning on custom debug id tags/attributes for elements in the source of the app. It's a non-trivial task, particularly for larger, more complex web applications. Recently there's been some discussion of whether enabling such debug ids across the board is a good idea. Currently the debug ids are only turned on in development and testing servers, not in production. Points have been raised that enabling debug ids causes performance to take a hit, and that debug ids in production may lead to security issues. What are the benefits of doing this? Are there any significant risks to turning on debug tags in production code?

  • JPRT: A Build & Test System

    - by kto
    DRAFT

    A while back I did a little blogging on a system called JPRT, the hardware used and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;^)

    Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed. Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply the number of platforms by 8, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html). We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic; the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and well, trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although the issues with consistency of the build machines haven't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.

    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/tests, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer. Well, maybe 'evil' isn't the right word; I haven't met many 'evil' developers, more like 'error prone' developers. ;^) Humm, come to think about it, I may be one from time to time. :^{ But the point is that by spreading the build up over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology, we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT). Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system; tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;^)

    -kto

  • Deploying a very simple application

    - by vanna
    I have a very simple working console application written in C++, linked with a light static library. It is just for testing purposes. Now that the coding part is done, I would like to know the process of actually deploying the program. I wrote a very basic CMakeLists.txt that creates makefiles or VS projects to build the sources. I also have a program that calls the static library in order to run some Google Tests. To me, the distribution of this application goes like this:

    to developers: the src directory with the CMakeLists.txt file (multi-platform distribution), with a README.txt and an INSTALL.txt
    to users: the executable and a README.txt
    git repo: everything mentioned above, plus the sources for testing and the gtest external lib

    At this point: considering the complexity of my application, am I doing it right? Is there any reference that would formalize this deployment process so I can get better and go further? Say I would like to add dynamic libraries that can be updated, or external libraries like Boost: how should I package this to deploy it in a professional way?

  • Mobile phone detection (brand, model, browser etc)

    - by SyaZ
    What do you use to detect a visitor's mobile phone, down to the model if possible? Currently we maintain our own database, but it's really getting behind due to lack of manpower to maintain it, so we decided to give a 3rd party solution a try. These are my candidates, but I don't have time to really try them all:

    DeviceAtlas - 1 year personal evaluation, but the basic license is affordable. Their database looks solid, with daily updates and user-contributed tests / updates. I am favoring this one at the moment.
    DetectRight - I was recommended this by a colleague, but I really can't find much on their site. 20k devices -- really?
    WURFL - Open source, with a database collaboratively derived from UAProf. So basically, if you're going with a UAProf solution, you're better off with WURFL.
    DetectMoBileBrowsers - This looks like the simplest of all. Too bad it's language-dependent (PHP).

    Please share your experience or suggestions!

  • /etc/postfix/transport missing; what should it look like?

    - by Thufir
    I'm following the mailman guide but couldn't locate /etc/postfix/transport, so I created it as follows:

    root@dur:~# cat /etc/postfix/transport
    dur.bounceme.net mailman:
    root@dur:~#
    root@dur:~# telnet localhost 25
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    220 dur.bounceme.net ESMTP Postfix (Ubuntu)
    ehlo fqdn_test
    250-dur.bounceme.net
    250-PIPELINING
    250-SIZE 10240000
    250-VRFY
    250-ETRN
    250-STARTTLS
    250-ENHANCEDSTATUSCODES
    250-8BITMIME
    250 DSN
    mail from:[email protected]
    250 2.1.0 Ok
    rcpt to:thufir@localhost
    451 4.3.0 <thufir@localhost>: Temporary lookup failure
    rcpt to:[email protected]
    451 4.3.0 <[email protected]>: Temporary lookup failure
    quit
    221 2.0.0 Bye
    Connection closed by foreign host.
    root@dur:~#
    root@dur:~# postconf -n
    alias_database = hash:/etc/aliases
    alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
    append_dot_mydomain = no
    biff = no
    broken_sasl_auth_clients = yes
    config_directory = /etc/postfix
    default_transport = smtp
    home_mailbox = Maildir/
    inet_interfaces = loopback-only
    mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
    mailbox_size_limit = 0
    mailman_destination_recipient_limit = 1
    mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
    myhostname = dur.bounceme.net
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    readme_directory = no
    recipient_delimiter = +
    relay_domains = lists.dur.bounceme.net
    relay_transport = relay
    relayhost =
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtp_use_tls = yes
    smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
    smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    smtpd_sasl_auth_enable = yes
    smtpd_sasl_authenticated_header = yes
    smtpd_sasl_local_domain = $myhostname
    smtpd_sasl_path = private/dovecot-auth
    smtpd_sasl_security_options = noanonymous
    smtpd_sasl_type = dovecot
    smtpd_tls_auth_only = yes
    smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
    smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
    smtpd_tls_mandatory_ciphers = medium
    smtpd_tls_mandatory_protocols = SSLv3, TLSv1
    smtpd_tls_received_header = yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtpd_use_tls = yes
    tls_random_source = dev:/dev/urandom
    transport_maps = hash:/etc/postfix/transport
    root@dur:~#
    root@dur:~# tail /var/log/mail.log
    Aug 28 02:05:15 dur postfix/smtpd[20326]: connect from localhost[127.0.0.1]
    Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. open database /var/lib/mailman/data/aliases.db: No such file or directory
    Aug 28 02:06:10 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "thufir@localhost"
    Aug 28 02:06:10 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <thufir@localhost>: Temporary lookup failure; from=<[email protected]> to=<thufir@localhost> proto=ESMTP helo=<fqdn_test>
    Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases is unavailable. open database /var/lib/mailman/data/aliases.db: No such file or directory
    Aug 28 02:06:23 dur postfix/smtpd[20326]: warning: hash:/var/lib/mailman/data/aliases lookup error for "[email protected]"
    Aug 28 02:06:23 dur postfix/smtpd[20326]: NOQUEUE: reject: RCPT from localhost[127.0.0.1]: 451 4.3.0 <[email protected]>: Temporary lookup failure; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<fqdn_test>
    Aug 28 02:06:28 dur postfix/smtpd[20326]: disconnect from localhost[127.0.0.1]
    Aug 28 02:06:49 dur dovecot: pop3-login: Login: user=<thufir>, method=PLAIN, rip=127.0.0.1, lip=127.0.0.1, mpid=20338, TLS
    Aug 28 02:06:49 dur dovecot: pop3(thufir): Disconnected: Logged out top=0/0, retr=0/0, del=0/0, size=0
    root@dur:~#

    The manual page is here.

  • SO-Aware @ TechReady (Microsoft Event)

    - by SURESH GIRIRAJAN
    A session on SO-Aware was presented at the Microsoft TechReady event this week; see here for more details: http://tellagostudios.com/blog/so-aware-highlighted-microsoft-techready

    Check here for more details on SO-Aware and how to leverage it within your enterprise if you're using BizTalk Server, WCF services, or services built on Azure. It provides a lot of capabilities, such as:

    o    Centralized service repository
    o    Centralized configuration management
    o    Service testing
    o    Monitoring
    o    Transparent integration with technologies such as Visual Studio, BizTalk Server, Windows Server & Azure AppFabric, among many others
    o    SO-Aware Test Workbench, which provides developers with a visually rich environment to model and control the execution of load and functional tests in a SOA infrastructure. This tool includes the first native WCF load testing engine, allowing developers to transparently load test applications built on Microsoft's service-oriented technologies such as WCF, BizTalk Server, or the Windows Server or Azure AppFabric.

  • How to convince my boss to improve code quality?

    - by Vimvq1987
    The place I'm working for is a service provider. We have a lot of services, which were written against deadlines, so their code is really terrible:

    No coding convention; everyone codes in his own style
    No unit testing (which is really bad)
    No refactoring (which is truly worse)
    No automated build/deployment
    etc.

    And this code is used again and again, so bad code continues to spread all over my department. I really want to set up a quality standard for our code by requiring everyone to follow "rules": every line of code which does not follow convention will be rejected, and every function which does not pass unit testing will not be committed... But I don't know how to convince my boss to allow me to do this. I'm a relative newcomer, so inspiring people through my work alone is really hard, and I think it would be easier if my boss supported me in this. Thank you very much for your advice.

  • New Whitepaper: Deploying E-Business Suite on Exadata and Exalogic

    - by Elke Phelps (Oracle Development)
    Our E-Business Suite Performance Team recently published a new whitepaper to assist you with deploying E-Business Suite on the Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine, also referred to as Exastack. If you are considering a migration to Exastack, this new whitepaper will assist you in understanding sizing requirements, deployment standards and migration strategies: Deploying Oracle E-Business Suite on Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine (Note 1460742.1)

    This whitepaper covers the following topics:

    Scalability and Sizing Examples - provides performance benchmark analysis with concurrent user counts, scaling analysis and sizing recommendations
    Deployment Standards - includes recommendations for deploying the various components of the E-Business Suite architecture on Exastack
    Migration Standards and Guidelines - includes an overview of methods for migrating from commodity hardware to Exastack

    References

    Our Maximum Availability Architecture (MAA) team has a number of whitepapers that provide additional information regarding Oracle E-Business Suite on the Oracle Exadata Database Machine. Their library of whitepapers may be found here: MAA Best Practices - Oracle Applications Unlimited

    Related Articles

    Running E-Business Suite on Exadata V2
    Running Oracle E-Business Suite on Exalogic Elastic Cloud

  • MVC - Business Logic

    - by BriskLabs Pakistan
    I have created a simple MVC-based Java application. It helps the user add records to a database through data forms. I want the data that I put into the database as a record to be worked upon, i.e. to have calculations performed on it. The original data should remain unaffected, while the new data, after the calculations are performed, must be stored as a new entity record in the database. Where should I write the code for this background calculation, since it is the rules and business logic? In a new Java beans file? Please guide. Regards

  • SQLIO Writes

    - by Grant Fritchey
    SQLIO is a fantastic utility for testing the abilities of the disks in your system. It has a very unfortunate name, though, since it's not really a SQL Server testing utility at all. It really is a disk utility. They ought to call it DiskIO, because they'd get more people using it, I think. Anyway, branding is not the point of this blog post. Writes are the point of this blog post.

    SQLIO works by slamming your disk. It performs as many reads as it can, or it performs as many writes as it can, depending on how you've configured your tests. There are much smarter people than me who will get into all the various types of tests you should run. I'd suggest reading a bit of what Jonathan Kehayias (blog|twitter) has to say or wading into Denny Cherry's (blog|twitter) work. They're going to do a better job than I can of describing all the benefits and mechanisms around using this excellent piece of software.

    My concerns are very focused. I needed to set up a series of tests to see how well our product SQL Storage Compress worked. I wanted to know the effects it would have on a system: the disk for sure, but also memory and CPU. How to stress the system? SQLIO of course. But when I set it up and ran it, following the documentation that comes with it, I was seeing better than 99% compression on the files. Don't get me wrong. Our product is magnificent, wonderful, all things great and beautiful, gets you coffee in the morning and is made mostly from bacon. But 99% compression? No, it's not that good. So what's up?

    Well, it's the configuration. The default mechanism is to load up a file, something large that will overwhelm your disk cache. You're instructed to load the file with the character 0x0. I never got a computer science degree. I went to film school. Because of this, I didn't memorize ASCII tables, so when I saw this, I thought it was zeros or something. Nope. It's NULL. That's right, you're making a very large file, but you're filling it with NULL values. That's actually ok when all you're testing is the disk sub-system. But when you want to test compression and decompression, that can be an issue.

    I got around this fairly quickly. Instead of generating a file filled with NULL values, I just copied a database file for my tests. And to test it with SQL Storage Compress, I used a database file that had already been run through compression (about 40% compression on that file, if you're interested). Now the reads were taken care of. I am seeing very realistic performance from decompressing the information for reads through SQLIO.

    But what about writes? Well, the issue is, what does SQLIO write? I don't have access to the code. But I do have access to the results. I did two different tests, just to be sure of what I was seeing. First test: use the .DAT file as described in the documentation. I opened the .DAT file after I was done with SQLIO, using WordPad. Guess what? It's a giant file full of air. SQLIO writes NULL values. What does that do to compression? I did the test again on a copy of an uncompressed database file. Then I ran the original and the SQLIO-modified copy through ZIP to see what happened. I got better than 99% compression out of the SQLIO-modified file (the original file of 624,896kb went to 275,871kb compressed; after SQLIO it went to 608kb compressed).

    So, what does SQLIO write? It writes air. If you're trying to test it with compression or maybe some other type of file storage mechanism like dedupe, you need to know this, because your tests really won't be valid.
Should I find some other mechanism for testing? Yeah, if all I'm interested in is establishing performance to my own satisfaction, yes. But, I want to be able to compare my results with other people's results and we all need to be using the same tool in order for that to happen. SQLIO is the common mechanism that most people I know use to establish disk performance behavior. It'd be better if we could get SQLIO to do writes in some other fashion. Oh, and before I go, I get to brag a bit. Measuring IOPS, SQL Storage Compress outperforms my disk alone by about 30%.
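    If you want a write-test file that isn't full of air, one option (a sketch of my own, not part of SQLIO; the path and size below are placeholders) is to pre-create the file with random bytes before pointing SQLIO at it, the same way the post above pre-creates it from a copied database file:

        using System;
        using System.IO;

        class MakeTestFile
        {
            static void Main()
            {
                // Placeholder path and size: match the file named in your SQLIO
                // parameter file, sized large enough to overwhelm the disk cache.
                const string path = @"C:\SQLIO\testfile.dat";
                const long sizeInBytes = 8L * 1024 * 1024 * 1024; // 8 GB

                var rng = new Random();
                var buffer = new byte[1024 * 1024]; // write in 1 MB chunks
                using (var file = new FileStream(path, FileMode.Create, FileAccess.Write))
                {
                    for (long written = 0; written < sizeInBytes; written += buffer.Length)
                    {
                        rng.NextBytes(buffer); // random bytes, unlike NULLs, barely compress
                        file.Write(buffer, 0, buffer.Length);
                    }
                }
                Console.WriteLine("Wrote {0} bytes to {1}", sizeInBytes, path);
            }
        }

    Keep in mind that random data is the opposite extreme from NULLs, since it hardly compresses at all; for compression testing, a copy of a real database file, as described above, is still the more realistic middle ground.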
