Search Results

Search found 8979 results on 360 pages for 'backup sessions'.


  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume, as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume: [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql dump.sql.1 sent 821704315 bytes received 31 bytes 3416650.09 bytes/sec total size is 821603948 speedup is 1.00 Then, I ran it again, expecting to see minimal sent/received numbers because it would just be checksums. Instead... [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql dump.sql.1 sent 821704315 bytes received 31 bytes 3402502.47 bytes/sec total size is 821603948 speedup is 1.00 I'm new to rsync so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
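
    An aside not from the original question: rsync silently switches to --whole-file (disabling the delta-transfer algorithm) whenever both paths look local, and a path on an s3fs mount counts as local. A minimal sketch of forcing delta transfer, reusing the file names above:

      # Both operands are "local" paths here, so rsync defaults to --whole-file
      # and skips the delta algorithm entirely. Force it back on:
      rsync -v --no-whole-file --inplace dump.sql.1 ../backup/dump.sql
      # Caveat: the receiver must still read the entire destination file to
      # compute block checksums, so over s3fs the full object may be fetched
      # from S3 anyway, limiting the savings.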

    Read the article

  • Advice on cloning disk

    - by hks
    I'm going to buy a second disk for backup, the same size as my laptop's. I want to mount it in a casing via USB and back up the entire HDD every so often. That's because I want the possibility to just switch drives in case something goes wrong. I'm using Linux, and obviously the right tool seems to be dd. The thing is that my laptop drive has a speed of around 50-70 MB/s and USB 2.0 tops out at about 57 MB/s, so copying my 250 GB disk should take more than an hour if I'm lucky. I can't wait that long; I want some kind of differential backup. I read one of JWZ's articles in which he gives details on using rsync on the Mac. He writes that it is possible to make the rsync'ed disk bootable. So my question is: how do I make an rsync'ed HDD bootable under Linux, or are there other 'quick backup' tools for Linux that would allow me to just swap drives? Or should I just stick to dd :( ?
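
    For reference, a common approach is to clone the filesystem with rsync and then reinstall the bootloader on the clone. This is a sketch under assumed device names (/dev/sdb as the external disk, /dev/sdb1 its root partition), not a tested recipe:

      # Mount the already-partitioned and formatted clone disk.
      mount /dev/sdb1 /mnt/clone
      # -a preserves permissions/owners/times, -H hard links, -A ACLs,
      # -X xattrs, -x stays on one filesystem; --delete keeps it in sync.
      rsync -aHAXx --delete / /mnt/clone
      # Make the clone bootable: bind system dirs, chroot, reinstall GRUB.
      for d in dev proc sys; do mount --bind /$d /mnt/clone/$d; done
      chroot /mnt/clone grub-install /dev/sdb
      # Also check /mnt/clone/etc/fstab: if it references the old disk by
      # UUID, update it to the clone partition's UUID (see blkid).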

    Read the article

  • Provide a user with service start/stop permissions

    - by slakr007
    I have a very basic domain that I use for development. I want to create a GPO that provides users in the Backup Operators group with start/stop permissions for two specific services on a specific server. I have read several articles about this, and they all indicate that this is very easy. Create a GPO, give the user start/stop permissions to the services under Computer Configuration > Policies > Windows Settings > Security Settings > System Services, and voila, done. Not so much, so I have to be doing something wrong. My install is pretty much the default. The domain controller is in the Domain Controllers OU, the Backup Operators group is under Builtin, and I created a user called Backup under Users. I created a GPO and linked it to the Domain Controllers OU. In the GPO I give the Backup user permission to start/stop two specific services on the server. I forced an update with gpupdate. I used Group Policy Results to verify that my GPO is the winning GPO giving the user the permission to start/stop the two services. However, the user is still unable to start/stop the services. I attempted different loopback settings on the GPO to no avail. I'm sort of at a loss here.
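
    A side note added here, not part of the original question: service start/stop rights can also be granted directly on the target server with an SDDL edit, which helps isolate whether GPO delivery or the permission itself is failing. A sketch, assuming a service named MyService; the existing ACEs must be copied from the sdshow output, with the (…;;;BO) entry for Backup Operators being the addition:

      rem Dump the service's current security descriptor (SDDL string).
      sc sdshow MyService
      rem Re-apply it with one extra ACE granting Backup Operators (BO)
      rem start (RP), stop (WP), pause/continue (DT) and standard
      rem query/read rights; keep the original ACEs from sdshow intact:
      sc sdset MyService "D:(A;;CCLCSWRPWPDTLOCRRC;;;BO)(A;;CCLCSWRPWPDTLOCRRC;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)"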

    Read the article

  • Five Reasons to Attend PLM Summit 2013: The Conference Formerly Known as AGILITY

    - by Terri Hiskey
    As we approach the end of 2012, we are also closing in on the last couple of weeks that Agile customers and prospects can register for the upcoming PLM Summit 2013 for the bargain early bird rate of $195. Register now to secure your spot! The Conference Formerly Known as AGILITY... Long-time Agile customers may remember AGILITY, which was Agile's PLM customer conference that was held on an annual basis prior to Oracle's acquisition of Agile in 2007. In February 2012, due to feedback we received from our Agile PLM community, we successfully resurrected the AGILITY conference and renamed it the PLM Summit. The PLM Summit was so well received and well attended that we are doing it again in 2013. This upcoming PLM Summit is being co-located in San Francisco under the overarching banner of the Oracle Value Chain Summit, and will be held alongside several other Oracle customer conferences that cover a range of value chain solutions, including Value Chain Planning, Value Chain Execution, Procurement, Maintenance and Manufacturing. This setup offers PLM attendees the best of all worlds--the opportunity to participate and learn about PLM in smaller, focused sessions by product and by industry, while also giving attendees the chance to see how PLM works together with other critical enterprise applications that address other important aspects of the value chain. Top Five Reasons to Attend the PLM Summit 2013 In the spirit of all of the end-of-the-year lists that are currently popping up, here is a list of the top five reasons to attend the PLM Summit for anyone out there who needs a little extra encouragement to register: 1. The Best Opportunities for Customer Networking The PLM Summit offers attendees numerous opportunities to learn and network with fellow Agile users. Customer stories are featured in keynote and breakout presentations and the schedule allows for plenty of networking time during breakfasts, lunches, breaks and dinners. Customer networking is the number one reason that Agile users attend the PLM Summit. Read what attendees thought of the most recent PLM Summit: "Hearing about the implementation of Agile products from a customer's perspective is invaluable." - Director of Quality Assurance & Regulatory Affairs, leading medical device manufacturer "Understanding the scope of other companies' projects and the lessons learned made attending this event well worth my time." - Director of Test Engineering, global industrial manufacturer "The most beneficial thing about attending this event is the opportunity to network with other customers with similar experiences." - Director of Business Process Improvement, leading high technology company Come to the PLM Summit and play an active role within the PLM community: swap war stories and business cards, connect on LinkedIn and Facebook, share your stories and discuss the sessions from each day. Register now! 2. It's Educational! The PLM Summit is the premier educational event for anyone in the Agile PLM community. There are nearly 40 PLM-focused in-depth educational sessions led by Agile PLM experts, customers and partners that will cover a range of specific product and industry-focused topics. Keynotes will give attendees a broad overview of the entire Agile PLM footprint, while sessions will delve deeply into specific product functionality and customer case studies. There is truly something for everyone. Check out the latest agenda for a view of all the sessions. 3. Visit with the PLM Partner Community Our partners play a significant and important role within the Agile PLM community. At the PLM Summit, attendees will be able to meet and mingle with several of the top Oracle Agile PLM partners including: Deloitte, Domain, GoEngineer, Hitachi Consulting, IBM, Kalypso, KPIT Cummins (CPG Solutions), Perception Software, Verdant, Xavor and ZeroWaitState. Go here for a complete list of all the Value Chain Summit sponsors. 4. See Agile PLM in Action at our Dedicated PLM Demo Pods At the PLM Summit, attendees will have the chance to see Agile PLM in action at dedicated PLM demo pods, manned by expert members of our Agile PLM team. If you would like to see specific Agile PLM functionality up close, or if you have a question on how to extend the scope of your current implementation, or if you want a better understanding of how to leverage Agile PLM to address specific use cases, stop by one of the Agile PLM demo pods and engage the Agile PLM experts on hand at the PLM Summit. 5. Spend Some Time in Lovely San Francisco Still on the fence about the upcoming PLM Summit? Remember that it is being held in San Francisco, which is a fantastic city for a getaway. After spending time learning and networking about PLM, take an extra day or two to escape the dreary winter and enjoy the beautiful scenery and the unique activities offered only by the City by the Bay. You will walk away from the conference not only with renewed excitement about Agile PLM, but feeling rejuvenated in general.

    Read the article

  • Interview with Ronald Bradford about MySQL Connect

    - by Keith Larson
    Ronald Bradford, an Oracle ACE Director, has been busy with database consulting and book writing (Effective MySQL) while traveling and speaking around the world in support of MySQL. I was able to take some of his time to get an interview on his thoughts about the MySQL Connect conference. Keith Larson: What were your thoughts when you heard that Oracle was going to provide the community the MySQL Connect conference? Ronald Bradford: Oracle has already been providing various local community events including OTN Tech Days and MySQL community days. These are great for local regions both in the US and abroad. In previous years there has been an increase of content at Oracle Open World; however, that benefits the Oracle community far more than the MySQL community. It is good to see that Oracle is realizing the benefit in providing a large-scale dedicated event for the MySQL community that includes speakers from the MySQL development teams, invested companies in the ecosystem and other community evangelists. I fully expect a successful event and look forward to hopefully seeing MySQL Connect at the upcoming Brazil and Japan OOW conferences and perhaps an event on the East Coast. Keith Larson: Since you are part of the content committee, what did you think of the submissions that were received during the call for papers? Ronald Bradford: There was a large number of quality submissions relative to the number of available presentation sessions. As with previous years as a committee member for the annual MySQL conference, there is always a large variety of common cornerstone MySQL features as well as new products and upcoming companies sharing their MySQL experiences. All of the usual major players in the ecosystem will be presenting at MySQL Connect, including Facebook, Twitter, Yahoo, Continuent, Percona, Tokutek, Sphinx and Amazon to name a few. This ensures the event will have a large number of quality speakers, and attendees will have a difficult time choosing what to attend. Keith Larson: What sessions do you look forward to attending? Ronald Bradford: As with most quality conferences you can only be in one place at one time, so with multiple sessions per time slot it is always difficult to decide. The continued work and success with MySQL Cluster is reflected in a number of sessions that I am sure will be popular. The features that interest me the most are around the optimizer, where there are several sessions on new features, and the importance of backups; there are three presentations in this area to choose from. Keith Larson: Are you going to cover any of the content in your books at your MySQL Connect sessions? Ronald Bradford: I will be giving two presentations at MySQL Connect. The first will include the techniques available for creating better indexes, where I will be touching on some aspects of the first Effective MySQL book, Optimizing SQL Statements. In my second presentation, drawn from experiences of managing 500+ AWS MySQL instances, I will be touching on areas including SQL tuning, backup and recovery, and scale-out with replication. These are the key topics of the initial books in the Effective MySQL series, which focus on performance, scalability and business continuity. The books, however, cover far more detail than can be presented in a one-hour session. Keith Larson: What features of MySQL 5.6 do you look forward to the most? Ronald Bradford: I am very impressed with the optimizer trace feature. 
The ability to see this exposed information is invaluable, not just for MySQL 5.6, but also for applying what is discerned to optimizing SQL statements in earlier versions of MySQL. Not everybody understands that it is easy to deploy a MySQL 5.6 slave into an existing topology running an older version of MySQL for evaluation of many new features. You can use the new mysqlbinlog streaming feature for duplicating master binary logs on an older version with a MySQL 5.6 slave. The improvements in instrumentation in the Performance Schema are exciting. However, as covered in my upcoming Replication Techniques in Depth title, which will be available for sale at MySQL Connect, there are numerous replication features, some long overdue, which provide significant management benefits: crash-safe slaves, global transaction identifiers (GTIDs) and checksums, just to mention a few. Keith Larson: You have been to numerous conferences, what would you recommend for people at the conference? Ronald Bradford: Make the time to meet and introduce yourself to the speakers that cover the topics that most interest you. The MySQL ecosystem has a very strong community. The relationships you build with presenters, developers and architects in MySQL can be invaluable; however, they are created over time. Get to know these people, interact with them over time. This is the opportunity to learn more than just the content of a one-hour session. Keith Larson: Any additional tips for handling the long hours? Ronald Bradford: Conferences can be hard, especially with all the post-event drinking. This is a two-day event and I am sure it will include additional events on Friday and Saturday night, so come well prepared, and leave work behind. Take the time to learn something new. You can always catch up on sleep later. Keith Larson: Thank you so much for taking some time to do this. I look forward to seeing you at the MySQL Connect conference. Please stay tuned here for more updates on MySQL.
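
    For context, an illustration added here rather than taken from the interview: the MySQL 5.6 mysqlbinlog streaming feature mentioned above can mirror a master's binary logs to local disk, assuming a master reachable as master-host and a replication user named repl:

      # Connect as a fake slave, copy binlogs in their raw binary form,
      # and keep streaming new events as they are written (--stop-never).
      mysqlbinlog --read-from-remote-server --host=master-host --user=repl \
          --password --raw --stop-never mysql-bin.000001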

    Read the article

  • Test-Driven Development Problem

    - by Zeck
    Hi guys, I'm a newbie to Java EE 6 and I'm trying to develop a very simple JAX-RS application. The RESTful web service is working fine. However, when I ran my test application, I got the following. What have I done wrong? Or did I forget some configuration? Of course I created a JNDI data source, and I'm using the NetBeans 6.8 IDE. Finally, thank you for any advice. My Entity: @Entity @Table(name = "BOOK") @NamedQueries({ @NamedQuery(name = "Book.findAll", query = "SELECT b FROM Book b"), @NamedQuery(name = "Book.findById", query = "SELECT b FROM Book b WHERE b.id = :id"), @NamedQuery(name = "Book.findByTitle", query = "SELECT b FROM Book b WHERE b.title = :title"), @NamedQuery(name = "Book.findByDescription", query = "SELECT b FROM Book b WHERE b.description = :description"), @NamedQuery(name = "Book.findByPrice", query = "SELECT b FROM Book b WHERE b.price = :price"), @NamedQuery(name = "Book.findByNumberofpage", query = "SELECT b FROM Book b WHERE b.numberofpage = :numberofpage")}) public class Book implements Serializable { private static final long serialVersionUID = 1L; @Id @Basic(optional = false) @Column(name = "ID") private Integer id; @Basic(optional = false) @Column(name = "TITLE") private String title; @Basic(optional = false) @Column(name = "DESCRIPTION") private String description; @Basic(optional = false) @Column(name = "PRICE") private double price; @Basic(optional = false) @Column(name = "NUMBEROFPAGE") private int numberofpage; public Book() { } public Book(Integer id) { this.id = id; } public Book(Integer id, String title, String description, double price, int numberofpage) { this.id = id; this.title = title; this.description = description; this.price = price; this.numberofpage = numberofpage; } public Integer getId() { return id; } public void setId(Integer id) { this.id = id; } public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public String getDescription() { return description; } public void setDescription(String description) { this.description = description; } public double getPrice() { return price; } public void setPrice(double price) { this.price = price; } public int getNumberofpage() { return numberofpage; } public void setNumberofpage(int numberofpage) { this.numberofpage = numberofpage; } @Override public int hashCode() { int hash = 0; hash += (id != null ?
id.hashCode() : 0); return hash; } @Override public boolean equals(Object object) { // TODO: Warning - this method won't work in the case the id fields are not set if (!(object instanceof Book)) { return false; } Book other = (Book) object; if ((this.id == null && other.id != null) || (this.id != null && !this.id.equals(other.id))) { return false; } return true; } @Override public String toString() { return "com.entity.Book[id=" + id + "]"; } } My JUnit test class: public class BookTest { private static EntityManager em; private static EntityManagerFactory emf; public BookTest() { } @BeforeClass public static void setUpClass() throws Exception { emf = Persistence.createEntityManagerFactory("E01R01PU"); em = emf.createEntityManager(); } @AfterClass public static void tearDownClass() throws Exception { em.close(); emf.close(); } @Test public void createBook() { Book book = new Book(); book.setId(1); book.setDescription("Mastering the Behavior Driven Development with Ruby on Rails"); book.setTitle("Mastering the BDD"); book.setPrice(25.9f); book.setNumberofpage(1029); em.persist(book); assertNotNull("ID should not be null", book.getId()); } } My persistence.xml contains: <jta-data-source>BookstoreJNDI</jta-data-source> And the exception is: May 7, 2009 11:10:37 AM org.hibernate.validator.util.Version INFO: Hibernate Validator bean-validator-3.0-JBoss-4.0.2 May 7, 2009 11:10:37 AM org.hibernate.validator.engine.resolver.DefaultTraversableResolver detectJPA INFO: Instantiated an instance of org.hibernate.validator.engine.resolver.JPATraversableResolver. [EL Info]: 2009-05-07 11:10:37.531--ServerSession(13671123)--EclipseLink, version: Eclipse Persistence Services - 2.0.0.v20091127-r5931 May 7, 2009 11:10:40 AM com.sun.enterprise.transaction.JavaEETransactionManagerSimplified initDelegates INFO: Using com.sun.enterprise.transaction.jts.JavaEETransactionManagerJTSDelegate as the delegate May 7, 2009 11:10:43 AM com.sun.enterprise.connectors.ActiveRAFactory createActiveResourceAdapter SEVERE: rardeployment.class_not_found May 7, 2009 11:10:43 AM com.sun.enterprise.connectors.ActiveRAFactory createActiveResourceAdapter SEVERE: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR at com.sun.enterprise.connectors.ActiveRAFactory.createActiveResourceAdapter(ActiveRAFactory.java:104) at com.sun.enterprise.connectors.service.ResourceAdapterAdminServiceImpl.createActiveResourceAdapter(ResourceAdapterAdminServiceImpl.java:216) at com.sun.enterprise.connectors.ConnectorRuntime.createActiveResourceAdapter(ConnectorRuntime.java:352) at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:106) at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304) at com.sun.enterprise.naming.impl.SerialContext.getObjectInstance(SerialContext.java:472) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:437) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:569) at javax.naming.InitialContext.lookup(InitialContext.java:396) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:110) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:94) at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162) at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:584) at
org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:228) at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:368) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getServerSession(EntityManagerFactoryImpl.java:151) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:207) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:195) at com.entity.BookTest.setUpClass(BookTest.java:31) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:220) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:515) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1031) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:888) Caused by: java.lang.ClassNotFoundException: com.sun.gjc.spi.ResourceAdapter at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:276) at java.lang.ClassLoader.loadClass(ClassLoader.java:251) at com.sun.enterprise.connectors.ActiveRAFactory.createActiveResourceAdapter(ActiveRAFactory.java:96) ... 32 more [EL Severe]: 2009-05-07 11:10:43.937--ServerSession(13671123)--Local Exception Stack: Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.0.0.v20091127-r5931): org.eclipse.persistence.exceptions.ValidationException Exception Description: Cannot acquire data source [BookstoreJNDI]. 
Internal Exception: javax.naming.NamingException: Lookup failed for 'BookstoreJNDI' in SerialContext ,orb'sInitialHost=localhost,orb'sInitialPort=3700 [Root exception is javax.naming.NamingException: Failed to look up ConnectorDescriptor from JNDI [Root exception is com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR]] at org.eclipse.persistence.exceptions.ValidationException.cannotAcquireDataSource(ValidationException.java:451) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:116) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:94) at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162) at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:584) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:228) at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:368) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getServerSession(EntityManagerFactoryImpl.java:151) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:207) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:195) at com.entity.BookTest.setUpClass(BookTest.java:31) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) at org.junit.runners.ParentRunner.run(ParentRunner.java:220) at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:515) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1031) at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:888) Caused by: javax.naming.NamingException: Lookup failed for 'BookstoreJNDI' in SerialContext ,orb'sInitialHost=localhost,orb'sInitialPort=3700 [Root exception is javax.naming.NamingException: Failed to look up ConnectorDescriptor from JNDI [Root exception is com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR]] at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:442) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:569) at javax.naming.InitialContext.lookup(InitialContext.java:396) at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:110) ... 
23 more Caused by: javax.naming.NamingException: Failed to look up ConnectorDescriptor from JNDI [Root exception is com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR] at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:109) at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304) at com.sun.enterprise.naming.impl.SerialContext.getObjectInstance(SerialContext.java:472) at com.sun.enterprise.naming.impl.SerialContext.lookup(SerialContext.java:437) ... 26 more Caused by: com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR at com.sun.enterprise.connectors.ActiveRAFactory.createActiveResourceAdapter(ActiveRAFactory.java:104) at com.sun.enterprise.connectors.service.ResourceAdapterAdminServiceImpl.createActiveResourceAdapter(ResourceAdapterAdminServiceImpl.java:216) at com.sun.enterprise.connectors.ConnectorRuntime.createActiveResourceAdapter(ConnectorRuntime.java:352) at com.sun.enterprise.resource.naming.ConnectorObjectFactory.getObjectInstance(ConnectorObjectFactory.java:106) ... 29 more Caused by: java.lang.ClassNotFoundException: com.sun.gjc.spi.ResourceAdapter at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:276) at java.lang.ClassLoader.loadClass(ClassLoader.java:251) at com.sun.enterprise.connectors.ActiveRAFactory.createActiveResourceAdapter(ActiveRAFactory.java:96) ... 32 more Exception Description: Cannot acquire data source [BookstoreJNDI]. Internal Exception: javax.naming.NamingException: Lookup failed for 'BookstoreJNDI' in SerialContext ,orb'sInitialHost=localhost,orb'sInitialPort=3700 [Root exception is javax.naming.NamingException: Failed to look up ConnectorDescriptor from JNDI [Root exception is com.sun.appserv.connectors.internal.api.ConnectorRuntimeException: Error in creating active RAR]])

    Read the article

  • Manage Your Amazon S3 Account with CloudBerry Explorer

    - by Mysticgeek
    If you have an Amazon S3 account you’re using to back up your data, you might want an easy way to manage it. CloudBerry Explorer is a free app that runs on your desktop and provides an easy way to manage your S3 account. Installation and Setup Just download and install the application with the defaults. When the application launches you’ll be prompted to enter in your username and email to get a registration key. Or you can continue on by clicking Register later. Now you will want to set up your Amazon S3 account. Click on File \ Amazon S3 Accounts. Double-click on the New Account icon. Next enter in your Amazon account Access and Secret keys, select SSL if you want, then click the Test Connection button. Provided everything was entered correctly, you’ll see the Connection Success screen; just close out of it. Browse and Manage files Once you have your account set up through the Explorer, you can start viewing and managing your files on S3. The left pane shows your S3 buckets and stored files, while the right side shows your local computer. This allows you to manage the files in your Amazon S3 buckets directly from your desktop! It’s very easy to use, and you can drag and drop files from your computer to the S3 account or vice versa. There is also the ability to transfer files between Amazon S3 accounts from within the explorer. Go into Tools and Content Types and you can control the file types by adding, removing, or editing them. If you end up messing something up along the way, you can always select Reset to defaults and everything will be back to normal. There is a multiple-tab view so you can easily keep track of your different accounts and local machine. You can also create new storage buckets directly in the Explorer. Or you can delete buckets as well… Different actions can be accessed from the toolbars or by right-clicking and selecting from the context menu. Here we see a cool option that lets you move your data inside Amazon S3. It is faster and cheaper than moving the files to your computer first and then to another account. However, if you want data moved to your local machine first, you have that option as well. Not all features are available in the free version, and if a feature isn’t, you’ll be prompted to purchase a license for the Pro version. We will have a comprehensive review of the Pro version in the near future. If you ever need help with CloudBerry Explorer, go to Tools \ Diagnostics. It will run a quick diagnostics check and you can send the information to the CloudBerry team for assistance. Delete Files from Amazon S3 To delete a file from your Amazon S3 account, simply highlight the files or folder you want to get rid of, then click Delete on the toolbar. You can also right-click the file and select Delete from the Context Menu. Click Yes to the confirmation dialog box… Then you can watch the progress as your files are deleted in the bottom section of the explorer. Conclusion The free version of CloudBerry Explorer has several neat features that give you easy, basic control over your Amazon S3 account. The free version may be enough for basic users, but power users will want to upgrade to the Pro version, as it includes a lot more features. Using the free version allows you to get a feel for what CloudBerry Explorer has to offer, and is a good starting point. Keep in mind that Amazon S3 is introducing Reduced Redundancy Storage, which will lower the price of data stored. The price drops from $0.15 per GB to only $0.10 per GB.
If you’re a Windows Home Server user, check out our review of CloudBerry Online Backup 1.5 for WHS. Download CloudBerry Explorer Free for Amazon S3

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA Users to rapidly spawn databases on demand in the cloud. The configuration and structure of provisioned databases depends on the service template selected by the Self Service user when requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator has the option of hosting various service templates in the service catalog, based on underlying DBCA templates. Many times provisioned databases require production-scale data, either for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So, we need to populate the database using the post deployment script option, without any additional work for the SSA Users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook. In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data pump 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS, and you can even have your own method plugged in as part of the post deployment script option. Using Data Pump to populate databases These are the steps to be followed to implement extending DBaaS using the Data Pump methodology: The Production DBA should run a data pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1] -- Full export expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull Figure-1: Full export of database using data pump Create a post deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned -- Full import declare    h1   NUMBER; begin -- Creating the directory object where the source database dump is backed up.    execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile'''; -- Running import    h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');    dbms_datapump.set_parallel(handle => h1, degree => 1);    dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);    dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);    dbms_datapump.start_job(handle => h1);    dbms_datapump.detach(handle => h1); end; / Figure-2: Importing using data pump PL/SQL procedures Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles & their sizes The SSA Administrator should customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the Customize “Create Database Deployment Procedure” flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure. Figure-3: Using the Custom Script option for calling the import SQL Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
Using Transportable tablespaces to populate databases A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces: The Production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from Big Endian to Little Endian or vice versa] based on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script (a rough sketch appears after this excerpt) that shows how to find out whether any conversion is required, describes the steps required to convert datafiles, and backs up the tablespace. The SSA Administrator should copy the database (tablespaces) backup datafiles and export dumps to the backup location accessible from the hosts participating in the database zone(s). Create a post deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be brought in by the post deployment script. The SSA Administrator should customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step. More Information: Database-as-a-Service on Exadata Cloud Podcast on Database as a Service using Oracle Enterprise Manager 12c Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide DBaaS Cookbook Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal
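
    Since the sample scripts referenced above are not reproduced in this excerpt, here is a minimal sketch of the conversion check and tablespace export; the tablespace name (users_ts), directories and target platform string are assumptions:

      # Compare endian formats of the source and target platforms.
      sqlplus -s / as sysdba <<'EOF'
      SELECT platform_name, endian_format FROM v$transportable_platform;
      EOF
      # Make the tablespace read only, then export its metadata.
      sqlplus -s / as sysdba <<'EOF'
      ALTER TABLESPACE users_ts READ ONLY;
      EOF
      expdp system DIRECTORY=data_pump_dir DUMPFILE=tts_users.dmp \
          TRANSPORT_TABLESPACES=users_ts LOGFILE=tts_exp.log
      # If the endian formats differ, convert the datafiles with RMAN:
      rman target / <<'EOF'
      CONVERT TABLESPACE users_ts
        TO PLATFORM 'Linux x86 64-bit'
        FORMAT '/backup/%U';
      EOF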

    Read the article

  • What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12

    - by Tony Berk
    There is plenty to SEE for CRM during OpenWorld in San Francisco, September 30 - October 4! As I mentioned in my earlier post about some of the keynote sessions, Is There a Cloud Over OpenWorld?, I'm going to try to highlight some key sessions to help you find the best sessions for you. Interested in finding out where Oracle CRM products are headed? Then find your "roadmap" session. Here are some of the sessions in the CRM Track that you might want to consider attending for products you currently own or might consider for the future. I think you'll agree, there is quite a bit of investment going on across Oracle CRM. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity. General Session: Oracle Fusion CRM—Improving Sales Effectiveness, Efficiency, and Ease of Use (Session ID: GEN9674) - Oct 2, 11:45 AM - 12:45 PM. Anthony Lye, Senior VP, Oracle, leads this general session focused on Oracle Fusion CRM. Oracle Fusion CRM optimizes territories, combines quota management and incentive compensation, integrates sales and marketing, and cleanses and enriches data—all within a single application platform. Oracle Fusion can be configured, changed, and extended at runtime by end users, business managers, IT, and developers. Oracle Fusion CRM can be used from the Web, from a smartphone, from Microsoft Outlook, or from an iPad. Deloitte, sponsor of the CRM Track, will also present key concepts on CRM implementations. Oracle Fusion Customer Relationship Management: Overview/Strategy/Customer Experiences/Roadmap (CON9407) - Oct 1, 3:15PM - 4:15PM. In this session, learn how Oracle Fusion CRM enables companies to create better sales plans, generate more quality leads, and achieve higher win rates and find out why customers are adopting Oracle Fusion CRM. Gain a deeper understanding of the unique capabilities only Oracle Fusion CRM provides, and learn how Oracle’s commitment to CRM innovation is driving a wide range of future enhancements. Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM - 11:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers. Siebel CRM Overview, Strategy, and Roadmap (CON9700) - Oct 1, 12:15PM - 1:15PM. The world’s most complete CRM solution, Oracle’s Siebel CRM helps organizations differentiate their businesses. Come to this session to learn about the Siebel product roadmap and how Oracle is committed to accelerating the pace of innovation and value for its customers on this platform. Additionally, the session covers how Siebel customers can leverage many Oracle assets such as Oracle WebCenter Sites; InQuira, RightNow, and ATG/Endeca applications, and Oracle Policy Automation in conjunction with their current Siebel investments. Oracle Fusion Social CRM Strategy and Roadmap: Future of Collaboration and Social Engagement (CON9750) - Oct 4, 11:15 AM - 12:15 PM.
Social is changing the customer experience! Come find out how Oracle can help you know your customers better, encourage brand affinity, and improve collaboration within your ecosystem. This session reviews Oracle’s social media solution and shows how you can discover hidden insights buried in your enterprise and social data. Also learn how Oracle Social Network revolutionizes how enterprise users work, collaborate, and share to achieve successful outcomes. Oracle CRM On Demand Strategy and Roadmap (CON9727) - Oct 1, 10:45AM - 11:45AM. Oracle CRM On Demand is a powerful cloud-based customer relationship management solution. Come to this session to learn directly from Oracle experts about future product plans and hear how Oracle is committed to accelerating the pace of innovation and value to its customers. Knowledge Management Roadmap and Strategy (CON9776) - Oct 1, 12:15PM - 1:15PM. Learn how to harness the knowledge created as a natural byproduct of day-to-day interactions to lower costs and improve customer experience by delivering the right answer at the right time across channels. This session includes an overview of Oracle’s product roadmap and vision for knowledge management for both the Oracle RightNow and Oracle Knowledge (formerly InQuira) product families. Oracle Policy Automation Roadmap: Supercharging the Customer Experience (CON9655) - Oct 1, 12:15PM - 1:15PM. Oracle Policy Automation delivers rapid customer value by streamlining the capture, analysis, and deployment of policies across every facet of the customer experience. This session discusses recent Oracle Policy Automation enhancements for policy analytics; the latest Oracle Policy Automation Connector for Siebel; and planned new capabilities, including availability with the Oracle RightNow product line. There is much more, so stay tuned for more highlights or check out the Content Catalog and search for your areas of interest. Which session are you most interested in? Make your suggestions! But no voting for Pearl Jam or Kings of Leon. Those are after hours! 

    Read the article

  • What developer conferences are you going to this year?

    - by mbcrump
    This short list is what I consider to be the “cream-of-the-crop” in developer conferences. This is also a list of the conferences that I plan on attending in 2011. If you feel your conference is just as good, then shoot me an email at michael[at]michaelcrump[dot]net, and if possible I will check it out.   In-Person Event Las Vegas on April 18th-22nd, 2011 Redmond on October 17th-21st, 2011 Orlando on December 5th-9th, 2011 Visual Studio Live – I attended this event in November of last year and blogged about my experience. I am also planning on going back to the Orlando session in December of this year. So what did I like the most about this event? Being able to interact one-on-one with a majority of the speakers. If you read my blog post then you will see a list of the speakers that I met up with. I also made a lot of great connections with other professional developers all over the world. They are having an event in Las Vegas on April 18th-22nd. I noticed at this event that they have added a new track on mobile. Being a big fan of mobile, I feel that this is a great move. They also have a great selection for Silverlight Developers including Billy Hollis and Rocky Lhotka. For the full lineup of conference tracks, sessions and speakers visit http://bit.ly/VSLiveTrks. If you are interested in this then you can register here by February 16th. I must add that you can save $300 by getting the early-bird special.   Virtual Conference SSWUG (DBTechCon) holds the largest virtual conference in the information technology industry. It is also special to me because they selected a majority of my Silverlight content for the April conference. No traveling fees, and all of the sessions are recorded so you can watch them on-demand for $189 (early-bird special). For the entire speaker list, click here. The session list has also been published. If you are interested in this then you can register here.   In-Person Event Knoxville, TN on June 3rd/4th 2011. Codestock.org – If you live in the South then you have heard of CodeStock. To my knowledge, they have only had 3 events so far and they were a huge success. It was such a success that after the last event, everyone was telling me how good it was and how much they enjoyed it. They currently have a call for speakers going on right now, so if you have sessions then be sure to submit yours. So, what makes them stand out? Well for starters Michael Neal (organizer) developed an open API so conference attendees could build their own apps for the sessions. They also encouraged their speakers to go to other sessions instead of staying in a “speaker room”. Another cool feature is that they are uploading videos from the conference so everyone can benefit. They are currently looking for sponsorship, so help out if you can.   In-Person Event Redmond, WA on October 28/29 2011 *NOT 100% SURE AT THIS POINT* PDC 11 – OK, so the logo should be pdc11 but it’s not out yet. This event is located on Microsoft’s campus in Redmond, WA. It is probably one of the most well-known conferences for developers to attend. One of the big perks from this event is that you typically come away with free stuff. In 2010 they gave away Windows Phone 7 handsets. I remember years earlier they gave away laptops. This of course isn’t the only reason to go; you may get to tour the Microsoft campus. Since PDC is a huge event, you can view all the sessions online for free. Mike Taulty created a nice Silverlight application that consumes the OData feed. You can download it here.
If everything goes as planned, I will be at all of these events. If you plan on going, send me a tweet and we will do lunch or dinner. I love meeting new developers and talking .NET.

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies but that won't be discussed here. There were a few challenges that arose during design and implementation: Naive algorithms had an unreasonable upper bound of computational cost. There was significant complexity associated with configurations where the member count varied significantly between physical machines. Most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple). A custom PAS may need to solve several problems simultaneously, such as: Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions) Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded). Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, to ensure that each partition is collocated with its corresponding message queue). Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions) Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure) Achieving the above goals while ensuring that partition movement is minimized. These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances. 
The most obvious general purpose partition assignment algorithm (possibly the only general purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general purpose algorithms don't exist, but the key takeaway from this is that algorithms will tend to either have exorbitant worst case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.

    Read the article

  • How to get the MSOnlineBackup Cmdlets?

    - by gregpakes
    I am trying to manage Azure Online Backup from PowerShell. There is a set of cmdlets called MSOnlineBackup. See technet.microsoft.com/en-us/library/dn249523.aspx. When I try to run: Import-Module MSOnlineBackup I get: The specified module 'MSOnlineBackup' was not loaded because no valid module file was found in any module directory. On the TechNet page it states that it is included in 4.0.4.0. If I run: $PSVersionTable.PSVersion It returns: Major: 4 Minor: 0 Build: -1 Revision: -1 I am running Windows 8.1. As you can probably tell, I am no PowerShell expert. I have also tried installing the Azure Backup Agent, but it says it needs a Server operating system. Can anyone tell me how I can get the MSOnlineBackup module on my machine so I can start automating Windows Azure Backup?

    Read the article

  • Can't re-install OS on Asus Eee PC 1018P

    - by cornerback84
    My friend has an Asus Eee PC 1018P. It has no CD/DVD drive (neither does he have a USB CD/DVD drive). The OS wasn't working properly, so we decided to restore the system using the provided OS backup from the HD. But midway through the installation it was interrupted and the computer was restarted (not a hardware or software issue – we did it). Now we cannot run the backup, and we also tried to install Windows 7 through USB, but as soon as we start to install the OS, it says that some device driver is missing. It turns out that the OS needs a USB device driver to continue. It has USB 3.0 – maybe that's why the OS needs the driver. I tried disabling 3.0 and enabling 2.0, but then it does not boot from the USB drive for some reason. We are stuck with this. The backup doesn't run, and when booting from USB, it says that it needs a device driver. Anyone has any idea what we could do?

    Read the article

  • Can’t start MySQL on Ubuntu 12.04 after restoring from innobackupex

    - by RAH
    I can’t start MySQL on Ubuntu 12.04 after restoring a backup with innobackupex. Before I tried to restore the db from backup, I moved the datadir and got the same problem. With help from Google I fixed the problem and got MySQL started. Ready to set up my new slave, I restored the backup via innobackupex --copy-path /db/mysql, and now I can’t start MySQL. I am sure of the following: In my.cnf the datadir = /db/mysql The new datadir is chowned to mysql:mysql. The /etc/apparmor.d/usr.sbin.mysqld contains: #/var/lib/mysql/ r, #/var/lib/mysql/** rwk, /db/mysql r, /db/mysql** rwk, AND /var/run/mysqld/mysqld.pid w, /var/run/mysqld/mysqld.sock w, /run/mysqld/mysqld.pid w, /run/mysqld/mysqld.sock w, /var/log/syslog gives me the following info: http://pastebin.com/1TQGsaBH What am I missing? Thanks.
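
    A hedged suggestion appended here, not from the original question: the usual innobackupex restore sequence uses --apply-log and --copy-back (not --copy-path), and Ubuntu's AppArmor globs normally need a slash before the ** wildcard. A sketch, assuming the backup directory is /backup/latest:

      # Replay the InnoDB transaction log so the backup is consistent.
      innobackupex --apply-log /backup/latest
      # Copy files into the datadir configured in my.cnf (/db/mysql here).
      innobackupex --copy-back /backup/latest
      chown -R mysql:mysql /db/mysql
      # AppArmor: the directory rule needs a trailing slash and the glob
      # needs a slash before **, e.g.:
      #   /db/mysql/ r,
      #   /db/mysql/** rwk,
      apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld
      service mysql start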

    Read the article

  • Unexpected server restart - Windows 2003 SP2 fully patched

    - by PCTech
    I'm having problems with a server that has been restarting itself randomly for the past 3 months. The server is a Windows 2003 SP2 domain controller and it is fully patched. I have seen the following errors in the event log: Source: USER32 Category: None Type: Information Event ID: 1074 User: Domain\Administrator The process winlogon.exe has initiated the restart of computer (server name) on behalf of user domainname\Administrator for the following reason: No title for this reason could be found Reason Code: 0x840000ff Shutdown Type: restart I have run out of ideas as to what might be causing this issue. The system is clean and not infected. There are no scheduled tasks responsible for the restart either. I'm considering moving the backup (Backup Exec 12.5) to a different server, but I'm almost certain that this is not the issue, as the restart times vary and do not match the scheduled backup jobs. Any suggestions to help me resolve this issue would be appreciated, thanks.

    Read the article

  • Nagios Terminal Services check?

    - by jldugger
    Most of our servers are licensed for 2 concurrent remote desktop sessions. This is fine so long as everyone does their administrative task and logs off, but some people accidentally close sessions (disconnecting while remaining logged in) instead. I know you can force someone off with the right admin tools, but it's a bit ugly and may hurt productivity, or maybe even the server(?). I was thinking that a nightly Nagios check of available remote sessions, nagging the offenders, would help enforce discipline on the subject. Can anyone recommend a service check that can monitor Terminal Services availability?
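
    One hedged approach: if the servers already run NSClient++, the stock check_nt plugin can read the performance counter Windows keeps for sessions, and Nagios can alert when a box sits at its two-session limit. A sketch (the port, thresholds, and the exact counter name, which is locale-dependent, are assumptions to verify in perfmon first):

        /usr/local/nagios/libexec/check_nt -H server01 -p 12489 -v COUNTER \
            -l "\\Terminal Services\\Active Sessions" -w 1 -c 2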

    Read the article

  • Restore content database in SharePoint Server 2007

    - by Boris
    I have a site collection set up on a web app running at port 80. I have made a backup of the site collection content DB using the stsadm.exe tool. Now I want to restore that backup as the new content DB of a different site collection – the one on the web app running at port 500. I have done the following: created the backup; created a new web app at port 500 (I did not create a site collection for this web app); removed the content DB of that new web app using Central Administration; and ran stsadm.exe -o addcontentdb -url webapp-at-port-500 -databasename. The command completes successfully, but when I check the Content Database page for that web app, it says the Number of Sites is 0! Also, when I try to open http://webapp-at-port-500, I get an error saying that the webpage cannot be found. Could anyone please help me? It's driving me crazy. Thanks.
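
    Worth noting: stsadm -o backup at site-collection scope produces a .dat file whose counterpart is stsadm -o restore; addcontentdb only attaches a SQL Server database that already exists. A sketch of the backup/restore pairing, with hypothetical URLs and paths:

        stsadm.exe -o backup -url http://server:80 -filename C:\backups\sitecoll.dat
        stsadm.exe -o restore -url http://server:500 -filename C:\backups\sitecoll.dat -overwrite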

    Read the article

  • Issues with Rsync on a NAS

    - by Daniel Fischer
    I'm trying to rsync a few external hard drives over to my new NAS, a Synology DS412+, but I'm noticing it's painfully slow. I was doing it by mounting the backup folder over AFP on a Mac, and was told this may be the wrong way to do it. I recently turned on "network backup" on the Synology and am now running rsync over SSH, like: rsync -ar --progress . admin@localip:/backup/path. Is this the right way to do it now? Will it be faster? Is there something else I can do to make it faster? Edit: now that I run it, I'm getting a ton of "failed to set permissions" and "failed to set times" errors. What do I do?
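
    Two hedged things to try: the "failed to set permissions/times" noise usually means the login isn't allowed to chmod/chown on the NAS side, so dropping -a's permission and ownership flags (keeping recursion, symlinks, and times) quiets it; and the "network backup" service also exposes an rsync daemon, which avoids the SSH encryption overhead on a small NAS CPU. Sketches (the IP and module name are assumptions):

        # over ssh, without trying to set owner/group/permissions on the NAS
        rsync -rltv --progress ./ admin@192.168.1.10:/backup/path/
        # or via the rsync daemon that "network backup" enables
        rsync -rltv --progress ./ rsync://admin@192.168.1.10/NetBackup/path/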

    Read the article

  • Program shortcuts disappearing in Windows Mobile 2003, any way to get them back?

    - by Carlisle White
    I have a WM2003 device with some programs installed on it, and a full backup created and saved to an SD card. If the device runs out of charge for some time (or the battery is removed), everything is reset back to defaults, so the custom programs and configs are gone. When this happens I restore the full backup to put everything back to normal again. But I've recently installed TomTom Navigator 7, and for some reason its shortcut in the "Programs" section is not saved when creating a full backup (with the eBackup app provided), and the installation doesn't create a shortcut on the main screen (as version 6 used to do). Is there any way to make this shortcut persistent? Is there any way to create custom shortcuts in the Programs section or on the main screen (preferably)? Thank you very much for your help; anything is welcome.
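
    On the custom-shortcut question: Pocket PC shortcuts are tiny text files with a .lnk extension dropped into \Windows\Start Menu\Programs. A sketch of the contents (the target path here is a guess, and the leading number is nominally the character count of everything after the #, which many devices do not check strictly):

        43#"\Storage Card\TomTom\TomTom Navigator.exe"

    Saving that as, say, \Windows\Start Menu\Programs\TomTom.lnk should recreate the Programs entry by hand.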

    Read the article

  • crontab not executing all lines

    - by kiasecto
    I have a root crontab (edited via sudo crontab -e) like this to sync time:

        # m h dom mon dow command
        0 6 * * * ntpdate 10.3.3.3 >> /var/mylog/ntp.log
        0 7 * * * /var/mylog/backup.sh >> /var/mylog/backup.log

    The problem I am having is that the first line (ntpdate) never seems to execute. If I run it manually with sudo, that line works. cron does run backup.sh at 7, but it never executes the ntp sync at 6. The syslog doesn't seem to show anything. The system is Ubuntu 10.04 LTS.
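
    A likely culprit: cron runs jobs with a minimal PATH (typically /usr/bin:/bin), and ntpdate lives in /usr/sbin, so the shell never finds it; and since stderr isn't redirected, the "command not found" message vanishes. A sketch of the fix (confirm the location with `which ntpdate` first):

        0 6 * * * /usr/sbin/ntpdate 10.3.3.3 >> /var/mylog/ntp.log 2>&1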

    Read the article

  • Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table

    - by Imagineer
    I'm getting the above-mentioned error when backing up with ZRM, which uses mysqldump:

        mysqldump --opt --extended-insert --single-transaction --create-options --default-character-set=utf8 --user=" " -p --all-databases "/nfs/backup/mysql01/dailyrun/20091216043001/backup.sql"
        mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table TICKET_ATTACHMENT at row: 2286

    I have increased 'max_allowed_packet' to 1G in /etc/my.cnf, which is the server setting, and on the client side I set it by running: mysql -u -p --max_allowed_packet=1G. I have verified that the client and server values match. This checks the client-side value, per the forum posting http://forums.mysql.com/read.php?35,75794,261640:

        mysql> SELECT @@MAX_ALLOWED_PACKET;
        +----------------------+
        | @@MAX_ALLOWED_PACKET |
        +----------------------+
        |           1073741824 |
        +----------------------+
        1 row in set (0.00 sec)

    And this checks the server-side setting:

        mysql> SHOW VARIABLES LIKE 'max_allowed_packet';
        +--------------------+------------+
        | Variable_name      | Value      |
        +--------------------+------------+
        | max_allowed_packet | 1073741824 |
        +--------------------+------------+

    I have run out of ideas; searching Experts Exchange and Google hasn't turned up a working solution. Reference: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html. Anyone please advise, thank you.
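
    One detail that commonly bites here: a max_allowed_packet set inside the mysql client does not carry over to mysqldump, which reads its own value from the command line or the [mysqldump] section of my.cnf. A hedged sketch of passing it explicitly:

        mysqldump --opt --single-transaction --max_allowed_packet=1G \
            --all-databases > /nfs/backup/mysql01/dailyrun/20091216043001/backup.sql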

    Read the article

  • SQL Maintenance Cleanup Task reports 'Success' but does not delete files

    - by Seph
    I have a maintenance plan set up for the databases on a server (SQL Server 2008). Part of the backup is a Maintenance Cleanup Task. The task that 'succeeds' is configured as:

        Delete backup files
        Correct folder (same path as the backup task)
        File extension: bak (NOT .bak)
        Delete files older than: 20 hour(s)

    I have other similar cleanup tasks in the same maintenance plan which work fine, and this plan has worked fine in the past. I just noticed that last night it reported 'success' and the rest of the plan continued, yet the file from 2 days ago still remains. I have checked similar questions such as this one, but they don't apply, as my maintenance task worked fine two days ago and for the past several weeks.
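
    A useful diagnostic is running the procedure the cleanup task wraps, xp_delete_file, by hand: it only deletes files it can verify as genuine backups, and skips the rest without reporting an error. A sketch via sqlcmd (the server name, folder, and cutoff date are assumptions):

        sqlcmd -S myserver -E -Q "EXECUTE master.dbo.xp_delete_file 0, N'D:\Backups\', N'bak', N'2012-06-10T00:00:00', 1"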

    Read the article

  • Simultaneous read/write to RAID array slows server to a crawl

    - by Jeff Leyser
    Fairly beefy NFS/SMB server (32GB RAM, 2 quad-core Xeons) with an LSI MegaRAID 8888ELP controlling 12 drives configured into 3 different arrays. Five 2TB drives are grouped into a RAID 6 array. As expected, write performance to the array is slow. However, sustained simultaneous read/write to the array (whether through NFS or done locally) practically blocks any other access to anything else on the controller. For example, if I do cp /home/joe/BigFile /home/joe/BigFileCopy, where BigFile is 20G, then even a simple ls /home/jane takes many tens of seconds to complete. An ls /backup also takes many tens of seconds, even though /backup is a different array on the same controller. As soon as the cp is done, everything is back to normal. cp /home/joe/BigFile /backup/BigFile does not exhibit this behavior; it only happens when reading and writing to the same array.
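
    Two hedged experiments help separate controller contention from Linux page-cache writeback, which can stall unrelated I/O once a 20G copy fills the dirty-page buffer: shrink the dirty thresholds, and retry the copy with O_DIRECT so it bypasses the cache entirely (the values below are illustrative, not tuned):

        sysctl -w vm.dirty_ratio=5 vm.dirty_background_ratio=2
        dd if=/home/joe/BigFile of=/home/joe/BigFileCopy bs=1M iflag=direct oflag=direct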

    Read the article

  • How do I make more available space on a Time Machine hard disk?

    - by Daryl Spitzer
    I upgraded my MacBook Pro to Snow Leopard and made some other changes that have caused my next Time Machine backup to be quite large. Before the upgrade my backup drive had filled up, so Time Machine was deleting old backups to make room for new ones. When Time Machine started the first backup after the upgrade, it displayed a message that it was freeing up space, but it wasn't able to free up enough. (The disk has 320 GB capacity.) How can I free up more space on the disk (without reformatting or deleting all the existing backups)? I don't want to recklessly delete files and risk confusing Time Machine.
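
    For what it's worth, individual old snapshots can be removed without touching the rest: on 10.6 via the gear menu's "Delete Backup" inside the Time Machine browser, and on 10.7 or later from the command line with tmutil. A sketch of the latter (the volume and snapshot names are illustrative):

        sudo tmutil delete "/Volumes/TM Backup/Backups.backupdb/MacBookPro/2009-06-01-120000"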

    Read the article

< Previous Page | 118 119 120 121 122 123 124 125 126 127 128 129  | Next Page >