Search Results

Search found 761 results on 31 pages for 'tail'.

Page 15/31 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • How do you re-mount an ext3 fs read-write after it gets mounted read-only from a disk error?

    - by cagenut
    It's a relatively common problem: when something goes wrong in a SAN, ext3 detects the disk write errors and remounts the filesystem read-only. That's all well and good, only once the SAN is fixed I can't figure out how to re-re-mount the filesystem read-write without rebooting. Behold:

      [root@localhost ~]# multipath -ll
      mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
      [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
      \_ round-robin 0 [prio=2][active]
       \_ 1:0:0:1 sdb 8:16 [active][ready]
       \_ 2:0:0:1 sdc 8:32 [active][ready]
      [root@localhost ~]# mount /dev/mapper/mpath0 /mnt/foo
      [root@localhost ~]# touch /mnt/foo/blah

    All good. Now I yank the LUN out from under it:

      [root@localhost ~]# touch /mnt/foo/blah
      [root@localhost ~]# touch /mnt/foo/blah
      touch: cannot touch `/mnt/foo/blah': Read-only file system
      [root@localhost ~]# tail /var/log/messages
      Mar 18 13:17:33 localhost multipathd: sdb: tur checker reports path is down
      Mar 18 13:17:34 localhost multipathd: sdc: tur checker reports path is down
      Mar 18 13:17:35 localhost kernel: Aborting journal on device dm-2.
      Mar 18 13:17:35 localhost kernel: Buffer I/O error on device dm-2, logical block 1545
      Mar 18 13:17:35 localhost kernel: lost page write due to I/O error on dm-2
      Mar 18 13:17:36 localhost kernel: ext3_abort called.
      Mar 18 13:17:36 localhost kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
      Mar 18 13:17:36 localhost kernel: Remounting filesystem read-only

    It only thinks it's read-only; in reality it's not even there:

      [root@localhost ~]# multipath -ll
      sdb: checker msg is "tur checker reports path is down"
      sdc: checker msg is "tur checker reports path is down"
      mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
      [size=1.1T][features=0][hwhandler=0][rw]
      \_ round-robin 0 [prio=0][enabled]
       \_ 1:0:0:1 sdb 8:16 [failed][faulty]
       \_ 2:0:0:1 sdc 8:32 [failed][faulty]
      [root@localhost ~]# ll /mnt/foo/
      ls: reading directory /mnt/foo/: Input/output error
      total 20
      -rw-r--r-- 1 root root 0 Mar 18 13:11 bar

    How it still remembers that 'bar' file being there... mystery, but not important right now. Now I re-present the LUN:

      [root@localhost ~]# tail /var/log/messages
      Mar 18 13:23:58 localhost multipathd: sdb: tur checker reports path is up
      Mar 18 13:23:58 localhost multipathd: 8:16: reinstated
      Mar 18 13:23:58 localhost multipathd: mpath0: queue_if_no_path enabled
      Mar 18 13:23:58 localhost multipathd: mpath0: Recovered to normal mode
      Mar 18 13:23:58 localhost multipathd: mpath0: remaining active paths: 1
      Mar 18 13:23:58 localhost multipathd: dm-2: add map (uevent)
      Mar 18 13:23:58 localhost multipathd: dm-2: devmap already registered
      Mar 18 13:23:59 localhost multipathd: sdc: tur checker reports path is up
      Mar 18 13:23:59 localhost multipathd: 8:32: reinstated
      Mar 18 13:23:59 localhost multipathd: mpath0: remaining active paths: 2
      Mar 18 13:23:59 localhost multipathd: dm-2: add map (uevent)
      Mar 18 13:23:59 localhost multipathd: dm-2: devmap already registered
      [root@localhost ~]# multipath -ll
      mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
      [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
      \_ round-robin 0 [prio=2][enabled]
       \_ 1:0:0:1 sdb 8:16 [active][ready]
       \_ 2:0:0:1 sdc 8:32 [active][ready]

    Great, right? It says [rw] right there. Not so fast:

      [root@localhost ~]# touch /mnt/foo/blah
      touch: cannot touch `/mnt/foo/blah': Read-only file system

    OK, it doesn't do it automatically; I'll just give it a little push:

      [root@localhost ~]# mount -o remount /mnt/foo
      mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only

    The hell you are:

      [root@localhost ~]# mount -o remount,rw /mnt/foo
      mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only

    Noooooooooo. I have tried all sorts of different mount/tune2fs/dmsetup commands and I cannot figure out how to un-flag the block device as write-protected. Rebooting will fix it, but I'd much rather do it online. An hour of googling has gotten me nowhere either. Save me, ServerFault.
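
    An untested sketch of what one might try before rebooting, assuming the device-mapper node itself got flagged write-protected (the commands are standard util-linux; the device and mount point are the ones from the question):

      blockdev --getro /dev/mapper/mpath0   # prints 1 if the kernel has the node marked read-only
      blockdev --setrw /dev/mapper/mpath0   # clear the write-protect flag on the block device
      mount -o remount,rw /mnt/foo          # then retry the read-write remount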

    Read the article

  • s3fs Input/output error

    - by shadow_of__soul
    I'm trying to set up a backup system with s3fs and the Amazon S3 service. I followed these two guides:

      http://qugstart.com/blog/linux/how-to-mount-an-amazon-s3-bucket-as-virtual-drive-on-centos-5-2/
      http://blog.eberly.org/2008/10/27/how-i-automated-my-backups-to-amazon-s3-using-rsync/

    Anyway, tailing /var/log/messages I get:

      Aug 28 13:37:46 server s3fs:###response=403

    I already tried creating the authentication file at /etc/passwd-s3fs and setting the access and private key there, as well as passing them through the command line. I checked the credentials several times, and I used them with S3Fox, where they work. I also set the time of the machine (with the date command) to be the same as the Amazon S3 servers (I got the S3 server's time by uploading a file with the file manager). It's not only rsync that doesn't work; commands like ls or cp in /mnt/s3 don't work either. Any help on how I can solve/debug this? Regards, Shadow.
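
    A sketch of the usual checklist for a 403 from s3fs; the bucket name mybucket and the key values are placeholders, passwd_file is a standard s3fs option:

      # /etc/passwd-s3fs format is ACCESS_KEY_ID:SECRET_ACCESS_KEY on one line
      echo 'AKIAXXXXXXXXXXXX:secretXXXXXXXXXXXX' > /etc/passwd-s3fs
      chmod 600 /etc/passwd-s3fs      # s3fs refuses credential files readable by others
      ntpdate pool.ntp.org            # S3 also returns 403 on clock skew; resync properly
      s3fs mybucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs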

    Read the article

  • xf86OpenConsole: Cannot find a free VT: Invalid argument

    - by Oliver Seeliger
    I've set up an Ubuntu 12.04 container from the precreated OpenVZ template. The host system is configured as follows:

      # cat /etc/issue
      Debian GNU/Linux 6.0
      # uname -a
      Linux openvz-02 2.6.32-16-pve #1 SMP Fri Nov 9 11:42:51 CET 2012 x86_64 GNU/Linux
      # apt-cache showpkg proxmox-ve-2.6.32
      Package: proxmox-ve-2.6.32
      # tail -n 3 /etc/apt/sources.list
      # PVE packages provided by proxmox.com
      deb http://download.proxmox.com/debian squeeze pve

    For a software project I need a minimal X server and followed the instructions at https://help.ubuntu.com/community/ServerGUI. I simply installed the package xorg (xorg 1:7.6+7ubuntu7.1). Now when I startx I get an error message. The complete output:

      # startx
      X.Org X Server 1.11.3
      Release Date: 2011-12-16
      X Protocol Version 11, Revision 0
      Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu
      Current Operating System: Linux www 2.6.32-16-pve #1 SMP Fri Nov 9 11:42:51 CET 2012 x86_64
      Kernel command line: quiet
      Build Date: 29 August 2012 12:12:33AM
      xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support)
      Current version of pixman: 0.24.4
      Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
      Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
      (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 20 08:46:04 2012
      (==) Using system config directory "/usr/share/X11/xorg.conf.d"
      Fatal server error:
      xf86OpenConsole: Cannot find a free VT: Invalid argument
      Please consult the The X.Org Foundation support at http://wiki.x.org for help.
      Please also check the log file at "/var/log/Xorg.0.log" for additional information.
      ddxSigGiveUp: Closing log
      Server terminated with error (1). Closing log file.
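
    An aside, offered as an assumption: this error usually means the environment exposes no kernel virtual terminals for X to grab, which is the normal state inside an OpenVZ container. A quick check, plus one common workaround using a virtual framebuffer instead of a real VT (display number and client are placeholders):

      ls /dev/tty[0-9]*                  # a container typically has no real VTs to list
      apt-get install xvfb               # X in a virtual framebuffer needs no VT at all
      Xvfb :1 -screen 0 1024x768x16 &
      DISPLAY=:1 your-application        # hypothetical client started against the Xvfb display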

    Read the article

  • Can't start mysql - mysql respawning too fast, stopped

    - by Tom
    Today I did a fresh install of Ubuntu 12.04 and went about setting up my local development environment. I installed MySQL and edited /etc/mysql/my.cnf to optimise InnoDB, but when I try to restart mysql, it fails with an error:

      [20:53][tom@Pochama:/var/www/website] (master) $ sudo service mysql restart
      start: Job failed to start

    The syslog reveals there is a problem with the init script:

      > tail -f /var/log/syslog
      Apr 28 21:17:46 Pochama kernel: [11840.884524] type=1400 audit(1335644266.033:184): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=760 comm="apparmor_parser"
      Apr 28 21:17:47 Pochama kernel: [11842.603773] init: mysql main process (764) terminated with status 7
      Apr 28 21:17:47 Pochama kernel: [11842.603841] init: mysql main process ended, respawning
      Apr 28 21:17:48 Pochama kernel: [11842.932462] init: mysql post-start process (765) terminated with status 1
      Apr 28 21:17:48 Pochama kernel: [11842.950393] type=1400 audit(1335644268.101:185): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=811 comm="apparmor_parser"
      Apr 28 21:17:49 Pochama kernel: [11844.656598] init: mysql main process (815) terminated with status 7
      Apr 28 21:17:49 Pochama kernel: [11844.656665] init: mysql main process ended, respawning
      Apr 28 21:17:50 Pochama kernel: [11845.004435] init: mysql post-start process (816) terminated with status 1
      Apr 28 21:17:50 Pochama kernel: [11845.021777] type=1400 audit(1335644270.173:186): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=865 comm="apparmor_parser"
      Apr 28 21:17:51 Pochama kernel: [11846.721982] init: mysql main process (871) terminated with status 7
      Apr 28 21:17:51 Pochama kernel: [11846.722001] init: mysql respawning too fast, stopped

    Any ideas? Things I tried already: I googled and found an Ubuntu bug with AppArmor (https://bugs.launchpad.net/ubuntu/+source/mysql-5.5/+bug/970366), so I changed AppArmor from enforce mode to complain mode:

      sudo apt-get install apparmor-utils
      sudo aa-complain /usr/sbin/mysqld
      sudo /etc/init.d/apparmor reload

    but it didn't help; I still can't start mysql. I also thought the issue might be that the InnoDB logfiles were a different size than MySQL was expecting, so I removed the InnoDB log files before restarting using:

      sudo mv /var/lib/mysql/ib_logfile* /tmp

    No luck though. Workaround: I re-installed 12.04 and made sure not to touch /etc/mysql/my.cnf in any way. MySQL is working, so I can get on with what I need to do, but I will need to edit it at some point. Hopefully I'll have figured out a solution, or this question will have been answered, by then...
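
    A sketch of how one might surface the actual startup error instead of upstart's generic "status 7" (standard MySQL-on-Ubuntu paths; nothing here is specific to this machine):

      sudo -u mysql /usr/sbin/mysqld        # run the daemon in the foreground; the real complaint reaches stderr/the error log
      /usr/sbin/mysqld --print-defaults     # show exactly which options the edited my.cnf feeds it
      tail -n 50 /var/log/mysql/error.log   # the error log usually names the offending InnoDB setting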

    Read the article

  • Queued Loadtest to remove Concurrency issues using Shared Data Service in OpenScript

    - by stefan.thieme(at)oracle.com
    Queued Processing to remove Concurrency issues in Loadtest Scripts

    Some scripts act on information returned by the server, e.g. act on the first item in the returned list of pending tasks/actions. This may lead to concurrency issues if the virtual users simulated in a load test scenario are not synchronized in some way. As the load test cases should be carried out in a comparable and straightforward manner, simply cancelling a transaction when a collision occurs is clearly not an option. If you increase the number of virtual users, that approach would lead to a high number of requests for the early steps in your transaction (e.g. login, retrieve list of action points, assign an action point to the virtual user), but later steps would be rarely visited successfully, or at all, depending on the application logic.

    A way to tackle this problem is to enqueue the virtual users in a Shared Data Service queue. Only the first virtual user in this queue will be allowed to carry out the critical steps (retrieve list of action points, assign an action point to the virtual user) in your transaction at any one time. Once a virtual user has passed the critical path it will dequeue itself from the head of the queue and continue with its actions. In theory this allows virtual users to run all steps of the transaction that are not part of the critical path in parallel. In practice this is rarely the case, and it does not allow adding more than N users to a transaction without causing delays from virtual users waiting in the queue, N being the total transaction time divided by the sum of the times of all critical steps in the transaction. This can be circumvented by allowing multiple queues to act on individual segments of the list of actions (e.g. a per-country filter, an "ends with 0..9" filter, etc.), but that would require additional handling of these queues, and of slots for the virtual users at their heads, to keep access to the first element of the server's list mutually exclusive at any one time during the load test. Such improved handling of multiple queues and/or multiple slots is beyond the scope of this paper.

    Shared Data Services Pre-Requisites

    Start WebLogic Server to host Shared Data Services. Make sure that your WebLogic Server is installed and started. Shared Data Services may not work if you installed only the minimal installation package for OpenScript; if you installed the default package including OLT and OTM, follow the instructions below to start and verify the WebLogic installation. To start the WebLogic Server deployed underneath Oracle Load Testing and/or Oracle Test Manager, go to your Start menu, Oracle Application Testing Suite, and select the Restart Oracle Application Testing Suite Application Service entry from the Tools submenu. To verify the service has started, run the Microsoft Management Console for Services by selecting Run from the Start menu and entering services.msc. Look for the entry that reads Oracle Application Testing Suite Application Service; once its status has changed from Starting to Started you can proceed to verify the login. Note that this may take several minutes, I would say up to 10 depending on your CPU horsepower.

    Verify WebLogic Server user credentials

    Next, open the Oracle WebLogic Server Administration Console at http://localhost:8088/console. It may take a while until the application is deployed and started on the fly. Afterwards you can log in using the username oats and the password that you selected at install time for Application Testing Suite administrative purposes. This brings up the Home page of your WebLogic Server, which already verifies that you can log in with these credentials. If you want to check the details, navigate to Security Realms, myrealm, Users and Groups tab. Here you could add users to your WebLogic Server for use in the later steps; details on the Groups required for such a custom user are beyond this quick overview and should be worked out with the WebLogic Server Administration Guide in mind.

    Shared Data Services pre-requisites for Load testing

    OpenScript Preferences have to be set to enable Encryption and to provide a default Shared Data Service Connection for Playback. These are the pre-requisites for load testing with Shared Data Services. Please note that usage of the Connection Parameters (individual directives in the script) for Shared Data Services did not play back reliably in the current version 9.20.0370 of Oracle Load Testing (OLT), and encryption of credentials still seemed to be mandatory as well.

    General Encryption settings: select OpenScript Preferences from the View menu and navigate to the General, Encryption entry in the tree on the left. Select the Encrypt script data option from the list and enter the same password that you used for securing your WebLogic Server Administration Console.

    Enable global shared data access credentials: select OpenScript Preferences from the View menu and navigate to the Playback, Shared Data entry in the tree on the left. Enable the global shared data access credentials and enter the Address, User name and Password determined for the WebLogic Server hosting Shared Data Services. Note that you may want to replace localhost in Address with the host's real name if you plan to run load tests with load test agents on remote systems.

    Queued Processing of Transactions

    Enable the Shared Data Services Module in Script Properties. The Shared Data Services Module has to be enabled for each script that wants to employ the Shared Data Service queue functionality in OpenScript. It can be enabled under the Script menu by selecting Script Properties; on the Script Properties dialog select the Modules section and check Shared Data. Checking the Shared Data option effectively adds a line to your script code that attaches the sharedData ScriptService to your IteratingVUserScript class:

      @ScriptService oracle.oats.scripting.modules.sharedData.api.SharedDataService sharedData;

    Record your script. Record your script as usual, then add the following pieces for queue handling: in the Initialize code block, before the first step and after the last step of your critical path, and in the Finalize code block. The Java code for each location is explained in full detail in the following sections.

    Create a Shared Data Queue in Initialize. To create a Shared Data Queue, go to the Java view of your script and enter the following statements in the initialize() code block:

      info("Create queueA with life time of 120 minutes");
      sharedData.createQueue("queueA", 120);

    This creates an instance of the Shared Data Queue object named queueA which is maintained for up to 120 minutes. If you want to use the code for multiple scripts, make sure to use a different queue name for each one, here and in the subsequent steps. You may even consider using a dynamic queue name based on the filters of the result list being concurrently accessed.

    Prepare a unique id for each Iteration. In order to keep track of individual virtual users in our queue, we need to create a unique identifier from the virtual user id and the username in use, right after retrieving the next record from our databank file:

      getDatabank("Usernames").getNextDatabankRecord();
      getVariables().set("usernameValue1",
          "VU_{{@vuid}}_{{@iterationnum}}_{{db.Usernames.Username}}_{{@timestamp}}_{{@random(10000)}}");
      String usernameValue = getVariables().get("usernameValue1");
      info("Now running virtual user " + usernameValue);

    As you can see from the code block above, we set the OpenScript variable usernameValue1 to a concatenation of the virtual user id and the iteration number for general uniqueness, plus the username from our databank, the timestamp and a random number to make it further unique and to ease the spotting of errors. Not all of these fields are strictly required for uniqueness; adding the queue name may also be considered, to help troubleshoot multiple queues. The value is then retrieved with the getVariables().get() method call and assigned to the usernameValue String used throughout the script. Note that moving the getDatabank("Usernames").getNextDatabankRecord() call to the initialize block was later considered, to remove the concurrency of multiple virtual users running with the same userid and therefore accessing the same "My Inbox" in step 6. This effectively gives each virtual user a userid from the databank file; make sure you have enough userids to remove this second hurdle.

    Enqueue and attend Queue before Critical Path. To maintain the right order of virtual users being allowed into the critical path of the transaction, the following pseudo step has to be added in front of the first critical step. In this example that is right in front of the step where we retrieve the list of actions from which we select the first to be assigned to us:

      beginStep("[0] Waiting in the Queue", 0);
      {
          info("Enqueued virtual user " + usernameValue + " at the end of queueA");
          sharedData.offerLast("queueA", usernameValue);
          info("Wait until the user is the first in queueA");
          String queueValue1 = null;
          do {
              // we wait for at least 0.7 seconds before we check the head of the
              // queue. This is the time it takes one user to move through the
              // critical path, i.e. pass steps [5] Enter country and [6] Assign to me
              Thread.sleep(700);
              queueValue1 = (String) sharedData.peekFirst("queueA");
              info("The first user in queueA is currently: '" + queueValue1 + "' "
                  + queueValue1.getClass() + " length " + queueValue1.length());
              info("The current user is '" + usernameValue + "' " + usernameValue.getClass()
                  + " length " + usernameValue.length()
                  + ": indexOf " + usernameValue.indexOf(queueValue1)
                  + " equals " + usernameValue.equals(queueValue1));
          } while (queueValue1.indexOf(usernameValue) < 0);
          info("Now the user is the first in queueA");
      }
      endStep();

    This enqueues the username at the tail of our queue, waits for at least 700 milliseconds (the time it takes one user to exit the critical path) and then compares the head of our queue with its username. This last step is repeated while the two are not equal (indexOf less than zero). Once they are equal, indexOf yields zero or larger and we perform the critical steps.

    Dequeue after Critical Path. After the virtual user has left the critical path and completed its last step, the following code block dequeues it. In our example this is right after the action has actually been assigned to the virtual user; it allows the next virtual user to retrieve the list of actions still available and in turn make its own selection/assignment:

      info("Get and remove the current user from the head of queueA");
      String pollValue1 = (String) sharedData.pollFirst("queueA");

    The current user is removed from the head of the queue. The next one will now be able to match its username against the head of the queue.

    Clear and Destroy Queue for Finish. When the script has completed, it should clear and destroy the queue. This code block can be put in the finish block of your script and/or in a separate script, in order to clear and remove the queue in case you have spotted an error or want to reset the queue for some reason:

      info("Clear queueA");
      sharedData.clearQueue("queueA");
      info("Destroy queueA");
      sharedData.destroyQueue("queueA");

    The users waiting in queueA are cleared and the queue is destroyed; any scripts still executing will be caught in a loop. I found it better to maintain a separate Reset Queue script containing only the following code in its initialize() block, which I call to make sure the queue is cleared between load test runs. It could even be added as the first script in a larger scenario, executing only once at the very start of the load test to make sure the queues contain no stale entries:

      info("Create queueA with life time of 120 minutes");
      sharedData.createQueue("queueA", 120);
      info("Clear queueA");
      sharedData.clearQueue("queueA");

    This creates a Shared Data Queue instance of queueA and clears all entries from it.

    Monitoring Queue. While creating the scripts it was useful to monitor the contents, i.e. the current first user, of the queue. The following code in the initialize() block makes sure the Shared Data Queue is accessible:

      info("Create queueA with life time of 120 minutes");
      sharedData.createQueue("queueA", 120);

    In the run() block the following code continuously monitors the first element of the queue and writes an informational message with the current usernameValue to the Result window:

      info("Monitor the first users in queueA");
      String queueValue1 = null;
      do {
          queueValue1 = (String) sharedData.peekFirst("queueA");
          if (queueValue1 != null)
              info("The first user in queueA is currently: '" + queueValue1 + "' "
                  + queueValue1.getClass() + " length " + queueValue1.length());
      } while (true);

    This script can be run from OpenScript in parallel with a load test performed by Oracle Load Testing. However, it is not recommended to run it in a production load test, as the performance impact is unknown. Accessing the queue's head with the peekFirst() method has been reported at about 2 seconds response time by both OpenScript and OLT. It is advised to log a Service Request to see if this could be lowered in future releases of Application Testing Suite, as pollFirst(), and even offerLast() writing to the tail of the queue, usually returned after 0.1 seconds on average.

    Debugging Queue. While debugging the scripts it was useful to remove single entries from the head of the queue, i.e. the current first user. The following code in the initialize() block makes sure the Shared Data Queue is accessible:

      info("Create queueA with life time of 120 minutes");
      sharedData.createQueue("queueA", 120);

    In the run() block the following code removes the first element of the queue and writes an informational message with it to the Result window:

      info("Get and remove the current user from the head of queueA");
      String pollValue1 = (String) sharedData.pollFirst("queueA");
      info("The first user in queueA was currently: '" + pollValue1 + "' "
          + pollValue1.getClass() + " length " + pollValue1.length());

    References

      Oracle Functional Testing OpenScript User's Guide, Version 9.20 [E15488-05], Chapter 17, "Using the Shared Data Module". http://download.oracle.com/otn/nt/apptesting/oats-docs-9.21.0030.zip
      Oracle Fusion Middleware Oracle WebLogic Server Administration Console Online Help, 11g Release 1 (10.3.4) [E13952-04], "Manage users and groups". http://download.oracle.com/docs/cd/E17904_01/apirefs.1111/e13952/taskhelp/security/ManageUsersAndGroups.htm

    Read the article

  • Getting error "No address associated with hostname: mod_unique_id: unable to find IPv4 address of "z

    - by Eedoh
    Hello. I'm trying to set up video surveillance system using ip cameras and zonealarm on Arch Linux. I set up fixed ip address, I've managed to get streams from cameras, etc. However, after restart of the machine, I can not start apache again. I checked configuration of rc.conf, and saw that static ip configuration has been deleted, and also secondary nameserver in resolv.conf. Tried to re-write these with correct parameters, but now with no effect. This is tail of my /var/log/httpd/error_log file, after /etc/rc.d/httpd restart attempt [Fri Jan 29 04:20:45 2010] [alert] (EAI 5)No address associated with hostname: mod_unique_id: unable to find IPv4 address of "zmhost" Configuration failed Anybody has an idea how could I fix this??

    Read the article

  • Running a scheduled task as SYSTEM with console window open

    - by raoulsson
    I am auto creating scheduled tasks with this line within a batch windows script: schtasks /Create /RU SYSTEM /RP SYSTEM /TN startup-task-%%i /TR %SPEEDWAY_DIR%\%TARGET_DIR%%%i\%STARTUPFILE% /SC HOURLY /MO 1 /ST 17:%%i1:00 I wanted to avoid using specific user credentials and thus decided to use SYSTEM. Now, when checking in the taskmanagers process list or, even better, directly with the C:\> schtasks command itself, all is working well, the tasks are running as intended. However in this particular case I would like to have an open console window where I can see the log flying by. I know I could use C:\> tail -f thelogfile.log if I installed e.g. cygwin (on all machines) or some proprietary tools like Baretail on Windows. But since I only switch to these machines in case of trouble, I would prefer to start the scheduled task in such a way that every user immediately sees the log. Any chance? Thanks!

    Read the article

  • Apache error_log repeated attempts to access forum.php

    - by bMon
    About every two seconds I am getting this in my /var/log/httpd/error_log file:

      [Sat Feb 19 19:00:01 2011] [error] [client 69.239.204.217] script '/var/www/html/forum.php' not found or unable to stat
      [Sat Feb 19 19:00:04 2011] [error] [client 69.239.204.217] File does not exist: /var/www/html/404.shtml

    Sometimes the request will be for forum_asp.php instead. I'm assuming it's a bot trying to access insecure forum files, but I'm not so sure, since each request appears to come from a unique IP rather than a few rogue IPs hitting it consecutively, and the whois results for the IPs aren't all the classic ISPs in Russia or China; they look more like end-user addresses (Comcast, etc.). Any insight into what's going on here would be appreciated. Also, any techniques people use to do a "live monitor" of web traffic would be appreciated. Right now I'm doing a:

      tail -f error_log

    Thanks.
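
    A small sketch for sizing up the probe traffic with standard tools (log paths are the ones from the question; the grep pattern assumes the stock Apache "[client x.x.x.x]" error_log format):

      # count distinct client IPs asking for the missing forum scripts
      grep forum /var/log/httpd/error_log | grep -o 'client [0-9.]*' | sort | uniq -c | sort -rn | head
      # live view of errors and access hits side by side
      tail -f /var/log/httpd/error_log /var/log/httpd/access_log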

    Read the article

  • mount network drive

    - by CaptnLenz
    Since I updated my Ubuntu to Natty Narwhal (from 10.04), my mount script doesn't work anymore. The script mounts a folder from a NAS (WD My Book World) on the local network onto a folder in my home directory. It looked like this:

      #!/bin/bash
      sudo mount //192.168.2.222/Public/Shared\ Music/ /home/simon/Musik/

    The error:

      mount: wrong fs type, bad option, bad superblock on //192.168.2.222/Public/Shared Music/,
      missing codepage or helper program, or other error
      (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program)
      Sometimes the syslog provides valuable information, try dmesg | tail or so

    Now, because the script doesn't work anymore, I decided to add the mount to my fstab, since the network drive should be mounted on every startup. My fstab entry looks like this:

      //192.168.2.222/Public/Shared\ Music/ /home/simon/Musik cifs credentials=/home/simon/.smbcredentials 0 0

    But it doesn't work either; I get a message during the startup process that Musik couldn't be mounted. Are there any log files I can check for errors? The system is a freshly installed 11.04. Greetings.
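
    Two observations worth checking, since both symptoms fit known pitfalls: the "helper program" hint usually means the mount.cifs binary is missing, and /etc/fstab cannot take a backslash-space; spaces there must be octal-escaped as \040. A sketch:

      sudo apt-get install cifs-utils    # provides /sbin/mount.cifs (older releases used the smbfs package)
      # fstab line with the space in the share name encoded as \040:
      //192.168.2.222/Public/Shared\040Music /home/simon/Musik cifs credentials=/home/simon/.smbcredentials 0 0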

    Read the article

  • Nginx with postfix not sending mail - from address appearing wrong

    - by Adripants
    I am using a PHP form to send email. The script reports success, but the mail never arrives. The tail of the mail log shows:

      Nov 22 01:24:25 contra postfix/pickup[1195]: 0CC1B119A53: uid=100 from=<nginx>
      Nov 22 01:24:25 contra postfix/cleanup[1320]: 0CC1B119A53: message-id=<[email protected]>
      Nov 22 01:24:25 contra postfix/qmgr[1196]: 0CC1B119A53: from=<[email protected]>, size=363, nrcpt=1 (queue active)

    Just wondering where this from address is coming from, and if that's why mails aren't arriving.
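
    The from=<nginx> most likely reflects the Unix account the web server submitted the mail as, since PHP's mail() passed no explicit envelope sender (its fifth argument can pass one via -f). A sketch for following that exact message through Postfix; the queue ID is the one from the log above:

      postqueue -p               # is the message still sitting in the queue, and with what error?
      postcat -q 0CC1B119A53     # dump the headers and body of that queue entry
      tail -f /var/log/mail.log  # watch the delivery attempt live (log path varies by distro)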

    Read the article

  • How can I transfer files to a Kindle Fire with a Micro-USB cable?

    - by Jeff
    I'm running Ubuntu 11.10, and when I connect my Kindle Fire to my computer via Micro-USB, it is not recognized automatically. Other USB devices, such as my iPod and digital camera, are recognized just fine. It does not appear to be a USB power issue, since the Kindle Fire wakes up from sleeping when it is plugged in. I never get the message on the Kindle telling me it is ready to accept files from the computer, though. Here are the last 15 lines of dmesg after plugging the Kindle in:

      jeff@prime:~$ dmesg | tail -n 15
      [45918.269671] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
      [45929.072149] wlan0: no IPv6 routers present
      [46743.224217] usb 1-1: new high speed USB device number 5 using ehci_hcd
      [46743.364623] scsi8 : usb-storage 1-1:1.0
      [46744.366102] scsi 8:0:0:0: Direct-Access Amazon Kindle 0001 PQ: 0 ANSI: 2
      [46744.366356] scsi: killing requests for dead queue
      [46744.372494] scsi: killing requests for dead queue
      [46744.384510] scsi: killing requests for dead queue
      [46744.392348] scsi: killing requests for dead queue
      [46744.392731] scsi: killing requests for dead queue
      [46744.396853] scsi: killing requests for dead queue
      [46744.397214] scsi: killing requests for dead queue
      [46744.400795] scsi: killing requests for dead queue
      [46744.401589] sd 8:0:0:0: Attached scsi generic sg2 type 0
      [46744.407520] sd 8:0:0:0: [sdb] Attached SCSI removable disk

    And here are my mounted filesystems:

      jeff@prime:~$ df
      Filesystem           1K-blocks      Used Available Use% Mounted on
      /dev/sda1            298594984 174663712 108763480  62% /
      udev                   1407684         4   1407680   1% /dev
      tmpfs                   566924       896    566028   1% /run
      none                      5120         0      5120   0% /run/lock
      none                   1417308       300   1417008   1% /run/shm
      /home/jeff/.Private  298594984 174663712 108763480  62% /home/jeff

    I should note that, since I got Dropbox working on my Kindle, the USB connection is no longer strictly necessary, but as a matter of principle I'd love to get it working.
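
    The log shows the kernel does attach the Kindle as removable disk sdb; it simply never gets mounted. A speculative next step is to try the mount by hand (the mount point is arbitrary, and whether the volume lives on the bare device or a partition is a guess to be checked with fdisk):

      sudo fdisk -l /dev/sdb              # does the device carry a partition table at all?
      sudo mkdir -p /media/kindle
      sudo mount /dev/sdb1 /media/kindle  # or /dev/sdb if no partition is listed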

    Read the article

  • v4l - capture and watch at the same time

    - by John Barrett
    Capturing v4l and line-in audio using mencoder works very well, but I would like to record real-time gameplay video from consoles plugged into the video card. I've used xawtv for this (it works quite well, and can preview and record in real time), but when I enable any deinterlacing or aspect-ratio options the video fails to record, so I have to record raw and re-encode the video with the appropriate filters later to get something workable. Other things I have tried:

      - tvtime with xvidcap and JACK audio capture: xvidcap drops frames, and muxing the audio is impossible as it goes out of sync (I have not found muxer options that force a correct frame rate).
      - mencoder capture to file, attempting to pipe the tail of the file to mplayer: mencoder works great, but piping the file is far too heavy to attempt gameplay.

    Soooo, v4l capture and preview simultaneously, recommendations?
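
    For reference, a sketch of a mencoder invocation that applies deinterlacing at record time, which would avoid the raw-then-re-encode pass; the norm, capture device names and bitrate are assumptions to adapt:

      mencoder tv:// \
        -tv driver=v4l2:device=/dev/video0:norm=NTSC:forceaudio:adevice=hw.0 \
        -vf yadif \
        -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=4000 \
        -oac mp3lame \
        -o capture.avi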

    Read the article

  • Trying to get MythTV working in Kubuntu 10.10

    - by user4109
    I'm trying to get MythTV working in Kubuntu. Unfortunately I've got the following problem: if I fire up the MythTV frontend and select "Watch TV", a "Please wait..." label appears and after a while the screen falls back to the home screen. tail -f /var/log/mythtv/mythfrontend.log prints out the following:

      2010-10-14 19:22:18.809 MythContext: Connecting to backend server: 127.0.0.1:6543 (try 1 of 1)
      2010-10-14 19:22:18.811 Using protocol version 23056
      2010-10-14 19:22:22.641 TV: Attempting to change from None to WatchingLiveTV
      2010-10-14 19:22:22.641 MythContext: Connecting to backend server: 127.0.0.1:6543 (try 1 of 1)
      2010-10-14 19:22:22.642 Using protocol version 23056
      2010-10-14 19:22:22.715 Spawning LiveTV Recorder -- begin
      2010-10-14 19:22:26.563 Spawning LiveTV Recorder -- end
      2010-10-14 19:22:26.565 ProgramInfo(): Updated pathname '':'' -> '1005_20101014192226.mpg'
      2010-10-14 19:22:26.569 We have a playbackURL(/var/lib/mythtv/livetv/1005_20101014192226.mpg) & cardtype(MPEG)
      2010-10-14 19:22:33.070 RingBuf(/var/lib/mythtv/livetv/1005_20101014192226.mpg): Invalid file (fd -1) when opening '/var/lib/mythtv/livetv/1005_20101014192226.mpg'.
      2010-10-14 19:22:33.072 We have a RingBuffer

    Then there is a whole bunch of these...

      2010-10-14 19:22:33.186 RingBuf(/var/lib/mythtv/livetv/1005_20101014192226.mpg) error: Invalid file descriptor in 'safe_read()'

    ...before it falls back to the main menu. I've got an MSI TV@Anywhere Plus tuner card (Philips Semiconductors SAA7131/SAA7133/SAA7135 Video Broadcast Decoder). Any idea what could be the problem?
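
    The fd -1 open failure suggests the recording file never gets created. A sketch of first checks on the LiveTV directory (the path is from the log; mythtv:mythtv being the owner is an assumption based on the usual Ubuntu packaging):

      ls -ld /var/lib/mythtv/livetv                 # does it exist, and who owns it?
      df -h /var/lib/mythtv                         # a full disk produces the same symptom
      sudo chown -R mythtv:mythtv /var/lib/mythtv   # if ownership turned out wrong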

    Read the article

  • Automatic desktop/work environment setup

    - by Alex
    I have this strange thing I am trying to do, so before I jump into it I was curious whether someone knows of an existing solution, or maybe has advice on the implementation. I run a small software company and, as it happens, I often do very different types of work. When I do coding for a Java project I need Eclipse running, maybe a VM with something like an ActiveMQ server or whatever, plus terminals to tail -F log files specific to the application, etc. When I do something like a weekly progress review with my team I need a few browser windows open and gedit to take notes, and so on. Depending on the type of work, I generally have all of the related apps open in multiple different workspaces; in the example above, Eclipse would be open in Workspace 1, the terminals would share Workspace 2, and so on. What I am trying to do is automate opening all of these applications, positioning them on the screen, and assigning them to the proper workspaces. My current idea is a shell script per type of work that launches the specific apps. Is there anything to aid this type of automation, or is shell scripting my only option at this point? My current system is Ubuntu 10.04.
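
    A sketch of how such a per-profile script could handle the workspace placement with wmctrl, a standard EWMH command-line tool; the window titles, geometry and log path are placeholders for illustration:

      #!/bin/bash
      # hypothetical "java work" profile: start the apps, let them map, then pin them
      eclipse &
      gnome-terminal --title=applogs -e 'tail -F /var/log/myapp.log' &   # log path is made up
      sleep 5                                # crude, but avoids racing the window manager
      wmctrl -r "Eclipse" -t 0               # move the Eclipse window to workspace 1
      wmctrl -r "applogs" -t 1               # move the log terminal to workspace 2
      wmctrl -r "applogs" -e 0,0,0,1280,400  # gravity,x,y,width,height placement
      wmctrl -s 0                            # finally, switch to workspace 1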

    Read the article

  • Apache error "No address associated with hostname" on Arch Linux (ZMLarch)

    - by Eedoh
    I'm trying to set up a video surveillance system using IP cameras and ZoneMinder on Arch Linux. I set up a fixed IP address, I've managed to get streams from the cameras, etc. However, after a restart of the machine, I cannot start Apache again. I checked the configuration in rc.conf and saw that the static IP configuration had been deleted, as had the secondary nameserver in resolv.conf. I tried to re-write these with the correct parameters, but now with no effect. This is the tail of my /var/log/httpd/error_log file after an /etc/rc.d/httpd restart attempt:

      [Fri Jan 29 04:20:45 2010] [alert] (EAI 5) No address associated with hostname: mod_unique_id: unable to find IPv4 address of "zmhost"
      Configuration failed

    Anybody have an idea on how I could fix this?
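
    The error itself is plain name resolution: mod_unique_id wants the machine's hostname ("zmhost") to resolve to an IPv4 address, and nothing resolves it. A sketch of a local fix, on the assumption that zmhost is this box's own hostname:

      getent hosts zmhost                       # confirm the lookup really fails
      echo '127.0.0.1  zmhost' >> /etc/hosts    # pin the name locally
      /etc/rc.d/httpd restart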

    Read the article

  • Does not recognize USB sticks and drives

    - by Peter
    When I connect any USB stick to my ThinkPad, Ubuntu 10.10 does not recognize it; I don't see anything on the desktop. The output of dmesg | tail -n10 gives me:

      [ 1965.696388] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 1965.884537] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 1966.072503] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 1966.260349] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 1966.506227] usb 1-1: new high speed USB device using ehci_hcd and address 9
      [ 1966.572375] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 1966.760379] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 1966.948358] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 1967.136335] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 1967.325423] hub 1-0:1.0: unable to enumerate USB device on port 1

    When connecting my USB scanner to the same port:

      [ 2008.480135] usb 1-1: new high speed USB device using ehci_hcd and address 65
      [ 2008.548389] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 2008.736786] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 2008.924379] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 2009.112348] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 2009.300443] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 2009.488536] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 2009.732180] usb 1-1: new high speed USB device using ehci_hcd and address 71
      [ 2014.796299] hub 1-0:1.0: unable to enumerate USB device on port 1
      [ 2018.000128] usb 2-1: new full speed USB device using uhci_hcd and address 3

    And Ubuntu 10.10 recognizes that scanner. So: what can I do to see my USB stick? BTW: on my other ThinkPad, running Fedora 14, it works perfectly...
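
    One reading of the log, offered as a guess: the scanner eventually succeeds because it falls back to full speed on the uhci_hcd controller, while the stick never gets past high-speed enumeration on ehci_hcd. Two standard checks, and a much-cited workaround that only applies if ehci_hcd is built as a module on this kernel:

      lsusb                       # does the stick show up at the bus level at all?
      tail -f /var/log/syslog     # watch events live while re-plugging the stick
      sudo modprobe -r ehci_hcd   # drop the port to full speed; if that works, EHCI is the culprit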

    Read the article

  • Linux refused to mount a valid partition

    - by greg
    My setup is a Linux box with one partition used through LVM; it has been working for years. I had a freeze, and after the reboot the partition cannot be mounted:

      mount -r -t ext3 /dev/pve/data /mnt/pve-data
      mount: wrong fs type, bad option, bad superblock on /dev/mapper/pve-data,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try dmesg | tail or so

    However, fsck doesn't see any problem with it:

      fsck.ext3 -fp /dev/pve/data
      /dev/pve/data: 3024076/60366848 files (0.6% non-contiguous), 156921642/241435648 blocks

    There's nothing in dmesg nor the syslog. I'm puzzled; what's wrong with my partition? Thanks in advance, greg. (Debian 5.0.10, LVM 2.02.39)
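
    A sketch of follow-ups with standard tools, since "nothing in dmesg" is the odd part here; the kernel normally logs the concrete reason right after a failed mount:

      dmesg | tail -n 20                       # run immediately after the mount attempt
      file -s /dev/pve/data                    # double-check what is actually on the volume
      dumpe2fs -h /dev/pve/data | head -n 20   # superblock sanity and feature flags the kernel might lack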

    Read the article

  • grub-efi refuses to chainload Windows 8.1

    - by Alexei Averchenko
    I installed LMDE (with GRUB in the MBR) after I installed Windows 8.1. I then installed the grub-efi package and added the custom Windows options:

      #!/bin/sh
      exec tail -n +3 $0
      menuentry "Windows" {
          search --fs-uuid --no-floppy --set=root A89A-7F4C
          chainloader (${root})/EFI/Boot/bkpbootx64.efi
      }
      menuentry "Windows (backup bootloader)" {
          search --fs-uuid --no-floppy --set=root A89A-7F4C
          chainloader (${root})/EFI/Microsoft/Boot/bkpbootmgfw.efi
      }

    These are basically a leftover from my older Ubuntu setup. However, GRUB is refusing to load them, complaining about an invalid signature. What do I do now?

    Read the article

  • Running a scheduled task as SYSTEM with console window open

    - by raoulsson
    I am auto-creating scheduled tasks with this line within a Windows batch script:

      schtasks /Create /RU SYSTEM /RP SYSTEM /TN startup-task-%%i /TR %SPEEDWAY_DIR%\%TARGET_DIR%%%i\%STARTUPFILE% /SC HOURLY /MO 1 /ST 17:%%i1:00

    I wanted to avoid using specific user credentials and thus decided to use SYSTEM. Now, when checking in the task manager's process list, or, even better, directly with the schtasks command itself, all is working well; the tasks are running as intended. However, in this particular case I would like to have an open console window where I can see the log flying by. I know I could use tail -f thelogfile.log if I installed e.g. Cygwin (on all machines) or some proprietary tool like BareTail on Windows. But since I only switch to these machines in case of trouble, I would prefer to start the scheduled task in such a way that every user immediately sees the log. Any chance? Thanks!

    Read the article

  • How to stop an infinitely running process (ztail) started by an SSH session after that session is closed

    - by Sanath Adiga
    I have a peculiar problem. My server supports multiple simultaneous SSH sessions, so that multiple admins can manage it at once. We have a command which calls ztail to show the compressed log files, and when the current SSH session is closed (without pressing Ctrl-C to stop the tail), the command should ideally stop working. But what I observed when I started a new SSH session is that the ztail process is still running in the background and consuming CPU, even though the previous session was closed. How can I determine when a session is closed, so that I can use that variable/flag to stop any commands initiated by the previously closed session?
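
    Normally the kernel delivers SIGHUP to the foreground job when the terminal goes away, so a ztail that survives is presumably ignoring or detached from HUP. A sketch of a cleanup that only reaps orphaned instances, i.e. those re-parented to init with no controlling terminal:

      # list ztail processes whose parent is init (ppid 1) and which lost their tty
      ps -eo pid,ppid,tty,comm | awk '$2 == 1 && $3 == "?" && $4 == "ztail" {print $1}'
      # same filter, but actually kill them (could run from cron)
      ps -eo pid,ppid,tty,comm | awk '$2 == 1 && $3 == "?" && $4 == "ztail" {print $1}' | xargs -r kill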

    Read the article

  • Postfix: error: unsupported dictionary type: mysql

    - by flavio.troja
    I have a problem with Postfix:

      # tail -f /var/log/mail.err
      Aug 20 17:57:50 myserver postfix/smtpd[8243]: error: unsupported dictionary type: mysql
      Aug 20 17:57:50 myserver postfix/smtpd[8243]: error: unsupported dictionary type: mysql
      Aug 20 17:58:05 myserver postfix/smtpd[8244]: error: unsupported dictionary type: mysql
      Aug 20 17:58:05 myserver postfix/smtpd[8244]: error: unsupported dictionary type: mysql
      Aug 20 18:00:38 myserver postfix/smtpd[8277]: error: unsupported dictionary type: mysql
      Aug 20 18:00:38 myserver postfix/smtpd[8277]: error: unsupported dictionary type: mysql
      Aug 20 18:03:32 myserver postfix/smtpd[8320]: error: unsupported dictionary type: mysql
      Aug 20 18:03:32 myserver postfix/smtpd[8320]: error: unsupported dictionary type: mysql
      Aug 20 18:03:33 myserver postfix/trivial-rewrite[8322]: error: unsupported dictionary type: mysql
      Aug 20 18:03:33 myserver postfix/trivial-rewrite[8322]: error: unsupported dictionary type: mysql

    Any idea?
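
    The message means this Postfix build has no MySQL map support compiled in or installed. A sketch of the check and the usual Debian/Ubuntu-style fix (the package name differs on other distributions):

      postconf -m                          # list the dictionary types this build actually supports
      sudo apt-get install postfix-mysql   # the mysql map type ships in its own package
      sudo postfix reload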

    Read the article

  • CentOS listen to everything on the wire

    - by Poni
    I know there's a native command on Linux that will output (to stdout) every "event" related to a certain network interface (be it eth0, etc.), like there's tail -f <file> to listen for file changes. I just can't find it. I want to see all events, incoming packets, even dropped ones, at the lowest level possible, in every protocol (TCP, UDP, etc.). I think Wireshark is a bit too big for this, as I need something very simple just to see the events; it's for testing. What's the command?
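
    The tool being described sounds like tcpdump, which ships with, or is one package away on, most distributions; a couple of starting points:

      tcpdump -i eth0 -n -vv                # print every frame on eth0, no name resolution, verbose
      tcpdump -i eth0 -n -s 0 -w out.pcap   # or capture full packets to a file for later inspection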

    Read the article

  • Retrieving a specific value from “df -h” using shell

    - by diegodias
    When I use df -h, I get the following output:

      Filesystem                       Size  Used Avail Use% Mounted on
      /dev/mapper/VolGroup00-LogVol00   59G  2.2G   54G   4% /
      /dev/sda1                        122M   38M   78M  33% /boot
      tmpfs                            1.1G     0  1.1G   0% /dev/shm
      10.10.0.105:/somepath             11T  8.4T  2.1T  81% /storage4
      10.11.0.101:/somepath             15T  8.9T  5.9T  61% /storage1
      /dev/mapper/patha                5.0T  255G  4.8T   5% /storage5_vol0
      /dev/mapper/pathb                5.0T  195G  4.9T   4% /storage5_vol1
      /dev/mapper/pathc                5.0T  608G  4.5T  12% /storage5_vol2

    I want to write a script that gets the value of the Avail column for a specific storage. I used to use:

      df -k /storage_name | tail -1 | awk '{print $3}'

    But the Filesystem column can have a value or not, which changes the field my script needs from $3 to $4. How can I get Avail with a single command line, even if there are no values in the previous columns?
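
    The column shift happens because plain df wraps the record onto two lines when the device name is long. POSIX output mode forces one record per line, which pins Avail to field 4 regardless of name length; a minimal sketch:

      # -P = POSIX format: never wraps, so Available is always the 4th field of line 2
      df -Pk /storage4 | awk 'NR == 2 {print $4}'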

    Read the article

  • Why not to use StackTrace to find what method called you

    - by Alex.Davies
    Our obfuscator, SmartAssembly, does some pretty crazy reflection. It's an obfuscator; it's sort of its job to do things in the most awkward way possible. But sometimes you can go too far. One such time is this little gem from the strings encoding feature:

      StackTrace stackTrace = new StackTrace();
      StackFrame frame = stackTrace.GetFrame(1);
      Type ownerType = frame.GetMethod().DeclaringType;

    It's designed to find the type where the calling method is defined. A user found that strings encoding occasionally broke on x64 systems. Very strange. After some debugging (thank god for Reflector Pro, it would be impossible to debug processed assemblies without it) I found that the ownerType I got back was wrong. The reason is that the x64 JIT does tail call optimisation. This saves space on the stack, and speeds things up, by throwing away a method's stack frame if the last thing it calls is the only thing returned. When this happens, the call to StackTrace faithfully tells you that the calling method is the one that called the one we really wanted. So using StackTrace isn't safe for anything other than debugging, and it will make your code fail in unpredictable ways. Don't use it!

    Read the article
