Search Results

Search found 19499 results on 780 pages for 'transaction log'.


  • SOA 11g Technology Adapters – ECID Propagation

    - by Greg Mally
    Overview

    Many SOA Suite 11g deployments include the use of the technology adapters for various activities including integration with FTP, database, and files to name a few. Although the integrations with these adapters are easy and feature rich, there can be some challenges from the operations perspective. One of these challenges is how to correlate a logical business transaction across SOA component instances. This correlation is typically accomplished via the execution context ID (ECID), but we lose the ECID correlation when the business transaction spans technologies like FTP, database, and files.

    A new feature has been introduced in the Oracle adapter JCA framework to allow the propagation of the ECID. This feature is available in the forthcoming SOA Suite 11.1.1.7 (PS6). The basic concept of propagating the ECID is to identify somewhere in the payload of the message where the ECID can be stored. Then two Binding Properties, relating to the location of the ECID in the message, are added to either the Exposed Service (left-hand side of the composite) or External Reference (right-hand side of the composite). This gives the JCA framework enough information to either extract the ECID from or add the ECID to the message. In the scenario of extracting the ECID from the message, the ECID will be used for the new component instance.

    Where to Put the ECID

    When trying to determine where to store the ECID in the message, you basically have two options:

    1. Add a new optional element to your message schema.
    2. Leverage an existing element that is not used in your schema.

    The best scenario is that you are able to add the optional element to your message, since trying to find an unused element will prove difficult in most situations. The schema element will hold the ECID value, which looks something like the following:

      11d1def534ea1be0:7ae4cac3:13b4455735c:-8000-00000000000002dc

    Configuring Composite Services/References

    Now that you have identified where you want the ECID to be stored in the message, the JCA framework needs to have this information as well. The two pieces of information that the framework needs relate to the message schema:

    1. The namespace for the element in the message.
    2. The XPath to the element in the message.

    To better understand this, let's look at an example based on a simple database table. When an Exposed Service is created for it via the Database Adapter Wizard in the composite, a schema is generated for the table. For this example, the two Binding Properties we add to the ReadRow service in the composite are:

      <!-- Properties for the binding to propagate the ECID from the database table -->
      <property name="jca.ecid.nslist" type="xs:string" many="false">
        xmlns:ns1="http://xmlns.oracle.com/pcbpel/adapter/db/top/ReadRow"</property>
      <property name="jca.ecid.xpath" type="xs:string" many="false">
        /ns1:EcidPropagationCollection/ns1:EcidPropagation/ns1:ecid</property>

    Notice that the property called jca.ecid.nslist contains the targetNamespace defined in the schema and the property called jca.ecid.xpath contains the XPath statement to the element. The XPath statement also contains the appropriate namespace prefix (ns1), which is defined in the jca.ecid.nslist property. When the Database Adapter service reads a row from the database, it will retrieve the ECID value from the payload and remove the element from the payload. When the component instance is created, it will be associated with the retrieved ECID, and the payload contains everything except the ECID element/value.
    The only time the ECID is visible is when it is stored safely in the resource technology like the database, a file, or a queue.

    Simple Database/File/JMS Example

    This section contains a simplified example of how the ECID can propagate through a database table, a file, and a JMS queue. The flow of this example (a single composite wiring these services together) is as follows:

    1. Invoke the database insert using the insertwithecidbpelprocess_client_ep Service.
    2. The InsertWithECIDBPELProcess adds a row to the database via the Database Adapter. The JCA Framework adds the ECID to the message prior to inserting.
    3. The ReadRow Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message.
    4. An instance of ReadRowBPELProcess is created and it is associated with the retrieved ECID.
    5. The ReadRowBPELProcess now writes the record to the file system via the File Adapter. The JCA Framework adds the ECID to the message prior to writing the message to file.
    6. The ReadFile Service retrieves the record from the file system and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message.
    7. An instance of ReadFileBPELProcess is created and it is associated with the retrieved ECID.
    8. The ReadFileBPELProcess now enqueues the message via the JMS Adapter. The JCA Framework adds the ECID to the message prior to enqueuing the message.
    9. The DequeueMessage Service retrieves the record and the JCA Framework extracts the ECID from the message. The ECID element is removed from the message.
    10. An instance of DequeueMessageBPELProcess is created and it is associated with the retrieved ECID.
    11. The logical flow ends.

    When viewing the Flow Trace in the Enterprise Manager, you will now see all the instances correlated via ECID. Please check back here when SOA Suite 11.1.1.7 is released for this example. With the example you can run it yourself and reinforce what has been shared in this blog via a hands-on experience. One final note: the contents of this blog may be included in the official SOA Suite 11.1.1.7 documentation, but you will still need to come here to get the example.

    Read the article

  • Sub-process /usr/bin/dpkg returned an error code (1)

    - by rohit
    Hey friends, I am getting the following error when I am trying to purge shorewall:

      root@aptosid:/etc# apt-get purge shorewall
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following packages will be REMOVED:
        shorewall*
      0 upgraded, 0 newly installed, 1 to remove and 3 not upgraded.
      1 not fully installed or removed.
      After this operation, 1,843 kB disk space will be freed.
      Do you want to continue [Y/n]?
      (Reading database ... 212702 files and directories currently installed.)
      Removing shorewall ...
      /etc/default/shorewall: 25: /etc/default/shorewall: :q: not found
      Stopping "Shorewall firewall": not done (check /var/log/shorewall-init.log).
      invoke-rc.d: initscript shorewall, action "stop" failed.
      dpkg: error processing shorewall (--purge):
       subprocess installed pre-removal script returned error exit status 1
      configured to not write apport reports
      Errors were encountered while processing:
       shorewall
      E: Sub-process /usr/bin/dpkg returned an error code (1)
      root@aptosid:/etc#

    Please help me out.
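
    The ":q" on line 25 of /etc/default/shorewall looks like a stray vi quit command that got saved into the file; the init script sources that file, chokes on it, and the package's pre-removal script then aborts. A hedged cleanup sketch (untested; the pre-removal edit is a last resort only):

      # Drop the stray ":q" line, sanity-check the file (it is sourced by sh), retry the purge.
      sudo sed -i '/^:q$/d' /etc/default/shorewall
      sudo sh -n /etc/default/shorewall && echo "syntax OK"
      sudo apt-get purge shorewall

      # Last-resort fallback if the pre-removal script still fails: neutralise it, then purge again.
      # sudo sh -c 'printf "#!/bin/sh\nexit 0\n" > /var/lib/dpkg/info/shorewall.prerm'
      # sudo apt-get purge shorewall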

    Read the article

  • RSH between servers not working

    - by churnd
    I have two servers: one CentOS 5.8 & one Solaris 10. Both are joined to my workplace AD domain via PBIS-Open. A user will log into the Linux server & run an application which issues commands over RSH to the Solaris server. Some commands are also run on the Linux server, so both are needed. Due to the application these servers are being used for (proprietary GE software), the software on the Linux server needs to be able to issue rsh commands to the Solaris server on behalf of the user (the user just runs a script & the rest is automatic). However, rsh is not working for the domain users. It does work for a local user, so I believe I have the necessary trust settings between the two servers correct. I can, however, rlogin as a domain user from the Linux server to the Solaris server. SSH works too (how I wish I could use it). Some relevant info:

    Via rlogin:

      [user@linux~]$ rlogin solaris
      connect to address 192.168.1.2 port 543: Connection refused
      Trying krb4 rlogin...
      connect to address 192.168.1.2 port 543: Connection refused
      trying normal rlogin (/usr/bin/rlogin)
      Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
      solaris%

    Via rsh:

      [user@linux ~]$ rsh solaris ls
      connect to address 192.168.1.2 port 544: Connection refused
      Trying krb4 rsh...
      connect to address 192.168.1.2 port 544: Connection refused
      trying normal rsh (/usr/bin/rsh)
      permission denied.
      [user@linux ~]$

    Relevant snippet from /etc/pam.conf on the Solaris server:

      #
      # rlogin service (explicit because of pam_rhost_auth)
      #
      rlogin  auth sufficient   pam_rhosts_auth.so.1
      rlogin  auth requisite    pam_lsass.so set_default_repository
      rlogin  auth requisite    pam_lsass.so smartcard_prompt try_first_pass
      rlogin  auth requisite    pam_authtok_get.so.1 try_first_pass
      rlogin  auth sufficient   pam_lsass.so try_first_pass
      rlogin  auth required     pam_dhkeys.so.1
      rlogin  auth required     pam_unix_cred.so.1
      rlogin  auth required     pam_unix_auth.so.1
      #
      # Kerberized rlogin service
      #
      krlogin auth required     pam_unix_cred.so.1
      krlogin auth required     pam_krb5.so.1
      #
      # rsh service (explicit because of pam_rhost_auth,
      # and pam_unix_auth for meaningful pam_setcred)
      #
      rsh     auth sufficient   pam_rhosts_auth.so.1
      rsh     auth required     pam_unix_cred.so.1
      #
      # Kerberized rsh service
      #
      krsh    auth required     pam_unix_cred.so.1
      krsh    auth required     pam_krb5.so.1
      #

    I have not really seen anything useful in either system log that seems to be directly related to the failed login attempt. I've tail -f'd /var/adm/messages on the Solaris box & /var/log/messages on the Linux box during the failed attempts & nothing shows up. Maybe I need to be doing something else?
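
    One thing that stands out in the pam.conf above: the rlogin stack includes pam_lsass (the PBIS/AD module) and password-based fallbacks, while the rsh stack has only pam_rhosts_auth plus pam_unix_cred, so an AD user has nothing to fall back on if the rhosts check fails. A hedged way to see which module is actually denying the request is to turn up auth logging on the Solaris side (paths are the usual Solaris 10 ones; untested here):

      # Log auth.debug to a dedicated file, reload syslogd, then retry "rsh solaris ls".
      printf 'auth.debug\t/var/adm/authlog\n' >> /etc/syslog.conf
      touch /var/adm/authlog && chmod 600 /var/adm/authlog
      svcadm refresh system-log        # Solaris 10: make syslogd re-read its config
      tail -f /var/adm/authlog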

    Read the article

  • Error mounting an encrypted partition

    - by indiajoe
    Using Disk Utility in Ubuntu 11.04, I had encrypted a partition with a passphrase. Each time I clicked on the partition to mount it, it would ask me for the passphrase and get mounted. All was fine until I installed 12.04. After the installation, this encrypted partition disappeared from the menu.

      fdisk -l /dev/sda

    shows the encrypted partition in the list:

      /dev/sda7   298953648   488392064    94719208+   7  HPFS/NTFS/exFAT

    I tried the following commands to mount it, but they all gave errors:

      $ sudo cryptsetup luksDump /dev/sda7
      Device /dev/sda7 is not a valid LUKS device.

      $ ecryptfs-unwrap-passphrase /dev/sda7
      Passphrase:      # I entered the correct passphrase here
      Error: Unwrapping passphrase failed [-5]
      Info: Check the system log for more information from libecryptfs

      $ grep ecryptfs /var/log/syslog
      Oct 31 22:43:51 benny ecryptfs-unwrap-passphrase: Error attempting to open [/dev/sda7] for reading
      Nov  1 01:28:02 benny ecryptfs-unwrap-passphrase: Error attempting to open [/dev/sda7] for reading
      Nov  1 01:29:06 benny ecryptfs-unwrap-passphrase: Error attempting to open [/dev/sda7] for reading

    I don't understand why I am getting "Device /dev/sda7 is not a valid LUKS device." Could it be due to some corruption in the partition table? Is there any way to recover this encrypted partition? Thanks, indiajoe
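
    Two things worth checking before trying anything destructive: ecryptfs-unwrap-passphrase expects a wrapped-passphrase file, not a block device, so that particular error is expected either way; the more telling question is whether a LUKS header (which Disk Utility in 11.04 normally creates) is still present on /dev/sda7. A read-only sketch:

      sudo blkid /dev/sda7                 # what signature does the partition carry now?
      sudo cryptsetup isLuks /dev/sda7 && echo "LUKS header present" || echo "no LUKS header found"
      sudo file -s /dev/sda7               # shows the first bytes of the partition

    If no LUKS header is found, the partition-table entry (type 7, HPFS/NTFS/exFAT) may simply point at the wrong offset, and a tool like testdisk can help locate the original partition boundaries.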

    Read the article

  • Issue with Reporting Services - rsreportserver.config is denied

    - by Gabe
    The error message says access to 'd:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportServer\RSReportServer.config' is denied. I open the SSRS configuration tool and I do see the service as started and running. I gave Network Service the necessary permissions on the ReportServer folder and the config file. SQL Server and SSRS are using LocalSystem as the 'Log On As' account in SQL Server Configuration Manager. Most of the sites I googled seem only relevant when it's Network Service running. I changed it to log on as NT Authority\NetworkService but still had the same issue. In SSRS Configuration Manager, I see that Web Service Identity is not configured (red), though it is showing as read-only domain\aspnet. Windows Server 2003 R2 and SSRS 2005.

    Read the article

  • How to find the /dev name of my USB device

    - by mustafa
    I am running Ubuntu 11 on VMware on Windows XP. I want to format an SD card in Ubuntu, but I can't figure out which /dev/xxx device the SD card is. I plug the card into the built-in socket of my laptop. I "safely remove" the device in Windows. Then, I "connect" the PCMCIA reader in VMware. Now, I was expecting to see a new device like /dev/sdx, but it doesn't appear :( How can I find the name of my USB device and mount it?

    /var/log/messages is empty. Here is the output of dmesg:

      [ 5268.927308] usb 2-1: new full speed USB device number 12 using uhci_hcd

    And here are the last lines of /var/log/syslog:

      Oct 31 18:51:21 ubuntu kernel: [ 5268.927308] usb 2-1: new full speed USB device number 12 using uhci_hcd
      Oct 31 18:51:21 ubuntu mtp-probe: checking bus 2, device 12: "/sys/devices/pci0000:00/0000:00:11.0/0000:02:00.0/usb2/2-1"
      Oct 31 18:51:21 ubuntu mtp-probe: bus: 2, device: 12 was not an MTP device
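
    Note that the dmesg line only shows a USB device arriving - there is no usb-storage/sd attachment after it, so the guest may not actually be seeing the reader as a disk yet (VMware has to pass it through as a mass-storage device). Assuming it does attach, these are the usual ways to spot the new node:

      dmesg | tail -n 20                     # look for "Attached SCSI removable disk" / sdX lines
      sudo fdisk -l                          # a new /dev/sdX (or /dev/mmcblkX for SDHCI readers) should appear
      ls -l /dev/disk/by-id/ | grep -i usb   # stable names for USB-attached storage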

    Read the article

  • Strange permission errors with Windows Server 2008

    - by Spirit
    I just don't know a better way to describe this issue that is driving me nuts. I am trying to establish a test domain with virtual machines on a box that has Win7 with VMware Workstation installed. The purpose of this domain is so that we can try and test different situations before they go into the production network. I built a VM with WinSrv2008R2 and I am using that VM as a template to make other servers for the domain by making clones of it.

    I raise a DC with one clone and a member server with another clone, and I add the server to the domain. I am following a standard procedure as always (it is not my first domain). Then I make an admin account and add it as a member of the Domain Admins and Enterprise Admins groups. That admin has full privileges on the DC - no problem there. But on the other server it has... somewhat half the privileges, and I can't log in via RDP. I tried with another account - same issues.

    For example (with half the privileges): I can't open the Event Viewer if I go via Start - Administrative Tools - Event Viewer, but I can open the Event Viewer via Server Manager. You can notice this on the image below. I mean WTF??? I am going crazy; I haven't experienced anything similar in my three years of expertise, and I have already lost 3 days troubleshooting this. Could this be related to the cloning? Perhaps if I make fresh installs of WinSrv2008 there won't be any problems? I've raised test domains as VMs on other occasions before, and there weren't any problems then. This is VMware Workstation 8. I've made clones before; on Workstation 7 there weren't any problems. Anyone have any ideas?

    UPDATE: This is the info from the event log when I try to access via RDP:

      An account failed to log on.
      Subject:
        Security ID:    NULL SID
        Account Name:   -
        Account Domain: -
        Logon ID:       0x0
      Logon Type:       3
      Account For Which Logon Failed:
        Security ID:    NULL SID
        Account Name:   pat.coleman
        Account Domain: lab
      Failure Information:
        Failure Reason: Domain sid inconsistent.
        Status:         0xc000006d
        Sub Status:     0xc000019b

    Read the article

  • Exim - Sender verify failed - rejected RCPT

    - by Newtonx
    While checking Exim's log messages I found many entries with the following: "Sender verify failed" / "rejected RCPT". I'm not an Exim expert, and I'm afraid Exim is not delivering 100% of emails to recipients, because our email marketing application is getting a lower open rate. Can someone help me understand these log messages? Is it my server saying "No Such User Here" or a remote server? 174.111.111.11 represents my server's IP. Thanks.

    Exim log:

      2010-10-02 14:00:19 SMTP connection from myserverdomain.com.br () [174.111.111.11]:54514 I=[174.111.111.11]:25 closed by QUIT
      2010-10-02 14:00:19 SMTP connection from [174.111.111.11]:54515 I=[174.111.111.11]:25 (TCP/IP connection count = 2)
      2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54515 I=[174.111.111.11]:25 Warning: Sender rate 672.4 / 1h
      2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54515 I=[174.111.111.11]:25 sender verify fail for <[email protected]>: No Such User Here
      2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54515 I=[174.111.111.11]:25 F=<[email protected]> rejected RCPT <[email protected]>: Sender verify failed
      2010-10-02 14:00:19 SMTP connection from myserverdomain.com.br () [174.111.111.11]:54515 I=[174.111.111.11]:25 closed by QUIT
      2010-10-02 14:00:19 SMTP connection from [174.111.111.11]:54516 I=[174.111.111.11]:25 (TCP/IP connection count = 2)
      2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54516 I=[174.111.111.11]:25 Warning: Sender rate 673.3 / 1h
      2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54516 I=[174.111.111.11]:25 sender verify fail for <[email protected]>: No Such User Here
      2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54516 I=[174.111.111.11]:25 F=<[email protected]> rejected RCPT <[email protected]>: Sender verify failed
      2010-10-02 14:00:19 SMTP connection from myserverdomain.com.br () [174.111.111.11]:54516 I=[174.111.111.11]:25 closed by QUIT
      2010-10-02 14:00:19 SMTP connection from [174.111.111.11]:54517 I=[174.111.111.11]:25 (TCP/IP connection count = 2)
      2010-10-02 14:00:19 H=myserverdomain.com.br () [174.111.111.11]:54517 I=[174.111.111.11]:25 Warning: Sender rate 674.3 / 1h
      2010-10-02 14:00:20 H=myserverdomain.com.br () [174.111.111.11]:54517 I=[174.111.111.11]:25 sender verify fail for <Luciene_souza_vasconcellos=hotmail.com--2723--bounce@e-mydomain.com.br>: No Such User Here
      2010-10-02 14:00:20 H=myserverdomain.com.br () [174.111.111.11]:54517 I=[174.111.111.11]:25 F=<Luciene_souza_vasconcellos=hotmail.com--2723--bounce@e-mydomain.com.br> rejected RCPT <[email protected]>: Sender verify failed
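
    Note that the rejected connections come from 174.111.111.11 itself, so this looks like your own Exim refusing mail submitted by the marketing application: it cannot verify the VERP-style bounce addresses (…--bounce@e-mydomain.com.br) used as envelope senders, and the "No Such User Here" text appears to be produced locally rather than by a remote host. A hedged way to confirm, run on the server itself (address copied from the log; the binary may be called exim4 on Debian-style installs):

      # How does Exim route/verify that sender address? (-bt = test routing, -bvs = verify as sender)
      exim -bt  'Luciene_souza_vasconcellos=hotmail.com--2723--bounce@e-mydomain.com.br'
      exim -bvs 'Luciene_souza_vasconcellos=hotmail.com--2723--bounce@e-mydomain.com.br'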

    Read the article

  • can't run sqldeveloper on Ubuntu

    - by nazar_art
    I tried to install sqldeveloper the following way:

    1. Download SQL Developer from the Oracle website (I chose the Other Platforms download).
    2. Extract the file to /opt:

         sudo unzip sqldeveloper-*-no-jre.zip -d /opt/
         sudo chmod +x /opt/sqldeveloper/sqldeveloper.sh

    3. Link an in-path launcher for Oracle SQL Developer:

         sudo ln -s /opt/sqldeveloper/sqldeveloper.sh /usr/local/bin/sqldeveloper

    4. Edit /usr/local/bin/sqldeveloper.sh and replace its content with:

         #!/bin/bash
         cd /opt/sqldeveloper/sqldeveloper/bin
         ./sqldeveloper "$@"

    5. Run SQL Developer:

         sqldeveloper

    But it shows the following output:

      nazar@lelyak-desktop:/opt/sqldeveloper$ ./sqldeveloper.sh

      Oracle SQL Developer
      Copyright (c) 1997, 2014, Oracle and/or its affiliates. All rights reserved.

      LOAD TIME : 401
      #
      # A fatal error has been detected by the Java Runtime Environment:
      #
      #  SIGSEGV (0xb) at pc=0x00007f3b2dcacbe0, pid=20351, tid=139892273444608
      #
      # JRE version: Java(TM) SE Runtime Environment (7.0_65-b17) (build 1.7.0_65-b17)
      # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode linux-amd64 compressed oops)
      # Problematic frame:
      # C  0x00007f3b2dcacbe0
      #
      # Core dump written. Default location: /opt/sqldeveloper/sqldeveloper/bin/core or core.20351
      #
      # An error report file with more information is saved as:
      # /tmp/hs_err_pid20351.log
      #
      # If you would like to submit a bug report, please visit:
      #   http://bugreport.sun.com/bugreport/crash.jsp
      #
      /opt/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 1193: 20351 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"

      134 nazar@lelyak-desktop:/opt/sqldeveloper$ java -version
      java version "1.7.0_65"
      Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
      Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)

    Here is the content of /tmp/hs_err_pid20351.log. How do I solve this?
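
    A SIGSEGV right at launch under this JDK build is commonly worked around by pointing SQL Developer at a different JDK (another 7 update, or a JDK 8 for SQL Developer 4.1+). A hedged sketch - the JDK path below is only an example, and SetJavaHome is the directive the launcher reads from sqldeveloper/bin/sqldeveloper.conf:

      CONF=/opt/sqldeveloper/sqldeveloper/bin/sqldeveloper.conf
      sudo sed -i '/^SetJavaHome/d' "$CONF"                                   # drop any existing entry
      echo 'SetJavaHome /usr/lib/jvm/java-7-openjdk-amd64' | sudo tee -a "$CONF"   # example path; use whichever JDK you have
      sqldeveloper

    If the first run already stored a JDK path under ~/.sqldeveloper, the same SetJavaHome line can be adjusted there instead.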

    Read the article

  • Mapping XFCE4/xRDP sessions to users

    - by garrilla
    I have Ubuntu 13.10 with the Xubuntu desktop (XFCE4). I'm trying to use xRDP to allow MS Windows users to log in to the machine with their own user. I've been around the houses a lot with this! I've found two halfway solutions, but can't get them to work as I'd like.

    1) In /etc/xrdp/xrdp.ini I set the port to -1:

      [xrdp1]
      name=sesman-Xvnc
      lib=libvnc.so
      username=ask
      password=ask
      ip=127.0.0.1
      port=-1

    Each time any user logs on they get a new session - they can never get back to their original session.

    2) In /etc/xrdp/xrdp.ini I set the port to 5912 (for example):

      [xrdp1]
      name=sesman-Xvnc
      lib=libvnc.so
      username=ask
      password=ask
      ip=127.0.0.1
      port=5912

    Each time any user logs on they always log on to the same session, irrespective of their logon details.

    I found a midway solution: create a lot of sessions by adding additional entries in xrdp.ini, e.g.

      [xrdp8]
      name=Bob's Logon
      lib=libvnc.so
      username=ask
      password=ask
      ip=127.0.0.1
      port=5913

      [xrdp9]
      name=Jill's Logon
      lib=libvnc.so
      username=ask
      password=ask
      ip=127.0.0.1
      port=5914

    and so on, but the problem with this is that Jill can still log into Bob's remote session. Is it possible to do what I'm trying to do? Maybe I have to use different tools?
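
    With port=-1 it is sesman that decides whether to create or reuse a session, and in the stock xrdp it will generally only reattach a user to an existing session when the reconnecting client asks for the same screen geometry and colour depth as the original connection - which may be why every logon looks new. A hedged pair of checks (log path assumed for Ubuntu 13.10's xrdp package):

      grep -i session /var/log/xrdp-sesman.log | tail -n 20   # what sesman decided for each connection
      ps -ef | grep '[X]vnc'                                  # one Xvnc per live session, showing :display and -geometry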

    Read the article

  • URL redirect to a virtual server on a VLAN

    - by zeroFiG
    I have a production site running off 10 servers. I've been given another virtual server on the same network as these 10 servers, to use for testing purposes. This server doesn't have its own DNS entry. Therefore I need to do a redirect to the site hosted on this virtual server for a sub-domain of the site running on the 10 other servers. So basically I was wondering how I would configure a sub-domain of my production server to point at the virtual server for testing. I'm guessing I need to modify my site file in /etc/apache2/sites-available and add another virtual host like the following, and modify the redirect match:

      <VirtualHost *>
        ServerName SUBDOMAIN.DOMAIN.com
        RedirectMatch 301 (.*) **IP ADDRESS**
        CustomLog /var/log/apache2/SUBDOMAIN.DOMAIN.com.access.log combined
      </VirtualHost>

    Do I set the RedirectMatch target to just the IP of the virtual server, and then configure another site file in the sites-available directory, which will receive this redirect and point the browser towards the HTML root? Thanks, I hope I made myself clear.
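
    A Redirect/RedirectMatch would send the browser off to the raw IP, which both exposes the test box and breaks absolute links. A reverse proxy keeps the public hostname instead; a hypothetical sketch (10.0.0.99 is a placeholder for the test VM's IP, and mod_proxy/mod_proxy_http must be enabled) - put something like this in a new file under /etc/apache2/sites-available/:

      <VirtualHost *>
        ServerName SUBDOMAIN.DOMAIN.com
        ProxyPass        / http://10.0.0.99/
        ProxyPassReverse / http://10.0.0.99/
        CustomLog /var/log/apache2/SUBDOMAIN.DOMAIN.com.access.log combined
      </VirtualHost>

    then enable it:

      sudo a2enmod proxy proxy_http
      sudo a2ensite subdomain.domain.com && sudo service apache2 reload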

    Read the article

  • Why are my at jobs running immediately on OS X?

    - by Gabe
    I want to schedule events for exact times in Mac OS X. It seems like the 'at' command is the simplest way to do this. I have enabled atrun using the command:

      launchctl load -w /System/Library/LaunchDaemons/com.apple.atrun.plist

    To test at, I'm using the following one-line bash script:

      echo 'foo' > path/to/log.txt | at now + 2 minutes

    When I run the script, I get output like:

      job 17 at Sat May 15 12:57:00 2010

    where '12:57:00' is indeed 2 minutes in the future. But the echo command executes immediately: the line 'foo' is added to log.txt right away. How can I make at work for me?
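
    What happens above is that the current shell performs the redirection and runs echo immediately; at only ever sees an (empty) stdin, so the queued job does nothing. at reads the commands to run from its standard input, so the whole command line has to be passed to it as text:

      echo "echo 'foo' >> path/to/log.txt" | at now + 2 minutes
      atq        # the job should be queued; 'foo' appears in the file only when it fires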

    Read the article

  • How to make Unity 3D work with Bumblebee using the Intel chipset

    - by EboMike
    I have a Sony VAIO S laptop with the dreaded Optimus and finally managed to get Bumblebee to work fully on Ubuntu 12.04, so that I can utilize both the hardware acceleration of the Intel chipset as well as the Nvidia one via optirun and/or bumble-app-settings. However, the desktop effects don't work. But they should - I vaguely remember that they worked for a while before I had Bumblebee installed. This is what I get with the support test:

      :~$ /usr/lib/nux/unity_support_test -p
      Xlib:  extension "NV-GLX" missing on display ":0".
      OpenGL vendor string:   Tungsten Graphics, Inc
      OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
      OpenGL version string:  1.4 (2.1 Mesa 8.0.2)

      Not software rendered:    yes
      Not blacklisted:          yes
      GLX fbconfig:             yes
      GLX texture from pixmap:  yes
      GL npot or rect textures: yes
      GL vertex program:        yes
      GL fragment program:      yes
      GL vertex buffer object:  no
      GL framebuffer object:    yes
      GL version is 1.4+:       yes

      Unity 3D supported:       no

    First of all, I kind of doubt that the chipset doesn't support VBOs (essentially a standard feature in GL). Neither Xorg.0.log nor Xorg.8.log show any particular errors.

    As for the Nvidia drivers: in order to get them to work, I had to install the 304.22 drivers (older ones wouldn't work). They clobbered libglx.so, so I reinstated the xserver-xorg-core libglx.so in its original place, moved Nvidia's libglx.so to an nvidia-specific folder and specified that folder in the bumblebee.config. That seems to work and shouldn't cause the problem I see here.

    For fun, I tried to use the Nvidia chipset for Unity, but that didn't fly either:

      ~$ optirun /usr/lib/nux/unity_support_test -p
      OpenGL vendor string:   NVIDIA Corporation
      OpenGL renderer string: GeForce GT 640M LE/PCIe/SSE2
      OpenGL version string:  4.2.0 NVIDIA 304.22

      Not software rendered:    yes
      Not blacklisted:          yes
      GLX fbconfig:             yes
      GLX texture from pixmap:  no
      GL npot or rect textures: yes
      GL vertex program:        yes
      GL fragment program:      yes
      GL vertex buffer object:  yes
      GL framebuffer object:    yes
      GL version is 1.4+:       yes

      Unity 3D supported:       no
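
    The "NV-GLX missing" warning on the Intel display usually means an NVIDIA-flavoured GL library is still being picked up by the Intel X server, which would also explain Mesa reporting no VBO support. A few hedged checks to see which libGL is actually in play (the library path is the usual 12.04 multiarch one and may differ on your install):

      ldd /usr/lib/nux/unity_support_test | grep -i libgl              # which libGL the test binary resolves
      dpkg -S $(readlink -f /usr/lib/x86_64-linux-gnu/libGL.so.1)      # who owns that libGL; "not found" points at the NVIDIA installer
      glxinfo | grep -iE 'vendor|renderer|direct'                      # from the mesa-utils package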

    Read the article

  • DCOGS Balance Breakup Diagnostic in OPM Financials

    - by ChristineS-Oracle
    The purpose of this diagnostic (OPMDCOGSDiag.sql) is to identify the sales orders which constitute the Deferred COGS account balance. This will help to get the detailed transaction information for sales order(s) across Order Management, Accounts Receivable, Inventory and the OPM Financials subledger at the organization level. This script is applicable for various scenarios of standard sales orders and return orders (RMA), coupled with all the applicable OPM costing methods like Standard, Actual and Lot costing.

    OBJECTIVE: To collect the information for sales order(s) which are at different stages of their life cycle in one spreadsheet at one go. This will help in:

    - Less time for data collection.
    - Faster diagnosis of the issue.
    - Easy collaboration across different modules like Order Management, Accounts Receivable, Inventory and Cost Management.

    You can download the script from Doc ID 1617599.1, "DCOGS Balance Breakup (SO/RMA) and Diagnostic Analyzer in OPM Financials."

    Read the article

  • Removing QWERTY Keyboard Layout Permanently

    - by Phoenix
    Following the instructions in this thread, I added the Dvorak layout to the Regional and Language Options control panel, set it as the default keyboard layout and removed the US (QWERTY) layout. However, even though I removed the QWERTY layout, it still appears in my language bar, and my system defaults to it in every new window. This persists after a log-out/log-in and even a system restart. How do I remove the QWERTY keyboard layout from my system permanently? Alternatively (if outright removing QWERTY is just impossible), can I get Windows to default to Dvorak instead of QWERTY for new windows?

    Read the article

  • tracd multiple projects+nginx reverse proxy

    - by Xeross
    I am trying to set up nginx as a reverse proxy to tracd; however, I only want to run a single tracd. First, here's my config for this domain:

      server {
        listen 80;
        server_name bugs.XXXXXXXX.com;
        access_log /var/log/nginx/XXXXXXXX-bugtracker.access.log proxy;

        location / {
          rewrite ^/bugtracker/(.*)$ /$1;
          rewrite ^/bugtracker$ /;
          proxy_pass http://127.0.0.1:81/bugtracker/;
          proxy_redirect default;
          proxy_set_header Host $host;
        }

        location ~ /\.ht {
          deny all;
        }
      }

    As you can see, there are rewrite rules, because for some reason all the URLs that tracd spews out are like /bugtracker/something. This is indeed caused by tracd just sending URLs like it normally would; however, Trac is at bugs.XXXXXXXX.com/ and not at bugs.XXXXXXXX.com/bugtracker. So how can I make tracd/Trac display the (in this case) correct URLs?
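
    Since only one Trac environment is being served, another option is to let tracd itself drop the /bugtracker prefix instead of rewriting it away in nginx: tracd's --single-env (-s) flag serves a single environment at the root URL. A hedged sketch (environment path is a placeholder; keep whatever auth options you already pass to tracd):

      tracd --single-env --hostname 127.0.0.1 --port 81 /path/to/trac/bugtracker

    With that in place, proxy_pass can point at http://127.0.0.1:81/ and both rewrite rules can go away.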

    Read the article

  • Why aren't Heroku syslog drains logging to rsyslogd?

    - by Benjamin Oakes
    I'm having a problem using syslog drains as described in https://devcenter.heroku.com/articles/logging. To summarize, I have an Ubuntu 10.04 instance on EC2 that is running rsyslogd. I've also set up the security groups as they describe, and added a syslog drain using a command like heroku drains:add syslog://host1.example.com:514. I can send messages from the Heroku console to my rsyslogd instance via nc. I see them appear in the log file, so I know there isn't a firewall/security group issue.  However, Heroku does not seem to be forwarding log messages to the server that heroku drains lists. I would expect to see HTTP requests, Rails messages, etc. Is there something else I can try to figure this out? I'm new to rsyslogd, so I could easily be missing something.
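
    One hedged thing to double-check on the rsyslog side: the nc test proves a path from Heroku to the box exists, but only for whichever protocol nc used, and Heroku delivers syslog drains over TCP. Making the TCP listener explicit is cheap (assuming port 514, matching the drain URL); add these two lines to /etc/rsyslog.conf or a file under /etc/rsyslog.d/:

      $ModLoad imtcp
      $InputTCPServerRun 514

    then restart and confirm the listener:

      sudo service rsyslog restart
      sudo netstat -plnt | grep 514     # rsyslogd should be listening on TCP *:514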

    Read the article

  • Targus USB-to-RS232 not working with Linux?

    - by Ethan Leroy
    I have the Targus PA088 USB to RS232 converter, but it seems that it does not work with Linux. Its RX and TX lights are flashing, but I can't see the data in minicom/picocom. When using it with Windows and hterm, everything's fine. Any idea what could be the problem?

    Additional info: when I plug in the adapter, I can see the following messages in /var/log/messages.log:

      Nov 25 01:47:31 localhost kernel: [  831.787066] usb 2-1.1: new full speed USB device number 5 using ehci_hcd
      Nov 25 01:47:32 localhost kernel: [  832.554810] mct_u232 2-1.1:1.0: MCT U232 converter detected
      Nov 25 01:47:32 localhost kernel: [  832.555002] usb 2-1.1: MCT U232 converter now attached to ttyUSB0
      Nov 25 01:47:32 localhost mtp-probe: checking bus 2, device 5: "/sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.1"
      Nov 25 01:47:32 localhost mtp-probe: bus: 2, device: 5 was not an MTP device
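
    The kernel side looks healthy (the mct_u232 driver binds and creates ttyUSB0), so flashing LEDs with a silent minicom is often down to device-node permissions or mismatched line settings rather than the converter itself. A generic, hedged checklist (9600 8N1 is only an example - match whatever the Windows/hterm session used):

      ls -l /dev/ttyUSB0                       # does your user belong to the owning group (dialout/uucp)?
      sudo stty -F /dev/ttyUSB0 9600 cs8 -cstopb -parenb -crtscts
      sudo stty -F /dev/ttyUSB0 -a             # confirm the settings took
      sudo minicom -D /dev/ttyUSB0 -b 9600     # also turn hardware flow control off in minicom's settings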

    Read the article

  • setting nproc in /etc/security/limits.conf prevents ssh login

    - by omry
    I am trying to use /etc/security/limits.conf on Linux (Debian) to limit the number of processes per user. For starters, I tried to limit my own user's processes by adding this to /etc/security/limits.conf:

      omry hard nproc 100

    This locked my user out of ssh. I could open new processes (verified with su omry), but could not log into ssh with that user; sshd reported this in its log:

      fatal: setreuid 1000: Resource temporarily unavailable

    Also, I am certain my user is not running anything near 100 processes (actually 6). What can be the reason for this?
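
    One hedged explanation: RLIMIT_NPROC is enforced when sshd calls setreuid(), and the kernel counts tasks (threads included) billed to that UID, so the effective count can be well above what ps shows as "processes"; with only a hard limit given, the soft limit is clamped to the same value, leaving no headroom. A low-risk experiment is to raise the numbers and keep soft below hard (values are examples; keep a root shell open while testing). In /etc/security/limits.conf:

      omry    soft    nproc   200
      omry    hard    nproc   300

    then verify before logging out:

      su - omry -c 'ulimit -u'            # effective per-user limit after the change
      ps -eLo user= | grep -c '^omry'     # tasks (threads included) currently billed to omry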

    Read the article

  • Issue with percona-xtrabackup-2.0.0 hotbackup on MyIsam tables

    - by arn
    I am trying to implement hot backup for MyISAM tables with percona-xtrabackup-2.0.0 and getting the following errors. As all the tables are MyISAM, I doubt whether I am using the correct package?

    Backup:

      ./innobackupex --user="root" --password=<pass> --defaults-file="<path>/my.cnf" --ibbackup="<path>/percona-xtrabackup-2.0.0/bin/xtrabackup" <path>/backup/

      innobackupex: fatal error: no 'mysqld' group in MySQL options
      innobackupex: fatal error: OR no 'datadir' option in group 'mysqld' in MySQL options

    apply-log:

      ./innobackupex-1.5.1 --apply-log --defaults-file=<path>/backup/2012-06-02_09-59-30/backup-my.cnf --ibbackup=<path>/percona-xtrabackup-2.0.0/bin/xtrabackup <path>/backup/2012-06-02_09-59-30/
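
    On the package question: innobackupex does handle MyISAM tables (it copies the .MYD/.MYI files under a brief FLUSH TABLES WITH READ LOCK), so xtrabackup is not obviously the wrong tool here. The fatal error itself only says that the file passed to --defaults-file has no [mysqld] group containing a datadir. A minimal sketch of what it expects to find in that my.cnf (values are placeholders), plus a quick check:

      [mysqld]
      datadir = /var/lib/mysql

      grep -A5 '^\[mysqld\]' /path/to/my.cnf   # run against the same file given to --defaults-file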

    Read the article

  • I can't launch any Win8 apps after upgrading to Windows 8.1

    - by locka
    I just upgraded to 8.1 and now none of the Metro apps start. If I start any Metro app, including the Store and PC Settings, it immediately fails. The classic desktop is fine, as are standard programs; it's just the Metro apps. If I look in the system event log I see errors like this:

      Activation of application winstore_cw5n1h2txyewy!Windows.Store failed with error: This application does not support the contract specified or is not installed. See the Microsoft-Windows-TWinUI/Operational log for additional information.

    In addition, the tiles in Metro have a small cross icon on them. I suspect that my Live ID (which I somehow managed to skip during the update) is not set properly and consequently none of the online stuff works. But how do I fix this? I can't start PC Settings, I can't start the Store, and I see no way in the classic desktop of setting these things. I don't want to have to reinstall for this. Is there a simple fix?

    Read the article

  • Which approach would lead to an API that is easier to use?

    - by Clem
    I'm writing a JavaScript API and for a particular case, I'm wondering which approach is the sexiest. Let's take an example: writing a VideoPlayer, I add a getCurrentTime method which gives the elapsed time since the start.

    The first approach simply declares getCurrentTime as follows:

      getCurrentTime():number

    where number is the native number type. This approach includes a CURRENT_TIME_CHANGED event so that API users can add callbacks to be aware of time changes. Listening to this event would look like the following:

      myVideoPlayer.addEventListener(CURRENT_TIME_CHANGED, function(evt){
        console.log("current time = " + evt.getDispatcher().getCurrentTime());
      });

    The second approach declares getCurrentTime differently:

      getCurrentTime():CustomNumber

    where CustomNumber is a custom number object, not the native one. This custom object dispatches a VALUE_CHANGED event when its value changes, so there is no need for the CURRENT_TIME_CHANGED event! Just listen to the returned object for value changes! Listening to this event would look like the following:

      myVideoPlayer.getCurrentTime().addEventListener(VALUE_CHANGED, function(evt){
        console.log("current time = " + evt.getDispatcher().valueOf());
      });

    Note that CustomNumber has a valueOf method which returns a native number, letting the returned CustomNumber object be used as a number, so:

      var result = myVideoPlayer.getCurrentTime() + 5;

    will work!

    So in the first approach, we listen to an object for a change in its property's value. In the second one we directly listen to the property for a change in its value. There are multiple pros and cons for each approach; I just want to know which one developers would prefer to use!

    Read the article

  • HP-UX cannot boot from Ignite tape

    - by Spirit
    We have an HP rp2470 server running HP-UX 11.00, with LVM mirroring. For redundancy we have a second rp2470 with the same hardware (same two processors, same RAM, same two HDDs, same number of LAN cards). I want to clone the first one to the second. For that purpose I am making an ignite tape with the following command:

      make_tape_recovery -x inc_entire=vg00

    The ignite tape finishes without problems. When I boot the second server from this ignite tape, the server starts to boot, and the ignite restore finishes without any errors, only a few notes, which are normal. However, vmunix does not boot, and when the restore finishes, it drops to the ISL prompt. From there I cannot boot /stand/vmunix. I tried to run the recovery shell, but with no success. When the recovery shell asks to do frecover to restore critical files, I receive the error:

      frecover(5405): unable to open /dev/rmt/0m

    At first I thought that the problem might be the difference in the firmware versions of the servers: the fw version of the production server is 43.50 and the fw version of the backup server is 42.19. So I did a fw upgrade of the backup server so that both servers are at v43.50, and tried the recovery again, but still can't boot the system. Next I made another archive tape with the -I (interactive) flag:

      make_tape_recovery -I -x inc_entire=vg00

    and tried recovery with it - again no good. I cannot find any error or warning in the ignite log, and I cannot boot HP-UX; I am only at the ISL prompt. This is what I've noticed in the GSP logs:

      ************* SYSTEM ALERT **************
      SYSTEM NAME: mcnfwim1
      DATE: 07/27/2003   TIME: 10:18:49
      ALERT LEVEL: 6 = Boot possible, pending failure - action required
      REASON FOR ALERT
      SOURCE: 8 = I/O
      SOURCE DETAIL: 6 = disk   SOURCE ID: 0
      PROBLEM DETAIL: 0 = no problem detail
      LEDs: RUN ATTENTION FAULT REMOTE POWER
            FLASH OFF ON ON ON
      LED State: Boot Failed. Running non-OS code. Check Chassis and Console Logs for error messages.
      0x00000060860010B0 00000000 00000000 - type 0 = Data Field Unused
      0x58000860860010B0 00006706 1B0A1231 - type 11 = Timestamp 07/27/2003 10:18:49

    And another GSP log entry:

      Log Entry # 3 :
      SYSTEM NAME: mcnfwim1
      DATE: 07/27/2003   TIME: 10:12:20
      ALERT LEVEL: 6 = Boot possible, pending failure - action required
      SOURCE: 8 = I/O
      SOURCE DETAIL: 6 = disk   SOURCE ID: 0
      PROBLEM DETAIL: 0 = no problem detail
      CALLER ACTIVITY: 1 = test   STATUS: 0
      CALLER SUBACTIVITY: 0B = implementation dependent
      REPORTING ENTITY TYPE: 0 = system firmware   REPORTING ENTITY ID: 00
      0x00000060860010B0 00000000 00000000   type 0 = Data Field Unused
      0x58000860860010B0 00006706 1B0A0C14   type 11 = Timestamp 07/27/2003 10:12:20
      Type CR for next entry, - CR for previous entry, Q CR to quit.

    Please note that I cannot change anything on the production server; I can only make changes to the backup server. Any help is appreciated.

    Read the article

  • Cron: job starts but doesn't complete

    - by Guandalino
    I have a problem with a cron job which starts but doesn't complete. Running the command manually works fine. I already read the page about cron issues and solutions here on AskUbuntu and tried the proposed solutions, but didn't find an answer that works in my case. I'm using Ubuntu 12.04.

      $ crontab -e
      SHELL=/bin/bash   # otherwise it would be /bin/sh
      59 16 * * * /bin/duply calendar backup > /tmp/duply.log

    By the way, the cron file ends with an empty line, as someone pointed out. Once the job has "finished":

      $ cat /tmp/duply.log
      Start duply v1.5.7, time is 2012-06-22 16:59:01.

    Running the script manually, instead, works correctly and gives this output:

      Start duply v1.5.7, time is 2012-06-22 17:06:39.
      [cut]
      ... here is a long output generated by duply.
      ... and yes, files have been backed up.
      [cut]
      --- Finished state OK at 17:06:42.581 - Runtime 00:00:03.170 ---

    I also tried to restart the cron daemon (sudo service cron restart) but nothing changed. Do you have any suggestions to fix the issue?
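
    Two low-risk tweaks to the crontab entry itself before digging deeper: capture stderr as well (duply/duplicity errors typically go to stderr, which would explain a log that stops right after the banner), and give cron a fuller PATH, since duply shells out to duplicity and gpg which may not be on cron's minimal PATH. A hedged variant of the same entry:

      SHELL=/bin/bash
      PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
      59 16 * * * /bin/duply calendar backup > /tmp/duply.log 2>&1

    If the log then shows a gpg or duplicity error, the cause is usually environment-related (profile or keyring not found for the cron user) rather than cron itself.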

    Read the article

  • Nagios NRPE: unable to read output [closed]

    - by Bill S
    Environment: Oracle Linux; Icinga; Nagios plugins.

    I am trying to run the shell script plugin "nqcmd OBIEE plugin for Nagios" from this URL: http://www.rittmanmead.com/2012/09/advanced-monitoring-of-obiee-with-nagios/

    What I have tried so far:

    - I did all the easy steps; the command runs fine standalone through my normal login.
    - I looked at /var/log/messages to see if there were any clues there.
    - I tried to run the plugin under the nrpe login, but I can't log in and don't know the password. Does this password matter? Can I reset it? Clone the id?
    - I went through the script and made sure that everything obvious was set to 755.

    Is there any way to have the shell that executes the plugin log all commands and output somewhere? Any help would be appreciated.
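
    "Unable to read output" from NRPE generally means the plugin printed nothing on stdout when run as the nrpe/nagios user, whose environment (PATH, ORACLE_HOME, etc.) differs from an interactive login. That account normally has no usable password and a nologin shell, so rather than resetting anything you can run commands as it from root. A hedged sketch (the plugin path and the account name "nrpe" are assumptions - use whatever nrpe.cfg actually references):

      # Run the plugin exactly as the daemon would, and capture its exit code:
      sudo su -s /bin/bash nrpe -c '/usr/lib64/nagios/plugins/check_obiee.sh; echo "exit=$?"'

      # Trace every command the script runs (this answers the "log all commands" question):
      sudo su -s /bin/bash nrpe -c 'bash -x /usr/lib64/nagios/plugins/check_obiee.sh' 2>&1 | tee /tmp/check_obiee.trace

      # Compare the environment the plugin sees with your own:
      sudo su -s /bin/bash nrpe -c 'env | sort' > /tmp/env.nrpe
      env | sort > /tmp/env.mine
      diff /tmp/env.mine /tmp/env.nrpe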

    Read the article
