Search Results

Search found 17625 results on 705 pages for 'techno log'.


  • Undeploying Apps Running JDev 11g WLS

    - by Christian David Straub
    Guest post from Jeanne Waldman: I was running my application in JDeveloper when I noticed log messages in the console for a different application, let's call it OldApp. I stopped and started my application server (the WLS server), re-ran my application, and still I'd see messages for OldApp. I shut down JDeveloper, restarted, and still, when I ran my application, I'd see OldApp's messages. Well, it turns out that at some point in time OldApp was not properly undeployed. To really stop OldApp, I had to: Go to http://127.0.0.1:7101/console. This deployed the console app where you configure WLS. By default the login credentials are: username: weblogic, password: weblogic1. I clicked on Deployments and saw that OldApp was still running. I selected the checkbox next to OldApp and clicked Stop -> Force Stop Now. Now when I run my application, I do not see the OldApp log messages.
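
    The same force-stop can also be done from the command line; a minimal sketch, assuming the standard weblogic.Deployer tool is on the classpath and the default admin URL and credentials shown above:

        # stop the stale application without removing it
        java weblogic.Deployer -adminurl t3://127.0.0.1:7101 -username weblogic -password weblogic1 -name OldApp -stop
        # or remove the stale deployment entirely
        java weblogic.Deployer -adminurl t3://127.0.0.1:7101 -username weblogic -password weblogic1 -name OldApp -undeploy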

    Read the article

  • Installing ATI Mobility Radeon HD 5650 on Ubuntu 11.10

    - by Antonio
    Hello everyone. I have an HP Pavilion dv6 3110 laptop with a dedicated 1 GB ATI Mobility Radeon HD 5650 graphics card, and I recently installed Ubuntu 11.10, 64-bit version. I have followed many guides on the internet to install the drivers for my graphics card, but none of them worked. In the "Additional Drivers" window I managed to install the "ATI/AMD proprietary FGLRX graphics driver", but after rebooting I cannot use the graphics card properly. The "ATI/AMD proprietary FGLRX graphics driver (post-release updates)" will not install at all, reporting the following error: "The installation of this driver failed. Please check the log files for more information: /var/log/jockey.log". I then tried downloading the latest drivers directly from the AMD site as the package "amd-driver-installer-12-3-x86.x86_64.run"; I launched it, followed the installation wizard, the installation completed, and I typed "sudo aticonfig --initial" for the initial configuration, but when the PC reboots only text on a black screen appears, with a series of "OK" and a few "FAIL" messages. I have tried this procedure with previous driver versions as well, but the result is always the same. I am desperate. Will I ever be able to use my graphics card? For completeness, here is what I get from the command "lspci -nn | grep VGA", which lists the graphics processors in my PC: 00:02.0 VGA compatible controller [0300]: Intel Corporation Core Processor Integrated Graphics Controller [8086:0046] (rev 02) 01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Madison [AMD Radeon HD 5000M Series] [1002:68c1] Thanks in advance to anyone who can help me. Best regards, Antonio Giordano
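
    A minimal recovery sketch, assuming the goal is to fall back to the repository fglrx packages (the uninstall script path below is the one the AMD .run installer normally creates, so run it only if that installer was used):

        tail -n 50 /var/log/jockey.log                 # see why the post-release driver failed to install
        sudo sh /usr/share/ati/fglrx-uninstall.sh      # only if the .run installer was used
        sudo apt-get remove --purge fglrx fglrx-amdcccle
        sudo apt-get install fglrx
        sudo aticonfig --initial -f
        sudo reboot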

    Read the article

  • Why does bash invocation differ on AIX when using telnet vs ssh

    - by Philbert
    I am using an AIX 5.3 server with a .bashrc file set up to echo "Executing bashrc." When I log in to the server using ssh and run: bash -c ls I get: Executing bashrc . .. etc.... However, when I log in with telnet as the same user and run the same command I get: . .. etc.... Clearly in the telnet case, the .bashrc was not invoked. As near as I can tell this is the correct behaviour given that the shell is non-interactive in both cases (it is invoked with -c). However, the ssh case seems to be invoking the shell as interactive. It does not appear to be invoking the .profile, so it is not creating a login shell. I cannot see anything obviously different between the environments in the two cases. What could be causing the difference in bash behaviour?
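
    A quick comparison sketch: many bash builds source ~/.bashrc for non-interactive shells started by sshd, which telnet plus login does not trigger (the user and host names below are placeholders):

        # does the remotely started non-interactive shell see the ssh environment?
        ssh user@aixhost 'bash -c "echo SSH_CLIENT=$SSH_CLIENT; shopt login_shell"'
        # rough heuristic on the AIX box: a bash built with SSH_SOURCE_BASHRC references SSH_CLIENT
        strings $(which bash) | grep -c SSH_CLIENT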

    Read the article

  • Why does using nginx as a reverse proxy break local links?

    - by tsvallender
    I've just set up nginx as a reverse proxy, so some sites served from the box are served directly by it and others are forwarded to a Node.js server. The site being served by Node.js, however, is displayed with no CSS or images, so I assume the links are somehow being broken, but don't know why. The following is the only file in /etc/nginx/sites-enabled: server { listen 80; ## listen for ipv4 listen [::]:80 default ipv6only=on; ## listen for ipv6 server_name dev.my.site; access_log /var/log/nginx/localhost.access.log; location / { root /var/www; index index.html index.htm; } location /myNodeSite { proxy_pass http://127.0.0.1:8080/; proxy_redirect off; proxy_set_header Host $host; } } I had thought perhaps it was trying to find them in /var/www due to the first entry, but removing that doesn't seem to help.
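
    A quick way to see where the asset requests are landing (the CSS path below is just an example; check the real paths in the page source and in the access log):

        curl -sI http://dev.my.site/css/style.css | head -n 1            # answered by the static root?
        curl -sI http://dev.my.site/myNodeSite/css/style.css | head -n 1
        sudo tail -f /var/log/nginx/localhost.access.log
    If the page links its assets with absolute paths like /css/..., they resolve against location / (the static root) rather than the proxied /myNodeSite prefix, so either have the app emit prefixed URLs or add matching location blocks that proxy those asset paths.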

    Read the article

  • Oracle Utilities Application Framework V4.1 Group Fix 4 available

    - by ACShorten
    Oracle Utilities Application Framework V4.1 Group Fix 4 is available from My Oracle Support as Patch 13523301. This Group Fix contains a number of enhancements and brings the fixes up to the latest patch level. The enhancements included in this Group Fix are: UI Hints - Previous group fixes of the Oracle Utilities Application Framework introduced the infrastructure to support UI Hints; this group fix completes the release of that functionality. Prior to this enhancement, products and implementers typically had to build at least one UI Map per Business Object to display and/or maintain the object. While a UI Map can be generated with the UI Map maintenance function and stored, this enhancement allows additional tags and elements to be added to the Business Object directly so that the UI Map for maintaining and viewing the object can be generated dynamically. This removes the need to generate and build a UI Map at all for that object, reduces the effort of maintaining the product and the implementation by eliminating the hand-maintained HTML for the UI Map, and allows less specialised personnel to maintain the system. Help and working examples are available from the View schema attributes and node names option in the Schema Tips dashboard zone. Note: for examples of the hints, refer to any of the following Business Objects: F1_OutcomeStyleLookup, F1-TodoSumEmailType, F1-BOStatusReason or F1-BIGeneralMasterConfig. Setting batch log file names - By default the batch infrastructure supplied with the Oracle Utilities Application Framework sets the name and location of the log files to fixed values. In Group Fix 4 a set of user exits has been added to allow implementers and partners to set their own file name and location. Refer to the Release Notes in the download for more details.

    Read the article

  • Canon MX870 printer only shows "Processing" on the status LCD

    - by Nick
    I had my Canon MX870 installed perfectly fine in 11.10, but since upgrading to 12.04, it no longer works. The printer is recognized in print settings and when I attempt to print a test page, the printer LCD displays a "Processing" message, but then it disappears and nothing happens. Here are my logs (note that printing did not succeed despite the access logs showing success): # /var/log/cups/access_log localhost - - [22/May/2012:12:29:35 -0400] "POST /printers/Canon-MX870 HTTP/1.1" 200 412 Print-Job successful-ok - # /var/log/cups/error_log W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-Gray..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-RGB..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateDevice: org.freedesktop.ColorManager.AlreadyExists:device id 'cups-Canon-MX870' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-Gray..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-RGB..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateDevice: org.freedesktop.ColorManager.AlreadyExists:device id 'cups-Canon-MX870' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-Gray..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-RGB..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateDevice: org.freedesktop.ColorManager.AlreadyExists:device id 'cups-Canon-MX870' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-Gray..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateProfile: org.freedesktop.ColorManager.AlreadyExists:profile id 'Canon-MX870-RGB..' already exists W [22/May/2012:12:25:51 -0400] failed to CreateDevice: org.freedesktop.ColorManager.AlreadyExists:device id 'cups-Canon-MX870' already exists
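
    A debugging sketch for narrowing this down (the queue name is taken from the logs above):

        sudo cupsctl --debug-logging            # make CUPS log at debug level
        sudo tail -f /var/log/cups/error_log    # watch while sending another test page
        lpstat -v Canon-MX870                   # which device URI / backend is the queue using?
        lpstat -p Canon-MX870 -l
    If the job reaches the backend without errors, the problem usually sits in the driver/filter chain (for example a Canon cnijfilter package that did not survive the 12.04 upgrade) rather than in CUPS itself.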

    Read the article

  • Installing nGinX Reverse Proxy on CentOS 5

    - by heavymark
    I'm trying to install nGinX as a reverse proxy on CentOS 5 with apache. The instructions to do this are here: http://wiki.mediatemple.net/w/(dv):Configure_nginx_as_reverse_proxy_web_server Note- in the instructions, for the url to get nginx I'm using the following: http://nginx.org/download/nginx-1.0.10.tar.gz Now here is my problem. After installing the required packages and running .configure I get the following: checking for OS + Linux 2.6.18-028stab094.3 x86_64 checking for C compiler ... found + using GNU C compiler + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51) checking for gcc -pipe switch ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for sched_setaffinity() ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for nobody group ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found checking for SO_ACCEPTFILTER ... not found checking for TCP_DEFER_ACCEPT ... found checking for accept4() ... not found checking for int size ... 4 bytes checking for long size ... 8 bytes checking for long long size ... 8 bytes checking for void * size ... 8 bytes checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system endianess ... little endianess checking for size_t size ... 8 bytes checking for off_t size ... 8 bytes checking for time_t size ... 8 bytes checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for posix_memalign() ... found checking for memalign() ... found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found checking for System V shared memory ... found checking for POSIX semaphores ... not found checking for POSIX semaphores in libpthread ... found checking for struct msghdr.msg_control ... found checking for ioctl(FIONBIO) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... not found checking for struct dirent.d_type ... found checking for PCRE library ... found checking for system md library ... not found checking for system md5 library ... 
not found checking for OpenSSL md5 crypto library ... found checking for sha1 in system md library ... not found checking for OpenSSL sha1 crypto library ... found checking for zlib library ... found creating objs/Makefile Configuration summary + using system PCRE library + OpenSSL library is not used + md5: using system crypto library + sha1: using system crypto library + using system zlib library nginx path prefix: "/usr/local/nginx" nginx binary file: "/usr/local/nginx/sbin/nginx" nginx configuration prefix: "/usr/local/nginx/conf" nginx configuration file: "/usr/local/nginx/conf/nginx.conf" nginx pid file: "/usr/local/nginx/logs/nginx.pid" nginx error log file: "/usr/local/nginx/logs/error.log" nginx http access log file: "/usr/local/nginx/logs/access.log" nginx http client request body temporary files: "client_body_temp" nginx http proxy temporary files: "proxy_temp" nginx http fastcgi temporary files: "fastcgi_temp" nginx http uwsgi temporary files: "uwsgi_temp" nginx http scgi temporary files: "scgi_temp" It says if you get errors to stop and make sure packages are installed. I didn't get errors but as you can see I got several "not founds". Are those considered errors? If so how do I resolve that. And as noted in the link, I cannot install through yum, because it wont work with plesk then. Thanks!
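
    The "not found" lines are feature probes, not errors; configure only stops with an explicit error message when a required dependency is missing. A typical CentOS 5 source build of this nginx, assuming the -devel packages may still be installed via yum even though the nginx package itself cannot be:

        sudo yum install -y gcc make pcre-devel zlib-devel openssl-devel
        ./configure --prefix=/usr/local/nginx --with-http_ssl_module
        make && sudo make install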

    Read the article

  • Get Pidgin Logs From Other Directory

    - by silent
    I'm using Pidgin on both Windows and Linux, on several PCs. To sync my logs, I use Dropbox. For Linux it's easy, just a matter of a symlink. However, I don't know how to sync them on Windows without a manual copy-paste every time I'm done on Windows. So, is there any solution to my problem? A Pidgin plugin, maybe? Update: following MarkM's answer, I did this to solve my problem: back up and delete the current logs (located at C:\Users\{your user name}\AppData\Roaming\.purple\logs), then run mklink /D "C:\Users\{your user name}\AppData\Roaming\.purple\logs" "E:\My Dropbox\somepath\purplelogs" - the first path is where you want your symlink, the second is where your Dropboxed logs live.
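
    For reference, the Linux side of the same setup is a sketch along these lines (paths are examples):

        mv ~/.purple/logs ~/Dropbox/somepath/purplelogs
        ln -s ~/Dropbox/somepath/purplelogs ~/.purple/logs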

    Read the article

  • Hancon / Hanwang Graphics Tablet not recognised

    - by Martin Kyle
    I'm totally lost. I've just built a new system and installed Ubuntu 12.04. It's my first time with Linux, and getting into the terminal / command line for the first time since IBM DOS 5 and Windows 3.1 has been a steep learning curve. However, the interface works beautifully, except that it doesn't recognize my Hanvon Artmaster AM1209. I have sent diagnostics to Digimend, and Favux was kind enough to advise that the tablet should be using the Wacom X driver, as the Hanvon is actually a Hanwang and these should be supported. lsusb reports: ID 0b57:8501 Beijing HanwangTechnology Co., Ltd xinput list reports: ⎡ Virtual core pointer id=2 [master pointer (3)] ⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)] ⎜ ↳ PS/2+USB Mouse id=8 [slave pointer (2)] ⎣ Virtual core keyboard id=3 [master keyboard (2)] ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)] ↳ Power Button id=6 [slave keyboard (3)] ↳ Power Button id=7 [slave keyboard (3)] ↳ Eee PC WMI hotkeys id=9 [slave keyboard (3)] ↳ AT Translated Set 2 keyboard id=10 [slave keyboard (3)] Favux suggested inspecting /var/log/Xorg.0.log for the tablet, but I cannot see any mention of it, and that is as far as I have got. I've tried researching the problem, but I am struggling with all the new terminology and the fact that I want the PC to be a means to an end and not the end in itself, where I spend the rest of my days tweaking and testing rather than just using it. Hope there is some help out there.
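
    A first-pass sketch for checking what the kernel and X actually see; whether the in-kernel hanwang module covers this exact model is an assumption to verify, and the commands fail harmlessly if it does not exist:

        grep -iE 'wacom|hanwang|tablet' /var/log/Xorg.0.log
        sudo apt-get install xserver-xorg-input-wacom      # the Wacom X driver Favux refers to
        modinfo hanwang && sudo modprobe hanwang           # does a kernel driver exist and load?
        xinput list                                        # re-check after replugging the tablet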

    Read the article

  • Yum install in chroot directory

    - by pulegium
    I'm trying to install the Base group on a mounted volume. Here's the custom yum.conf that I'm using: [main] cachedir=/var/cache/yum/ debuglevel=2 logfile=/var/log/yum.log exclude=*-debuginfo obsoletes=1 gpgcheck=0 reposdir=/dev/null [base] name=Fedora 13 - i386 baseurl=file:///media/Fedora\ 13\ i386\ DVD/ enabled=1 [updates] name=Fedora 13 - i386 - Updates baseurl=http://mirror.sov.uk.goscomb.net/fedora/linux/updates/13/i386/ enabled=1 When I run # yum -c yum.conf --installroot=/mnt groupinstall Base I would expect yum to install everything under /mnt But it keeps on saying: [...] Package irda-utils-0.9.18-10.fc12.i686 already installed and latest version Package time-1.7-37.fc12.i686 already installed and latest version Package man-pages-3.23-6.fc13.noarch already installed and latest version Package talk-0.17-33.2.4.i686 already installed and latest version Package pam_passwdqc-1.0.5-6.fc13.i686 already installed and latest version [...] I tried rpm --base=/mnt --initdb and then use rpm to install fedora-release (which worked and installed the package under /mnt) But yum keeps on saying that all packages are installed. Any ideas?...
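
    A sketch that keeps the chroot's RPM database separate from the host's, which is usually why yum reports everything as already installed (note that rpm's flag for this is --root, not --base):

        sudo mkdir -p /mnt/var/lib/rpm
        sudo rpm --root=/mnt --initdb
        sudo yum -c yum.conf --installroot=/mnt groupinstall Base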

    Read the article

  • Hardening network with sysctl settings made Wi-fi downloading speed extremely slow

    - by Rohit Bansal
    I just followed the steps below to harden network security. The /etc/sysctl.conf file contains all the sysctl settings. To prevent source routing of incoming packets and log malformed IPs, enter the following in a terminal window: sudo vi /etc/sysctl.conf Edit the `/etc/sysctl.conf` file and un-comment or add the following lines: # IP Spoofing protection net.ipv4.conf.all.rp_filter = 1 net.ipv4.conf.default.rp_filter = 1 # Ignore ICMP broadcast requests net.ipv4.icmp_echo_ignore_broadcasts = 1 # Disable source packet routing net.ipv4.conf.all.accept_source_route = 0 net.ipv6.conf.all.accept_source_route = 0 net.ipv4.conf.default.accept_source_route = 0 net.ipv6.conf.default.accept_source_route = 0 # Ignore send redirects net.ipv4.conf.all.send_redirects = 0 net.ipv4.conf.default.send_redirects = 0 # Block SYN attacks net.ipv4.tcp_syncookies = 1 net.ipv4.tcp_max_syn_backlog = 2048 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 5 # Log Martians net.ipv4.conf.all.log_martians = 1 net.ipv4.icmp_ignore_bogus_error_responses = 1 # Ignore ICMP redirects net.ipv4.conf.all.accept_redirects = 0 net.ipv6.conf.all.accept_redirects = 0 net.ipv4.conf.default.accept_redirects = 0 net.ipv6.conf.default.accept_redirects = 0 # Ignore Directed pings net.ipv4.icmp_echo_ignore_all = 1 To reload sysctl with the latest changes, enter: sudo sysctl -p But after applying the changes I found the Wi-Fi download speed, in the terminal and elsewhere, to be extremely slow (less than 1 KB/s), although surfing in the browser was fine. Using a direct Ethernet cable still gave good speed. I then reverted the above changes and everything fell back into line again. Could you please tell me what in the above settings could be causing this behaviour (and why)? How can I keep the network hardening without hurting the Wi-Fi download speed?
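
    A bisection sketch for finding the offending key: re-apply the settings one block at a time over the Wi-Fi link and measure after each step (the download URL is just an example):

        sudo sysctl -w net.ipv4.conf.all.rp_filter=1 net.ipv4.conf.default.rp_filter=1
        wget -O /dev/null http://example.com/largefile.iso     # measure
        sudo sysctl -w net.ipv4.tcp_syncookies=1 net.ipv4.tcp_max_syn_backlog=2048
        wget -O /dev/null http://example.com/largefile.iso     # measure again, and so on, block by block
        sudo sysctl -p                                         # reload the full file once the culprit is known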

    Read the article

  • Snmpd update interface counters slowly or something like this

    - by Korjavin Ivan
    I updated one of my FreeBSD boxes to 9-STABLE (a totally new installation) and installed net-snmp for monitoring. uname -r 9.1-PRERELEASE pkg_info net-snmp-5.7.1_7 Information for net-snmp-5.7.1_7: Comment: An extendable SNMP implementation .... cat /var/db/ports/net-snmp/options # This file is auto-generated by 'make config'. # Options for net-snmp-5.7.1_7 _OPTIONS_READ=net-snmp-5.7.1_7 _FILE_COMPLETE_OPTIONS_LIST= IPV6 MFD_REWRITES PERL PERL_EMBEDDED PYTHON DUMMY TKMIB DMALLOC MYSQL AX_SOCKONLY UNPRIVILEGED OPTIONS_FILE_UNSET+=IPV6 OPTIONS_FILE_UNSET+=MFD_REWRITES OPTIONS_FILE_SET+=PERL OPTIONS_FILE_SET+=PERL_EMBEDDED OPTIONS_FILE_UNSET+=PYTHON OPTIONS_FILE_SET+=DUMMY OPTIONS_FILE_UNSET+=TKMIB OPTIONS_FILE_SET+=DMALLOC OPTIONS_FILE_UNSET+=MYSQL OPTIONS_FILE_UNSET+=AX_SOCKONLY OPTIONS_FILE_UNSET+=UNPRIVILEGED I have about 500 VLANs on this machine and collect interface information through snmpd into two different tools, Zabbix and Cacti. Both of them plot graphs with blank gaps. I tried changing the polling time in Zabbix from 15 sec to 30, 60, 90, 120 and 10, and I still get blank gaps. snmpd.conf is empty - only access controls. This configuration worked fine on FreeBSD 8. What am I doing wrong? How can I fix these graphs? Update: changing the polling time and switching off one of the agents doesn't help. I looked at the Zabbix log (data received from snmpd); sorry for the Russian locale in the screenshot, just look at the numbers. They cannot be right: iftop showed the speed was about 90 Mbit/s, but snmpd returned the equivalent of 2 Mbit/s. I understand that snmpd doesn't return a speed, it returns just a counter, but how is that possible? Why 2 Mbit/s? I tried recompiling snmpd with and without 64-bit counters; in both variants the blank gaps are present. So I think my OS (FreeBSD) isn't updating the interface counters properly. I am still collecting a tcpdump to find the request/response, but there is too much noise in it. Update 2: I decoded the tcpdump capture and published it as a Google doc at gdocfile. The time differences look strange, as if Zabbix sometimes "forgets" to make a request and then makes two in a row. Update 3: I parsed the log from the command "while true; do netstat -bin -I vlan4008 >> /var/log/netstat; sleep 300; done", loaded it into Google Docs and added a formula for the speed: link. It looks like all the counters in the OS are fine. Now I think the problem is: 1. Zabbix requests twice in a row (and what about Cacti?) 2. snmpd uses Counter32
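
    A sketch for checking the Counter32-wrap theory directly on the box (the community string and interface index are placeholders); at roughly 90 Mbit/s a 32-bit octet counter wraps in well under ten minutes, while the 64-bit ifHC counters do not:

        snmpwalk -v2c -c COMMUNITY localhost IF-MIB::ifDescr | grep vlan4008    # find the ifIndex
        snmpget -v2c -c COMMUNITY localhost IF-MIB::ifInOctets.IFINDEX IF-MIB::ifHCInOctets.IFINDEX
    If the HC counters advance correctly, point Zabbix and Cacti at ifHCInOctets/ifHCOutOctets over SNMPv2c instead of the 32-bit counters.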

    Read the article

  • Lots of 408 Request Timed Out from same IPs

    - by GreatFire
    Web server: Nginx. Checking our log files, there are many log entries of connections that: take 59-61 seconds send an empty request (or at least none is logged) result in a 408 response (request timed out) do not contain any http_user_agent originate from a limited number of IPs We are monitoring average times to serve responses and this obviously inflates our statistics. Apart from that though, is this a problem? Any idea why it is occurring? Does it suggest that somebody is intentionally messing with us? What should we do?
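
    A quick way to size the problem, assuming the default "combined" log format (client IP is field 1, status is field 9):

        awk '$9 == 408 {print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head
    Slow clients, broken keep-alive proxies and some scanners all behave like this; if a handful of IPs dominate, rate-limit or block them, and consider lowering client_header_timeout and client_body_timeout so the 408s are cheaper to serve.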

    Read the article

  • Windows XP stay logged in

    - by VEC
    Is there a way to make Windows XP stay logged in even after the user logs off? Right now the PCs log in at start up and we're using WinOFF to shut down the computer after X minutes of inactivity. The problem is that WinOFF does not work when the user logs off and stays in the "Select user login" screen. I'm thinking a possible solution would be to make the computer log back in as the default user after Y minutes of inactivity. How can I make it so that Windows XP logs in automatically after the user logs off?

    Read the article

  • Confirm disk is broken when it passes all diagnostics

    - by Halfgaar
    I have a system with a potentially broken disk, but the disk passes all manner of diagnostics. I have been unable to confirm that the disk is broken. What are my options? I could just replace the disk, but because this situation is very similar to another more severe situation I have (long story), I'd like to actually make a proper diagnosis as opposed to randomly binning hardware. The issue and history is this: I had a Debian Linux PC (500 MHz P3) acting as router, nagios and munin. It crashed every couple of weeks. No logs or dmesg could be obtained (because it's an old Compaq that only boots when you configure it as keyboardless, making connecting a keyboard later, once it's booted, impossible). At the time, I just replaced the computer with another Compaq (P4 2.4 GHz) because I thought the hardware was faulty. However, it still crashed every couple of weeks. the difference is that on this computer, I can still SSH into it. It gives all kinds of errors on hda. I'd like to confirm that the disk is broken, but nothing I do confirms this: SMART error logs shows no errors. Normally when a disk starts acting up, SMART my pass, but it still records a read-error in the error log. SMART self-test (smartctl -t long /dev/sda) completes without errors. re-allocated sector count (a tell-tale parameter) has been 31 all its life, even when the disk was still in use in my desktop PC years ago, and it still is. The figure never changed. dd if=/dev/sda of=/dev/null bs=4096 passes with flying colors. What else can I do to assess the health of the drive? Again, this is not about making this router fully functional again, this is a disk forensic question, because it just so happens that I have another server that potentially has the same problem, and knowing the answer to this will possibly help me greatly. For the record, below are logs and such. This is the smartctl -a output: smartctl 5.40 2010-07-12 r3124 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: Seagate Barracuda 7200.7 and 7200.7 Plus family Device Model: ST3120026A Serial Number: 5JT1CLQM Firmware Version: 3.06 User Capacity: 120,034,123,776 bytes Device is: In smartctl database [for details use: -P show] ATA Version is: 6 ATA Standard is: ATA/ATAPI-6 T13 1410D revision 2 Local Time is: Mon Jul 1 21:18:33 2013 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled. Self-test execution status: ( 24) The self-test routine was aborted by the host. Total time to complete Offline data collection: ( 430) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. No General Purpose Logging support. Short self-test routine recommended polling time: ( 1) minutes. Extended self-test routine recommended polling time: ( 85) minutes. 
SMART Attributes Data Structure revision number: 10 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 050 046 006 Pre-fail Always - 47766662 3 Spin_Up_Time 0x0003 097 096 000 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 10 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 31 7 Seek_Error_Rate 0x000f 084 060 030 Pre-fail Always - 820305 9 Power_On_Hours 0x0032 048 048 000 Old_age Always - 46373 10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 605 194 Temperature_Celsius 0x0022 036 065 000 Old_age Always - 36 195 Hardware_ECC_Recovered 0x001a 050 046 000 Old_age Always - 47766662 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 200 196 000 Old_age Always - 6 200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0 202 Data_Address_Mark_Errs 0x0032 100 253 000 Old_age Always - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Aborted by host 80% 46361 - # 2 Extended offline Completed without error 00% 46358 - # 3 Short offline Completed without error 00% 12046 - # 4 Extended offline Completed without error 00% 10472 - # 5 Short offline Completed without error 00% 10471 - # 6 Short offline Completed without error 00% 10471 - # 7 Short offline Completed without error 00% 6770 - # 8 Extended offline Aborted by host 90% 5958 - # 9 Extended offline Aborted by host 90% 5951 - #10 Short offline Completed without error 00% 5024 - #11 Extended offline Aborted by host 80% 5024 - #12 Short offline Completed without error 00% 3697 - #13 Short offline Completed without error 00% 237 - #14 Short offline Completed without error 00% 145 - #15 Short offline Completed without error 00% 69 - #16 Extended offline Completed without error 00% 68 - #17 Short offline Completed without error 00% 66 - #18 Short offline Completed without error 00% 49 - #19 Short offline Completed without error 00% 29 - #20 Short offline Completed without error 00% 29 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. And this is the dmesg error when it has crashed (which repeats for a bunch of different sectors): [1755091.211136] sd 0:0:0:0: [sda] Unhandled error code [1755091.211144] sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [1755091.211151] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 08 fe ad 38 00 00 08 00 [1755091.211166] end_request: I/O error, dev sda, sector 150908216
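
    One more read-only pass worth trying; note that DID_BAD_TARGET in the dmesg output is reported by the host/controller side, so cabling and the IDE controller are worth ruling out as well as the platter surface:

        sudo badblocks -sv /dev/sda          # read-only surface scan (non-destructive by default)
        sudo smartctl -l error /dev/sda      # re-check the drive's own error log afterwards
        dmesg | grep -iE 'ata|sda' | tail -n 50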

    Read the article

  • Misunderstanding Scope in JavaScript?

    - by Jeff
    I've seen a few other developers talk about binding scope in JavaScript but it has always seemed to me like this is an inaccurate phrase. The Function.prototype.call and Function.prototype.apply don't pass scope around between two methods; they change the caller of the function - two very different things. For example: function outer() { var item = { foo: 'foo' }; var bar = 'bar'; inner.apply(item, null); } function inner() { console.log(this.foo); //foo console.log(bar); //ReferenceError: bar is not defined } If the scope of outer was really passed into inner, I would expect that inner would be able to access bar, but it can't. bar was in scope in outer and it is out of scope in inner. Hence, the scope wasn't passed. Even the Mozilla docs don't mention anything about passing scope: Calls a function with a given this value and arguments provided as an array. Am I misunderstanding scope or specifically scope as it applies to JavaScript? Or is it these other developers that are misunderstanding it?

    Read the article

  • How can I get a scheduled task to run for a user regardless of which computer the user is logged onto?

    - by Ernst
    I've got a scheduled task that needs to run for a user at a specific time. However, the user sometimes logs onto one machine, the next day onto another, then next week onto yet another. At some point during the day, the user might have to log onto another machine. How do I get the scheduled task to run regardless of which computer the user is using? I could of course create the task on all computers, but that seems a bit overkill. Running a script on log on (or a group policy) to create the task doesn't seem like a good method either. Any ideas? Basically I want the scheduled task to be defined for the user instead of for the computer. If in the end I need to choose between the two options above, which is best?

    Read the article

  • Sporadic '.Xauthority not writable, changes will be ignored' going from OSX -> Linux

    - by Kamil Kisiel
    Every now and then when users SSH from their OS X (Snow Leopard) workstation to one of our Linux hosts they receive the message: /usr/bin/xauth: ~/.Xauthority not writable, changes will be ignored Of course, their X forwarded applications will not work at this point. However, if they log out and log right back in again they do not get the message and everything works as expected. On their Mac they get their home directory via AFP. The Linux machines get it via NFS. Any ideas on what could be going on here?
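
    A quick check sketch; NFS-mounted home directories sometimes end up with stale xauth lock files, which is one common cause of this exact message (whether that is the cause here is an assumption to verify):

        ls -la ~/.Xauthority*
        rm -f ~/.Xauthority-c ~/.Xauthority-l    # remove stale xauth lock files if they are present
    If the lock files keep reappearing, compare file ownership on the NFS export and check for clock skew between the NFS server and the Linux hosts.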

    Read the article

  • MySQL and User level logging

    - by Adraen
    I have been looking at logging only certain users' activity in MySQL. I found that logging can be enabled or disabled for all users, but one of the services using the DB does a lot of queries, and therefore I would like to log only specific users. Google told me that a flag can be SET to enable or disable logging; however, I cannot modify the service's DB connection code, and asking every single user to enable logging before they do anything might not be as reliable as I want. So, do you know if there is any way to log only a given set of users' queries? Thanks!

    Read the article

  • Remote servlet by mod_jk ?

    - by marioosh.net
    I have a remote servlet, for example h*tps://[ip_address]/servlet (h*tps://[ip_address]/ is the Tomcat main page), that I need to expose through a local Apache HTTPd server. My mod_jk configuration looks like the one below, but it doesn't work. Something does work, because when I type h*tps://localhost/console in a browser I get the Tomcat error page "HTTP Status 404 - /console/". JkWorkersFile /etc/apache2/workers.properties JkLogFile /var/log/apache2/mod_jk.log JkLogLevel info JkMount /console/* ajp13 workers.properties: worker.ajp13.type=ajp13 worker.ajp13.host=[ip_address] worker.ajp13.port=8009 The remote Tomcat is configured correctly, I think: it listens on port 8009 and the servlet h*tps://[ip_address]/servlet works too. <Connector port="8009" protocol="AJP/1.3" redirectPort="443" /> Can anybody help?

    Read the article

  • Ubuntu reboot suddenly

    - by Gladiator
    It's the second day I've had this issue, and Ubuntu still reboots suddenly. Nothing significant in syslog. salim@SalimPC:~$ tail -f /var/log/syslog Nov 7 12:34:53 SalimPC dbus[873]: [system] Successfully activated service 'com.ubuntu.SystemService' SalimPC dbus[873]: [system] Activating service name='org.freedesktop.PackageKit' (using servicehelper) SalimPC AptDaemon: INFO: Initializing daemon SalimPC AptDaemon.PackageKit: INFO: Initializing PackageKit compat layer SalimPC dbus[873]: [system] Successfully activated service 'org.freedesktop.PackageKit' SalimPC AptDaemon.PackageKit: INFO: Initializing PackageKit transaction SalimPC AptDaemon.Worker: INFO: Simulating trans:/org/debian/apt/transaction/6933b4b977d944fa8714898c01bfeae4 SalimPC AptDaemon.Worker: INFO: Processing transaction org/debian/apt/transaction/6933b4b977d944fa8714898c01bfeae4 SalimPC AptDaemon.PackageKit: INFO: Get updates() Nov 7 12:34:58 SalimPC AptDaemon.Worker: INFO: Finished transaction /org/debian/apt/transaction/6933b4b977d944fa8714898c01bfeae4 ---------------------------------Previous post------------------ Hi, my Ubuntu has rebooted suddenly (2 times in one hour so far). After login, a crash was indicated in /usr/sbin/ntop. Below are the syslog and a screenshot of the crash. salim@SalimPC:~$ tail /var/log/syslog Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (9642->8232) Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (8274->8232) Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (11010->8232) Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (17850->8232) Nov 6 18:25:38 SalimPC ntop[1630]: **WARNING** packet truncated (8274->8232) Nov 6 18:25:39 ntop[1630]: last message repeated 2 times Nov 6 18:25:39 SalimPC ntop[1630]: **WARNING** packet truncated (16482->8232) Nov 6 18:25:40 SalimPC ntop[1630]: **WARNING** packet truncated (11010->8232) Nov 6 18:25:43 SalimPC ntop[3075]: THREADMGMT[t3063068672]: ntop RUNSTATE: PREINIT(1) Nov 6 18:25:43
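
    A first-pass sketch for an unexplained reboot (the syslog excerpt above stops before the reboot, so look at the kernel log and the boot history instead):

        last -x reboot shutdown | head        # clean shutdowns or hard resets?
        grep -iE 'panic|oops|mce|error' /var/log/kern.log | tail -n 50
        sensors                               # rule out overheating (needs the lm-sensors package)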

    Read the article

  • SSH closing by itself - root works fine

    - by Antti
    I'm trying to connect to a server but if i use any other user than root the connection closes itself after a successful login: XXXXXXX:~ user$ ssh -v [email protected] OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009 debug1: Reading configuration data /etc/ssh_config debug1: Connecting to XXXXXXX.XXXXXX.XXX [xxx.xxx.xxx.xxx] port 22. debug1: Connection established. debug1: identity file /Users/user/.ssh/identity type -1 debug1: identity file /Users/user/.ssh/id_rsa type -1 debug1: identity file /Users/user/.ssh/id_dsa type 2 debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3 debug1: match: OpenSSH_4.3 pat OpenSSH_4* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host 'XXXXXXX.XXXXXX.XXX' is known and matches the RSA host key. debug1: Found key in /Users/user/.ssh/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: Next authentication method: publickey debug1: Offering public key: /Users/user/.ssh/woo_openssh debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: Offering public key: /Users/user/.ssh/sidlee.dsa debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: Trying private key: /Users/user/.ssh/identity debug1: Trying private key: /Users/user/.ssh/id_rsa debug1: Offering public key: /Users/user/.ssh/id_dsa debug1: Authentications that can continue: publickey,gssapi-with-mic,password debug1: Next authentication method: password [email protected]'s password: debug1: Authentication succeeded (password). debug1: channel 0: new [client-session] debug1: Entering interactive session. Last login: Mon Mar 29 01:41:51 2010 from 193.67.179.2 debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: channel 0: free: client-session, nchannels 1 Connection to XXXXXXX.XXXXXX.XXX closed. Transferred: sent 2976, received 2136 bytes, in 0.5 seconds Bytes per second: sent 5892.2, received 4229.1 debug1: Exit status 1 If i log in as root the exact same way it works as expected. I've added the users i want to log in with to a group (sshusers) and added that group to /etc/sshd_config: AllowGroups sshusers I'm not sure what to try next as i don't get a clear error anywhere. I would like to enable specific accounts to log in so that i can disable root. This is a GridServer/Media Temple (CentOS).
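
    A diagnostic sketch (user and host below are placeholders for the redacted account): run the login shell verbosely to see what exits with status 1, and check the account itself from the working root session:

        ssh -t user@host /bin/bash -xl 2>&1 | tail -n 40
        grep '^user:' /etc/passwd            # does the account have a valid login shell?
        ls -l /etc/nologin 2>/dev/null       # pam_nologin blocks all non-root logins if this file exists
    Also check the user's ~/.bash_profile and ~/.bashrc for anything that calls exit, since the debug output shows "Exit status 1" immediately after a successful authentication.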

    Read the article

  • Error building main Guest Additions Module while installing VirtualBox guest additions

    - by Praveen Sripati
    I have installed Ubuntu 12.10 Guest on Ubuntu 12.04 Host using VirtualBox. Everything is from repository and no direct install. When I install the guest additions, the below error is shown in the console. Before running the command I mapped the VBoxGuestAdditions.iso in the Guest. The closest I could get is this article which says to install the latest version of VirtualBox (not the one from the repository). Is there any alternate solution? sudo ./VBoxLinuxAdditions.run Verifying archive integrity... All good. Uncompressing VirtualBox 4.1.12 Guest Additions for Linux......... VirtualBox Guest Additions installer Removing installed version 4.1.12 of VirtualBox Guest Additions... Removing existing VirtualBox DKMS kernel modules ...done. Removing existing VirtualBox non-DKMS kernel modules ...done. Building the VirtualBox Guest Additions kernel modules The headers for the current running kernel were not found. If the following module compilation fails then this could be the reason. Building the main Guest Additions module ...fail! (Look at /var/log/vboxadd-install.log to find out what went wrong) Doing non-kernel setup of the Guest Additions ...done. Installing the Window System drivers Warning: unknown version of the X Window System installed. Not installing X Window System drivers. Installing modules ...done. Installing graphics libraries and desktop services components ...done.
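
    The installer itself points at the likely cause ("The headers for the current running kernel were not found"); a minimal fix sketch for the 12.10 guest:

        sudo apt-get install build-essential dkms linux-headers-$(uname -r)
        sudo ./VBoxLinuxAdditions.run
        # if it still fails, /var/log/vboxadd-install.log contains the compiler output
    Mixing the 4.1.12 additions ISO with a newer guest kernel is also a known source of build failures, so a newer VirtualBox/additions pair, as the linked article suggests, is worth considering.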

    Read the article

  • Why "object reference not set to an instance of an object" doesn't tell us which object?

    - by Saeed Neamati
    We're launching a system, and we sometimes get the famous NullReferenceException with the message "Object reference not set to an instance of an object". However, in a method where we have almost 20 objects, a log which says that some object is null is really of no use at all. It's like telling the security agent of a seminar that one man among 100 attendees is a terrorist: that's of no use at all, and you need more information if you want to identify which man is the threat. Likewise, if we want to remove the bug, we need to know which object is null. Something has been on my mind for several months: why doesn't .NET give us the name, or at least the type, of the object reference that is null? Can't it determine the type from reflection or some other source? Also, what are the best practices for finding out which object is null? Should we always test objects for null manually in these contexts and log the result? Is there a better way?

    Read the article

  • What ssh command would I use to set up "backwards listening"

    - by Nathan
    Machine A is behind a firewall. I have physical access to it, but I want to log into it remotely, and I do not have access to the firewall settings. Machine B is remote, and not behind any firewall (it's my Linode). Machine C is the mobile device I'm going to attempt to ssh into A from. Is there an ssh command that I can run from machine A that connects to machine B and stays open, that will allow me to log into A from C, via B? From the manual I'd guess it would be to run the following on A ssh -R *:9999:localhost:22 me@B and then run this on C ssh me@B -p 9999 but the previous command reports "Connection refused."
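
    A sketch that avoids the need for GatewayPorts on B by binding the reverse tunnel to B's loopback and hopping through B:

        # on A: keep a reverse tunnel open to B
        ssh -N -R 9999:localhost:22 me@B
        # on C: log into B, then through the tunnel to A
        ssh -t me@B ssh -p 9999 me@localhost
    The original "*:9999" binding only listens on B's public interfaces if "GatewayPorts yes" (or "clientspecified") is set in B's sshd_config, which is the usual reason the direct connection from C is refused.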

    Read the article
