Search Results

Search found 43406 results on 1737 pages for 'drag and drop files'.


  • Issues with Server 2012 using DFSR running on Hyper-V 2012

    - by Bryan
    We have a number of Server 2012 systems, all of which run virtualised on Hyper-V 2012 Server. We are having problems with two such virtual instances, both of which are used as file servers, whereby they occasionally stop responding to requests to serve files to clients. After logging on to the server, attempts to shut it down gracefully fail (no error; it just fails to acknowledge the shutdown request). Recovery is a case of power cycling the server(s) from the Hyper-V console.

    These two servers don't serve a large number of users (one serves no more than 6 users, the other around 20). They are in the same domain, but on different physical hardware (and at different sites), and they don't lock up at the same time. They both use DFSR to replicate a fairly large amount of data between themselves (200GB) over ADSL connections. This is working fine, and we have been using DFSR for this on the previous two generations of server OS we have used (Server 2008 R2 and Server 2003 - both of which were physical installs, however).

    Today, when one of the servers crashed, I noticed an entry in the event log which looked similar to the following:

      Log Name: Application
      Source: ESENT
      Date: 27/11/2012 10:25:55
      Event ID: 533
      Task Category: General
      Level: Warning
      Keywords: Classic
      User: N/A
      Computer: HAL-FS-01.example.com
      Description: DFSRs (1500) \\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\dfsr.db:
      A request to write to the file "\\.\E:\System Volume Information\DFSR\database_C8CC_101_CC00_EC0E\fsr.log"
      at offset 4423680 (0x0000000000438000) for 4096 (0x00001000) bytes has not completed for 36 second(s).
      This problem is likely due to faulty hardware. Please contact your hardware vendor for further
      assistance diagnosing the problem.

    When the server started up again, I went to find the event log entry to investigate further and found that it was no longer there (I assume it was in memory but failed to write to disk before the server was powered off, for the reason mentioned in the message). I found the above message by searching back further in the event log.

    Both of these virtual servers have their E: volumes fully allocated as opposed to dynamically expanding, and there are no other issues on any of the other virtual servers (which include Server 2012, Server 2008 R2 and Ubuntu 12.04 x64). There are no signs of IO, memory or CPU starvation on the host systems. I've used performance counters on the affected virtual servers to monitor memory usage (including non-paged pool usage), as well as CPU and network utilisation, and none of these show any signs of trouble when the issue arises.

    I would have thought our configuration isn't that uncommon, so I'm wondering if anyone else has seen this, and managed to resolve the problem?
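    A quick way to see whether more of these ESENT write-latency warnings accumulate before a hang is to query the Application log from an elevated prompt. A minimal sketch using the built-in wevtutil tool (event ID and source taken from the entry above):

      wevtutil qe Application /q:"*[System[Provider[@Name='ESENT'] and (EventID=533)]]" /f:text /c:20 /rd:true

    The /rd:true flag returns the newest events first, so recent storage stalls show up immediately; a run of these warnings leading up to each lockup would point firmly at virtual-disk I/O latency rather than DFSR itself.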

    Read the article

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David
    We have our servers at the server farm in a domain. Let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away. Let's call it CORP. We have a large central storage unit (unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server.

    The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc.

    This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect" and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked it, and tried multiple known good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out. I even tried going to the virtual directory's Directory Security tab and, under Authentication and Access Control, using the same LIVE domain username for the anonymous access credential. No luck.

    I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to load static images, css, and js files. If anyone has some bright ideas I would be most appreciative!
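    One way to confirm whether the LIVE\MediaUser credentials are usable at all from the CORP machine is to map the share manually from a command prompt; if this fails with the same 1326 logon error, the problem is cross-domain authentication rather than IIS. A minimal sketch (the share path here is a placeholder for the real UNC path):

      net use Z: \\storageserver\media /user:LIVE\MediaUser *
      dir Z:\
      net use Z: /delete

    The * makes net use prompt for the password instead of putting it on the command line. If the mapping works but IIS still returns 500.16, the difference is likely in how the worker process (Network Service) performs the UNC logon versus your interactive session.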

    Read the article

  • Supervisord appears to be running, but monitored programs aren't launched

    - by Brad Montgomery
    I've got supervisord 3.0a8 installed from the system package on Ubuntu 10.04 (64-bit). The supervisor service appears to be running, but it's not launching the configured programs. Interestingly enough, this exact configuration is running on another system, and is working as expected. The main config file looks like this:

      ; /etc/supervisor/supervisord.conf
      [unix_http_server]
      chmod=0700
      file=/var/run/supervisor.sock

      [supervisord]
      logfile=/var/log/supervisor/supervisord.log
      childlogdir=/var/log/supervisor
      pidfile=/var/run/supervisord.pid

      [rpcinterface:supervisor]
      supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

      [supervisorctl]
      serverurl=unix:///var/run/supervisor.sock

      [include]
      files = /etc/supervisor/conf.d/*.conf

    A sample program config looks like this:

      ; /etc/supervisor/conf.d/sample.conf
      [program:sample]
      directory=/opt/sample
      command=/opt/sample/run.sh

    Where /opt/sample/run.sh is:

      #!/bin/bash
      while true; do
          T=`date`
          echo "[$T] Running!" >> /var/log/sample.log
          sleep 1
      done

    And here's some additional information regarding the running instance of supervisord:

      root@myhost:~# supervisorctl version
      3.0a8
      root@myhost:~# which supervisorctl
      /usr/bin/supervisorctl
      root@myhost:~# which supervisord
      /usr/bin/supervisord
      root@myhost:~# supervisorctl status
      # NOTE that there's no output!
      root@myhost:~# supervisorctl avail
      root@myhost:~# service supervisor status
      is running
      root@myhost:~# ps aux | grep supervisor
      root  21740  0.1  0.4  40772 10056 ?      Ss  11:28  0:00 /usr/bin/python /usr/bin/supervisord
      root  21749  0.0  0.0   7624   932 pts/2  S+  11:28  0:00 grep --color=auto supervisor
      root@myhost:~# cat /var/log/supervisor/supervisord.log
      2012-04-26 11:28:22,483 CRIT Supervisor running as root (no user in config file)
      2012-04-26 11:28:22,536 INFO RPC interface 'supervisor' initialized
      2012-04-26 11:28:22,536 WARN cElementTree not installed, using slower XML parser for XML-RPC
      2012-04-26 11:28:22,536 CRIT Server 'unix_http_server' running without any HTTP authentication checking
      2012-04-26 11:28:22,539 INFO daemonizing the supervisord process
      2012-04-26 11:28:22,539 INFO supervisord started with pid 21740
      root@myhost:~# ll /etc/supervisor/conf.d/
      total 28
      drwxr-xr-x 2 root root 4096 2012-04-26 11:31 ./
      drwxr-xr-x 3 root root 4096 2012-04-25 18:38 ../
      -rw-r--r-- 1 root root   66 2012-04-26 11:31 sample.conf
      root@myhost:~# ll /opt/sample/
      total 12
      drwxr-xr-x 2 root root 4096 2012-04-26 11:32 ./
      drwxr-xr-x 4 root root 4096 2012-04-26 11:31 ../
      -rwxr-xr-x 1 root root   97 2012-04-26 11:32 run.sh*
      root@myhost:~# python
      Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
      [GCC 4.4.3] on linux2
      Type "help", "copyright", "credits" or "license" for more information.
      >>>

    Any help is greatly appreciated!
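    One thing worth ruling out: if sample.conf was created after supervisord started, supervisord will not pick it up on its own; new configs under the include path have to be loaded explicitly. A quick check using supervisorctl subcommands available in the 3.0a series (avail clearly works above, so its siblings should too):

      supervisorctl reread    # report any new/changed configs under the include path
      supervisorctl update    # apply them and start the new program groups
      supervisorctl status    # 'sample' should now appear here

    If reread reports nothing, supervisord may be reading a different config file than you expect; starting it with an explicit path (supervisord -c /etc/supervisor/supervisord.conf) removes that ambiguity.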

    Read the article

  • can not connect through SCP, but SSH connections works

    - by Joe Cabezas
    I am trying to connect to my server to transfer files using scp:

      $ scp -v -r -P <port> <user>@<host>:~/dir/ dir/

    This is the output:

      OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
      debug1: Reading configuration data /Users/joe/.ssh/config
      debug1: Reading configuration data /etc/ssh_config
      debug1: Connecting to <host> [<host>] port <port>.
      debug1: Connection established.
      debug1: identity file /Users/joe/.ssh/identity type -1
      debug1: identity file /Users/joe/.ssh/id_rsa type -1
      debug1: identity file /Users/joe/.ssh/id_dsa type -1
      ssh_exchange_identification: Connection closed by remote host

    But connecting via SSH works fine:

      $ ssh <user>@<host> -p <port>
      <user>@<host>'s password:
      <user>@<host>:~$

    OK, so what can be wrong here? My /etc/ssh/sshd_config file on the host is:

      # Package generated configuration file
      # See the sshd_config(5) manpage for details

      # What ports, IPs and protocols we listen for
      Port <port>
      # Use these options to restrict which interfaces/protocols sshd will bind to
      #ListenAddress ::
      #ListenAddress 0.0.0.0
      Protocol 2
      # HostKeys for protocol version 2
      HostKey /etc/ssh/ssh_host_rsa_key
      HostKey /etc/ssh/ssh_host_dsa_key
      HostKey /etc/ssh/ssh_host_ecdsa_key
      #Privilege Separation is turned on for security
      UsePrivilegeSeparation yes

      # Lifetime and size of ephemeral version 1 server key
      KeyRegenerationInterval 3600
      ServerKeyBits 768

      # Logging
      SyslogFacility AUTH
      LogLevel INFO

      # Authentication:
      LoginGraceTime 120
      PermitRootLogin yes
      StrictModes yes

      RSAAuthentication yes
      PubkeyAuthentication no
      #AuthorizedKeysFile %h/.ssh/authorized_keys

      # Don't read the user's ~/.rhosts and ~/.shosts files
      IgnoreRhosts yes
      # For this to work you will also need host keys in /etc/ssh_known_hosts
      RhostsRSAAuthentication no
      # similar for protocol version 2
      HostbasedAuthentication no
      # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
      #IgnoreUserKnownHosts yes

      # To enable empty passwords, change to yes (NOT RECOMMENDED)
      PermitEmptyPasswords no

      # Change to yes to enable challenge-response passwords (beware issues with
      # some PAM modules and threads)
      ChallengeResponseAuthentication no

      # Change to no to disable tunnelled clear text passwords
      #PasswordAuthentication yes

      # Kerberos options
      #KerberosAuthentication no
      #KerberosGetAFSToken no
      #KerberosOrLocalPasswd yes
      #KerberosTicketCleanup yes

      # GSSAPI options
      #GSSAPIAuthentication no
      #GSSAPICleanupCredentials yes

      X11Forwarding yes
      X11DisplayOffset 10
      PrintMotd no
      PrintLastLog yes
      TCPKeepAlive yes
      #UseLogin no

      #MaxStartups 10:30:60
      #Banner /etc/issue.net

      # Allow client to pass locale environment variables
      AcceptEnv LANG LC_*

      Subsystem sftp /usr/lib/openssh/sftp-server

      # Set this to 'yes' to enable PAM authentication, account processing,
      # and session processing. If this is enabled, PAM authentication will
      # be allowed through the ChallengeResponseAuthentication and
      # PasswordAuthentication. Depending on your PAM configuration,
      # PAM authentication via ChallengeResponseAuthentication may bypass
      # the setting of "PermitRootLogin without-password".
      # If you just want the PAM account and session checks to run without
      # PAM authentication, then enable this but set PasswordAuthentication
      # and ChallengeResponseAuthentication to 'no'.
      UsePAM yes
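    Since ssh works but scp is rejected at ssh_exchange_identification, it can help to watch what the server does at the moment the scp connection arrives. Running a second sshd in debug mode on a spare port is a common, low-risk way to do that (port 2222 here is arbitrary):

      # on the server
      sudo /usr/sbin/sshd -d -p 2222

      # on the client
      scp -v -P 2222 <user>@<host>:~/dir/file .

    The -d instance stays in the foreground and prints exactly why it drops the connection. TCP wrappers (/etc/hosts.deny), fail2ban-style blockers, and MaxStartups connection limits are frequent culprits for this exact symptom, since they close the connection before the protocol banner exchange.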

    Read the article

  • Effect of HOME on libreoffice to convert to pdf as non-root user

    - by user1032531
    I installed libreoffice-headless and can convert documents when logged on as root. I then tried doing so as another user; it didn't show an error, but it didn't convert the file either. I then found that if I get rid of HOME=/tmp/ayb, it works for the other user.

    Doesn't HOME=/tmp/ayb just allow files to default to this directory if not specified? (Sorry, I tried to search "Linux HOME", but as you probably expect, received a bunch of non-relevant results.) If not, what is the purpose of specifying HOME? And why does setting HOME prevent converting as a non-root user? Note that /tmp and /tmp/ayb are both 0777. Thank you.

      [root@desktop ~]# yum install libreoffice-headless
      [root@desktop ~]# yum install libreoffice-writer
      [root@desktop ~]# ls -l
      total 48
      -rwxrwxrwx. 1 NotionCommotion NotionCommotion 48128 Jul 30 02:38 document_34.doc
      [root@desktop ~]# HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
      convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
      [root@desktop ~]# rm d*.pdf
      rm: remove regular file `document_34.pdf'? y
      [root@desktop ~]# /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
      convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
      [root@desktop ~]# rm d*.pdf
      rm: remove regular file `document_34.pdf'? y
      [root@desktop ~]# su NotionCommotion
      sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
      sh-4.1$ rm d*.pdf
      rm: cannot remove `d*.pdf': No such file or directory
      sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
      sh-4.1$ rm d*.pdf
      rm: cannot remove `d*.pdf': No such file or directory
      sh-4.1$ exit
      exit
      [root@desktop ~]# su NotionCommotion
      sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
      convert /tmp/ayb/document_34.doc -> /tmp/ayb/document_34.pdf using writer_pdf_Export
      sh-4.1$ rm d*.pdf
      sh-4.1$ HOME=/tmp/ayb; /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
      sh-4.1$ rm d*.pdf
      rm: cannot remove `d*.pdf': No such file or directory
      sh-4.1$ /usr/bin/libreoffice --headless -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc
      sh-4.1$ rm d*.pdf
      rm: cannot remove `d*.pdf': No such file or directory
      sh-4.1$
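    LibreOffice creates its user profile (settings, lock files) under $HOME, so a headless run whose HOME points at a directory already containing a root-owned or locked profile can silently do nothing. One way to take the profile out of the equation is to give each run its own throwaway profile directory; -env:UserInstallation is a standard LibreOffice option, and the path below is just an example:

      /usr/bin/libreoffice --headless \
          -env:UserInstallation=file:///tmp/lo_profile_$USER \
          -convert-to pdf --outdir /tmp/ayb /tmp/ayb/document_34.doc

    If that converts reliably for the non-root user, the underlying problem is profile/lock ownership under the shared HOME (e.g. a .config/libreoffice tree created by the earlier root runs), not the conversion itself.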

    Read the article

  • doubleTwist is an iTunes Alternative that Supports Several Devices

    - by Mysticgeek
    There are a lot of iTunes users out there, but unfortunately you can’t use it with all of your portable devices. Today we take a look at doubleTwist, which allows you to sync your media with a multitude of portable devices and easily share it as well.

    Note: You can run doubleTwist on Windows or Mac; here we take a look at the Windows version.

    Install & Setup doubleTwist

    Download and install doubleTwist using the defaults in the wizard. Installation takes several moments and you’ll see the progress while it finishes up. After installation is complete, sign up for an account if you don’t already have one; if you do have an account you can log in right away. Enter your username, email address, and password, then click Sign Up. You’ll get a confirmation email and need to activate the account before you can sign in. Once you’re all signed up, launch doubleTwist and you’ll be ready to start using it.

    doubleTwist Music

    The default music store is the Amazon MP3 store, which might appeal to those of you who are tired of the iTunes music store. A lot of times the music is cheaper and available at higher bit rates. You can start searching for music in the Amazon Music Store and previewing songs; to purchase anything, though, you will need to sign into your Amazon account. Under Playlists it allows you to import your playlists from iTunes and Windows Media Player, which is a handy feature if you don’t want to set them up again. Of course you can play your songs through the music player on your desktop.

    Devices

    One of the coolest things about doubleTwist is that it supports a lot of different portable media devices, including iPod, BlackBerry, Windows Mobile, Android, PSP, smartphones, and much more. Unfortunately for Zune users, there isn’t any support for the Zune or Zune HD yet. Here we have a Creative Zen attached and can sync songs, pictures, and podcasts; an HTC-S620 smartphone running Windows Mobile works as well. Even a simple USB drive will be recognized, and you can transfer your media to it too.

    Podcasts

    Finding your favorite audio and video podcasts is easy with the search feature, and you can manage and subscribe to podcasts in the subscriptions section. You can watch video podcasts directly in doubleTwist.

    Sharing Media

    You can also share digital media with your friends or add it to Flickr and YouTube. You can send any pictures, videos, or music in your library to other people by dragging it over, emailing users individually, or accessing contacts from your Gmail and Yahoo accounts. There is a limit to how much of a video podcast you can send - only the first 10 minutes. The person you send it to will get a link in their email that points to your My Feed page on the doubleTwist site, where they can access the media you sent; in this example it’s a video podcast, but you can share any media.

    Other Features

    Under My Profile you can change your avatar and personal information. In Preferences you can choose where media is stored, its startup actions, and podcast subscriptions, and manage device syncing.

    Conclusion

    It’s still in beta, so expect some bugs, but overall doubleTwist is a solid media player that is easy to use with a clean interface. It’s simple and doesn’t try to do too much, so it’s fairly easy on system resources. The main annoyance is that it tries to catalog all of your media out of the box, which may be alright for users with smaller media collections, but very irritating to advanced users with large ones. Also there is currently no support for the Zune, but according to their forums, it’s on the way.

    At the time of this writing it’s in public beta and can be downloaded for XP, Vista, Windows 7 (32 & 64-bit), and Mac OS X. If you’re looking for an iTunes alternative that works with several different portable devices, you might want to give doubleTwist a try.

    Download doubleTwist Public Beta

    See If Your Media Device is Supported by doubleTwist

    Read the article

  • ORA-4030 Troubleshooting

    - by [email protected]
    QUICKLINK: Note 399497.1 FAQ ORA-4030
    Note 1088087.1: ORA-4030 Diagnostic Tools [Video]

    Have you observed an ORA-4030 error reported in your alert log? ORA-4030 errors are raised when memory or resources are requested from the Operating System and the Operating System is unable to provide them. The arguments included with the ORA-4030 are often important to narrowing down the problem. For more specifics on the ORA-4030 error and scenarios that lead to this problem, see Note 399497.1 FAQ ORA-4030.

    Looking for the best way to diagnose? There are several available diagnostic tools (error tracing, 11g Diagnosability, OCM, process memory guides, RDA, OSW, diagnostic scripts) that collectively can prove powerful for identifying the cause of the ORA-4030.

    Error Tracing

    The ORA-4030 error usually occurs on the client workstation, and for this reason a trace file and alert log entry may not have been generated on the server side. It may be necessary to add additional tracing events to get initial diagnostics on the problem. To set up tracing to trap the ORA-4030, on the server use the following in SQL*Plus:

      alter system set events '4030 trace name heapdump level 536870917;name errorstack level 3';

    Once the error reoccurs with the event set, you can turn off tracing using the following command in SQL*Plus:

      alter system set events '4030 trace name context off; name context off';

    NOTE: See more diagnostics information to collect in Note 399497.1.

    11g Diagnosability

    Starting with Oracle Database 11g Release 1, the Diagnosability infrastructure was introduced, which places traces and core files into a location controlled by the DIAGNOSTIC_DEST initialization parameter when an incident, such as an ORA-4030, occurs. For earlier versions, the trace file will be written to either USER_DUMP_DEST (if the error was caught in a user process) or BACKGROUND_DUMP_DEST (if the error was caught in a background process like PMON or SMON). The trace file may contain vital information about what led to the error condition.

    Note 443529.1 11g Quick Steps to Package and Send Critical Error Diagnostic Information to Support [Video]

    Oracle Configuration Manager (OCM)

    Oracle Configuration Manager (OCM) works with My Oracle Support to enable a proactive support capability that helps you organize, collect and manage your Oracle configurations.

    Oracle Configuration Manager Quick Start Guide
    Note 548815.1: My Oracle Support Configuration Management FAQ
    Note 250434.1: BULLETIN: Learn More About My Oracle Support Configuration Manager

    General Process Memory Guides

    An ORA-4030 indicates a limit has been reached with respect to the Oracle process private memory allocation. Each Operating System will handle memory allocations with Oracle slightly differently.

    Solaris    Note 163763.1
    Linux      Note 341782.1
    IBM AIX    Notes 166491.1 and 123754.1
    HP         Note 166490.1
    Windows    Note 225349.1, Note 373602.1, Note 231159.1, Note 269495.1, Note 762031.1
    Generic    Note 169706.1

    RDA

    The RDA report will show more detailed information about the database and server configuration.

    Note 414966.1 RDA Documentation Index
    Download RDA -- refer to Note 314422.1 Remote Diagnostic Agent (RDA) 4 - Getting Started

    OS Watcher (OSW)

    This tool is designed to gather Operating System side statistics to compare with the findings from the database. This is a key tool in cases where memory usage is higher than expected on the server while ORA-4030 errors are not currently occurring. Reference more details on setup and usage in Note 301137.1 OS Watcher User Guide.

    Diagnostic Scripts

    Refer to Note 1088087.1: ORA-4030 Diagnostic Tools [Video]

    Common Causes/Solutions

    The ORA-4030 can occur for a variety of reasons. Some common causes are:

    * OS memory limit reached, such as physical memory and/or swap/virtual paging. For instance, IBM AIX can experience ORA-4030 issues related to swap scenarios; see Note 740603.1 10.2.0.4 not using large pages on AIX for more on that problem. Also reference Note 188149.1 for pointers on 10g and stack size issues.
    * OS limits reached (kernel or user shell limits) that limit overall, user-level or process-level memory.
    * OS limit on PGA memory size due to SGA attach address. Reference: Note 1028623.6 SOLARIS How to Relocate the SGA.
    * Oracle internal limit on functionality like PL/SQL varrays or bulk collections. ORA-4030 errors will include arguments like "pl/sql vc2", "pmucalm coll", "pmuccst: adt/re". See Coding Pointers for pointers on application design to get around these issues.
    * Application design causing limits to be reached.
    * Bug - space leaks, heap leaks.

    ***For reference to the content in this blog, refer to Note 1088267.1 Master Note for Diagnosing ORA-4030
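    When chasing process-memory growth ahead of an ORA-4030, it can also help to watch PGA consumption from inside the database. A minimal sketch using the standard v$process and v$pgastat views:

      SELECT pid, spid, pga_used_mem, pga_alloc_mem, pga_max_mem
        FROM v$process
       ORDER BY pga_alloc_mem DESC;

      SELECT * FROM v$pgastat;

    The first query shows which server processes are holding the most private memory (spid can be matched against OS Watcher's ps output); the second summarizes instance-wide PGA usage against pga_aggregate_target.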

    Read the article

  • Moving windows-2003 hdd into virtual machine - with HDD shrink

    - by jm666
    Before you vote to close as an exact duplicate, please read the full question. I have already read: Can I make a virtual machine out of a Windows XP physical machine?, Disk2vhd, convert my PC to Hyper-V Virtual Machine, Creating a Windows Virtual PC image from a Physical machine, physical machine to virtual machine and place into VirtualBox, BSOD trying to migrate Windows XP from a physical to a virtual machine, http://en.wikipedia.org/wiki/Physical-to-Virtual, and all other similar questions here, plus several external sites. Unfortunately, I didn't find an answer to my problem.

    I have a physical machine with a 500GB HDD, on which is installed an old Windows 2003 server with one server application. The application, like the Windows install itself, is too old: there is no support for it today, I don't have the installation media, and so on. On the HDD only approx. 100MB is used (maybe less once I delete all unnecessary files). I want to convert the machine into VirtualBox, and VirtualBox should then run on the same machine. Is it possible to do this with the following steps?

    1. Attach another HDD (via USB or internally).
    2. Boot a live Linux from CD and mount the HDDs.
    3. Run "something" on the Linux (the above Wikipedia article has many pointers to the software) for the conversion, and store the image on the USB HDD. Unfortunately, many of the tools rely on features that exist only in Windows XP and above, with no information about Windows 2003 server - so what is a working solution for Windows 2003?
    4. Try to boot the virtual image with VirtualBox.
    5. When it runs OK, remove the old installation, install Linux on the old 500GB HDD, copy the image over, and run it.

    The above should work (I hope), but here is the problem: I currently have only a 320GB external USB HDD (of course, I can remove it from its enclosure and attach it as an internal HDD too). So for the conversion I'm looking for an on-the-fly HDD shrink: while imaging the physical 500GB HDD I need to shrink it onto the smaller drive - as I said above, only 100MB is used. Does something (free) exist for this, or is the only way to buy a larger 1TB HDD and use that for the conversion?

    Other questions: does anybody have real experience converting Windows 2003 into VirtualBox? I'm looking for an answer from someone who has actually done it and can point out the real pitfalls (I can do the googling myself). Or is there a better approach to this altogether?
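    For the shrink-while-imaging part, one hedged approach is to image only the used region rather than the whole 500GB: shrink the NTFS filesystem and partition first (GParted on most live CDs can do both), then image just the partition-table-plus-shrunken-partition region and convert it to a VirtualBox disk. A rough sketch, assuming the disk is /dev/sda and the partition was shrunk to 10GB (device names and sizes are examples only):

      # after shrinking the partition with GParted to ~10GB
      dd if=/dev/sda of=/mnt/usb/w2003.raw bs=1M count=10300
      VBoxManage convertfromraw /mnt/usb/w2003.raw /mnt/usb/w2003.vdi --format VDI

    VBoxManage convertfromraw is a standard VirtualBox command; the dd count must cover the MBR/partition table plus the entire shrunken partition, with some slack. Note this only addresses the disk size: after the hardware change, Windows 2003 may still need its storage driver coaxed (the classic P2V IDE/boot-driver issue), which is a separate pitfall to test for in step 4.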

    Read the article

  • SSH error: Permission denied, please try again

    - by Kamal
    I am new to Ubuntu, so please forgive me if the question is too simple.

    I have an Ubuntu server set up using an Amazon EC2 instance. I need to connect my desktop (which is also an Ubuntu machine) to the Ubuntu server using SSH. I have installed openssh-server on the server. I need all systems on my network to connect to the Ubuntu server using SSH (no need to connect through pem or pub keys), hence I opened SSH port 22 for my static IP in the security groups (AWS). My sshd_config file is:

      # Package generated configuration file
      # See the sshd_config(5) manpage for details

      # What ports, IPs and protocols we listen for
      Port 22
      # Use these options to restrict which interfaces/protocols sshd will bind to
      #ListenAddress ::
      #ListenAddress 0.0.0.0
      Protocol 2
      # HostKeys for protocol version 2
      HostKey /etc/ssh/ssh_host_rsa_key
      HostKey /etc/ssh/ssh_host_dsa_key
      HostKey /etc/ssh/ssh_host_ecdsa_key
      #Privilege Separation is turned on for security
      UsePrivilegeSeparation yes

      # Lifetime and size of ephemeral version 1 server key
      KeyRegenerationInterval 3600
      ServerKeyBits 768

      # Logging
      SyslogFacility AUTH
      LogLevel INFO

      # Authentication:
      LoginGraceTime 120
      PermitRootLogin yes
      StrictModes yes

      RSAAuthentication yes
      PubkeyAuthentication yes
      #AuthorizedKeysFile %h/.ssh/authorized_keys

      # Don't read the user's ~/.rhosts and ~/.shosts files
      IgnoreRhosts yes
      # For this to work you will also need host keys in /etc/ssh_known_hosts
      RhostsRSAAuthentication no
      # similar for protocol version 2
      HostbasedAuthentication no
      # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
      #IgnoreUserKnownHosts yes

      # To enable empty passwords, change to yes (NOT RECOMMENDED)
      PermitEmptyPasswords no

      # Change to yes to enable challenge-response passwords (beware issues with
      # some PAM modules and threads)
      ChallengeResponseAuthentication no

      # Change to no to disable tunnelled clear text passwords
      #PasswordAuthentication yes

      # Kerberos options
      #KerberosAuthentication no
      #KerberosGetAFSToken no
      #KerberosOrLocalPasswd yes
      #KerberosTicketCleanup yes

      # GSSAPI options
      #GSSAPIAuthentication no
      #GSSAPICleanupCredentials yes

      X11Forwarding yes
      X11DisplayOffset 10
      PrintMotd no
      PrintLastLog yes
      TCPKeepAlive yes
      #UseLogin no

      #MaxStartups 10:30:60
      #Banner /etc/issue.net

      # Allow client to pass locale environment variables
      AcceptEnv LANG LC_*

      Subsystem sftp /usr/lib/openssh/sftp-server

      # Set this to 'yes' to enable PAM authentication, account processing,
      # and session processing. If this is enabled, PAM authentication will
      # be allowed through the ChallengeResponseAuthentication and
      # PasswordAuthentication. Depending on your PAM configuration,
      # PAM authentication via ChallengeResponseAuthentication may bypass
      # the setting of "PermitRootLogin without-password".
      # If you just want the PAM account and session checks to run without
      # PAM authentication, then enable this but set PasswordAuthentication
      # and ChallengeResponseAuthentication to 'no'.
      UsePAM yes

    Through webmin (Command shell), I created a new user named 'senthil' and added this new user to the 'sudo' group:

      sudo adduser -y senthil
      sudo adduser senthil sudo

    I tried to log in as this new user 'senthil' in webmin and was able to log in successfully. When I tried to connect to the Ubuntu server from my terminal through SSH,

      ssh senthil@SERVER_IP

    it asked me to enter the password. After the password entry, it displayed:

      Permission denied, please try again.

    On some research I realized that I needed to monitor my server's auth log for this. I got the following error in my auth log (/var/log/auth.log):

      Jul  2 09:38:07 ip-192-xx-xx-xxx sshd[3037]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=MY_CLIENT_IP user=senthil
      Jul  2 09:38:09 ip-192-xx-xx-xxx sshd[3037]: Failed password for senthil from MY_CLIENT_IP port 39116 ssh2

    When I tried to debug using ssh -v senthil@SERVER_IP:

      OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: /etc/ssh/ssh_config line 19: Applying options for *
      debug1: Connecting to SERVER_IP [SERVER_IP] port 22.
      debug1: Connection established.
      debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa type 1
      debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
      debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
      debug1: identity file {MY-WORKSPACE}/.ssh/id_rsa-cert type -1
      debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa type -1
      debug1: identity file {MY-WORKSPACE}/.ssh/id_dsa-cert type -1
      debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa type -1
      debug1: identity file {MY-WORKSPACE}/.ssh/id_ecdsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1
      debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: sending SSH2_MSG_KEX_ECDH_INIT
      debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
      debug1: Server host key: ECDSA {SERVER_HOST_KEY}
      debug1: Host 'SERVER_IP' is known and matches the ECDSA host key.
      debug1: Found key in {MY-WORKSPACE}/.ssh/known_hosts:1
      debug1: ssh_ecdsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: password
      debug1: Next authentication method: password
      senthil@SERVER_IP's password:
      debug1: Authentications that can continue: password
      Permission denied, please try again.
      senthil@SERVER_IP's password:

    For the password, I entered the same value that I normally use for the 'ubuntu' user. Can anyone please guide me to where the issue is and suggest a solution?
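    Since PAM reports a plain authentication failure, the most likely explanation is that 'senthil' simply has no valid password of its own yet - each account has its own password, and -y is not a standard adduser flag, so the password prompt may have been skipped during account creation. A minimal check using standard commands:

      sudo passwd senthil          # set a known password for the account
      sudo service ssh restart     # only needed if sshd_config was changed
      ssh senthil@SERVER_IP

    Also confirm PasswordAuthentication is not being overridden to 'no' anywhere; the commented '#PasswordAuthentication yes' shown above means the compiled-in default (yes) applies, but some EC2 images ship an explicit 'PasswordAuthentication no' line that is easy to miss.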

    Read the article

  • Problem with apt-get update: failed to fetch error

    - by user171447
    I run an Ubuntu Server 12.04.3 LTS. Today I wanted to update it, but I did not manage to (yes...); upgrading, however, worked well. I don't expect you to solve my problem, but it would be great if you could give me some hints. I googled for hours and found a lot of errors of this kind, but not exactly this one. Here is the output of apt-get update:

      Hit http://filepile.fastit.net precise Release.gpg
      Hit http://filepile.fastit.net precise Release
      Hit http://filepile.fastit.net precise/main amd64 Packages
      Hit http://filepile.fastit.net precise/restricted amd64 Packages
      Hit http://filepile.fastit.net precise/universe amd64 Packages
      Hit http://filepile.fastit.net precise/multiverse amd64 Packages
      Hit http://filepile.fastit.net precise/main i386 Packages
      Hit http://filepile.fastit.net precise/restricted i386 Packages
      Hit http://filepile.fastit.net precise/universe i386 Packages
      Hit http://filepile.fastit.net precise/multiverse i386 Packages
      Ign http://filepile.fastit.net precise/main TranslationIndex
      Ign http://filepile.fastit.net precise/multiverse TranslationIndex
      Ign http://filepile.fastit.net precise/restricted TranslationIndex
      Ign http://filepile.fastit.net precise/universe TranslationIndex
      Ign http://filepile.fastit.net precise/main Translation-en_GB
      Ign http://filepile.fastit.net precise/main Translation-en
      Ign http://filepile.fastit.net precise/main Translation-en_GB.UTF-8
      Hit http://archive.canonical.com precise Release.gpg
      Ign http://filepile.fastit.net precise/multiverse Translation-en_GB
      Ign http://filepile.fastit.net precise/multiverse Translation-en
      Ign http://filepile.fastit.net precise/multiverse Translation-en_GB.UTF-8
      Ign http://filepile.fastit.net precise/restricted Translation-en_GB
      Ign http://filepile.fastit.net precise/restricted Translation-en
      Ign http://filepile.fastit.net precise/restricted Translation-en_GB.UTF-8
      Ign http://filepile.fastit.net precise/universe Translation-en_GB
      Ign http://filepile.fastit.net precise/universe Translation-en
      Ign http://filepile.fastit.net precise/universe Translation-en_GB.UTF-8
      Hit http://archive.ubuntu.com precise Release.gpg
      Hit http://archive.canonical.com precise Release
      Hit http://archive.ubuntu.com precise Release
      Hit http://archive.ubuntu.com precise/main Sources
      Hit http://archive.ubuntu.com precise/restricted Sources
      Hit http://archive.ubuntu.com precise/main i386 Packages
      Hit http://archive.ubuntu.com precise/restricted i386 Packages
      Hit http://archive.ubuntu.com precise/multiverse i386 Packages
      Hit http://archive.ubuntu.com precise/main TranslationIndex
      Hit http://archive.ubuntu.com precise/multiverse TranslationIndex
      Hit http://archive.ubuntu.com precise/restricted TranslationIndex
      Hit http://archive.ubuntu.com precise/main Translation-en_GB
      Hit http://archive.ubuntu.com precise/main Translation-en
      Hit http://archive.ubuntu.com precise/multiverse Translation-en_GB
      Hit http://archive.ubuntu.com precise/multiverse Translation-en
      Hit http://archive.ubuntu.com precise/restricted Translation-en_GB
      Hit http://archive.ubuntu.com precise/restricted Translation-en
      Hit http://fr.archive.ubuntu.com precise-security Release.gpg
      Hit http://fr.archive.ubuntu.com precise-updates Release.gpg
      Hit http://fr.archive.ubuntu.com precise-security Release
      Hit http://fr.archive.ubuntu.com precise-updates Release
      Hit http://fr.archive.ubuntu.com precise-security/main amd64 Packages
      Hit http://fr.archive.ubuntu.com precise-security/restricted amd64 Packages
      Hit http://fr.archive.ubuntu.com precise-security/universe amd64 Packages
      Hit http://fr.archive.ubuntu.com precise-security/multiverse amd64 Packages
      Hit http://fr.archive.ubuntu.com precise-security/main i386 Packages
      Hit http://fr.archive.ubuntu.com precise-security/restricted i386 Packages
      Hit http://fr.archive.ubuntu.com precise-security/universe i386 Packages
      Hit http://fr.archive.ubuntu.com precise-security/multiverse i386 Packages
      Hit http://fr.archive.ubuntu.com precise-security/main TranslationIndex
      Hit http://fr.archive.ubuntu.com precise-security/multiverse TranslationIndex
      Hit http://fr.archive.ubuntu.com precise-security/restricted TranslationIndex
      Hit http://fr.archive.ubuntu.com precise-security/universe TranslationIndex
      Hit http://fr.archive.ubuntu.com precise-updates/main amd64 Packages
      Hit http://fr.archive.ubuntu.com precise-updates/restricted amd64 Packages
      Hit http://fr.archive.ubuntu.com precise-updates/universe amd64 Packages
      Hit http://fr.archive.ubuntu.com precise-updates/multiverse amd64 Packages
      Hit http://fr.archive.ubuntu.com precise-updates/main i386 Packages
      Hit http://fr.archive.ubuntu.com precise-updates/restricted i386 Packages
      Hit http://fr.archive.ubuntu.com precise-updates/universe i386 Packages
      Hit http://fr.archive.ubuntu.com precise-updates/multiverse i386 Packages
      Hit http://fr.archive.ubuntu.com precise-updates/main TranslationIndex
      Hit http://fr.archive.ubuntu.com precise-updates/multiverse TranslationIndex
      Hit http://fr.archive.ubuntu.com precise-updates/restricted TranslationIndex
      Hit http://fr.archive.ubuntu.com precise-updates/universe TranslationIndex
      Hit http://fr.archive.ubuntu.com precise-security/main Translation-en
      Hit http://fr.archive.ubuntu.com precise-security/multiverse Translation-en
      Hit http://fr.archive.ubuntu.com precise-security/restricted Translation-en
      Hit http://fr.archive.ubuntu.com precise-security/universe Translation-en
      Hit http://fr.archive.ubuntu.com precise-updates/main Translation-en_GB
      Hit http://fr.archive.ubuntu.com precise-updates/main Translation-en
      Hit http://fr.archive.ubuntu.com precise-updates/multiverse Translation-en_GB
      Hit http://fr.archive.ubuntu.com precise-updates/multiverse Translation-en
      Hit http://fr.archive.ubuntu.com precise-updates/restricted Translation-en_GB
      Hit http://fr.archive.ubuntu.com precise-updates/restricted Translation-en
      Hit http://fr.archive.ubuntu.com precise-updates/universe Translation-en_GB
      Hit http://fr.archive.ubuntu.com precise-updates/universe Translation-en
      W: Failed to fetch http://archive.canonical.com/dists/precise/Release
         Unable to find expected entry 'main/binary-amd64/Packages' in Release file
         (Wrong sources.list entry or malformed file)
      E: Some index files failed to download. They have been ignored, or old ones used instead.

    And here is my /etc/apt/sources.list:

      ###### Ubuntu Main Repos
      deb http://filepile.fastit.net/ubuntu/ precise main restricted universe multiverse
      # deb http://de.archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse
      deb http://archive.canonical.com/ precise main restricted universe multiverse

      ###### Ubuntu Update Repos
      deb http://fr.archive.ubuntu.com/ubuntu/ precise-security main restricted universe multiverse
      deb http://fr.archive.ubuntu.com/ubuntu/ precise-updates main restricted universe multiverse
      deb http://archive.ubuntu.com/ubuntu/ precise main restricted multiverse
      deb-src http://archive.ubuntu.com/ubuntu precise main restricted multiverse

    Thanks for your help!
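    The failing entry is the archive.canonical.com line: that repository does not carry main/restricted/universe/multiverse components, which is why apt cannot find 'main/binary-amd64/Packages' in its Release file. Under the usual layout it only publishes the 'partner' component under the /ubuntu path, so a hedged fix is to rewrite that one line and refresh:

      # in /etc/apt/sources.list, replace the archive.canonical.com line with:
      deb http://archive.canonical.com/ubuntu precise partner

      sudo apt-get update

    If you don't need Canonical partner packages at all, commenting the line out entirely has the same effect; the other repositories are all hitting fine.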

    Read the article

  • Persistent Issues on small business network using Cisco 871W and Catalyst Express 500

    - by Ben Campbell
    Being the most qualified (read: still not qualified) person here to solve our persistent network issues, I've turned to serverfault for guidance. I've done some searching, read the related documentation on cisco.com, and tried a bit of troubleshooting. Here is the config:

    - 100mb synchronous connection from a business internet provider (tested multiple times at 100meg at the source).
    - Cisco 871W wireless access point & router is where the WAN connection starts (this serves all our wireless). The only wired connection on the 871W is to the Catalyst switch listed below.
    - Cisco Catalyst Express 500 (24TT) is where all the wired connections terminate.
    - About 20 Windows workstations and servers (AD/webservers only).
    - Some services in EC2, including mail and other web servers/apps.
    - I've been TOLD the internal cabling should be gigabit-ready.

    Here are the problems:

    - Generally slow download rates from the internet to the desktop/laptop.
    - Frequent "page cannot be displayed" errors in browsers - sometimes 3 or 4 reloads are necessary, and often CSS or other content requiring the browser to connect to a different server won't load.
    - Slow speeds within the LAN when copying files from workstation to workstation. I would expect extremely fast data transfer workstation to workstation / server to workstation in this simple network.

    Several things I need to admit: I'm not primarily a network guy. Funding is relatively low, and I need to be the guy who finds the solution. I understand most of the terminology and most of the technology; implementation is where I fail due to lack of experience.

    Getting to the point: I'm wondering whether experienced network admins think our small network should be sufficiently served by our current hardware if configured properly, or whether we should purchase new equipment and start fresh? If starting fresh is the plan, whatever that new equipment should be is likely a different question entirely.

    If I haven't provided enough information, I will happily do some troubleshooting and update with the results. I have experience using Wireshark and some other tools. Please let me know what you think would be most helpful, and thanks in advance.

    EDIT: I forgot to add that the Cisco appliance will not finish loading the SDM Express console. It hangs every time at "populating modules... DHCP", eventually crashes, and closes. I've rebooted the hardware and this still happens.
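    Before replacing hardware, it may be worth ruling out duplex mismatches and interface errors, which produce exactly this mix of slow transfers and intermittent page loads. A hedged starting point from the 871W's CLI (standard IOS show commands; on the 871W the WAN port is typically FastEthernet4, and the Catalyst Express 500 exposes equivalent per-port counters in its web GUI):

      show interfaces FastEthernet4
      show interfaces | include duplex|errors|collisions
      show processes cpu history

    Climbing CRC/late-collision counters or a half-duplex negotiation on the WAN or switch-uplink port would point at cabling or auto-negotiation rather than capacity, and would be fixable without any new equipment.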

    Read the article

  • order of operations for environment variables

    - by alyda
    I want to understand how environment variables are set and reset (overridden). I'm running Apache/2.2.24 (Unix) with PHP/5.4.14 on a Mac.

    My theory is this: environment vars can be set in bash, then they can be overwritten by httpd.conf preceding a VirtualHost directive that precedes php.ini, which can then be overwritten by .htaccess (if allowable) and finally by PHP.

    I tried the following:

    1. Setting the environment variable in bash: I added export ENVIRONMENT='local' to my ~/.bashrc file, restarted Apache, and did not get any output from print_r($_ENV); (in a simple index.php file at the root of my webserver). I also tried putting ENVIRONMENT='local' into /etc/environment and restarting Apache - nothing - as well as /etc/bashrc, restart Apache, still nothing.

    2. Setting the environment variable in httpd.conf: I added SetEnv ENVIRONMENT 'local-httpd' to the end of my /etc/apache2/httpd.conf file (but before I load other conf files, such as virtual hosts [Include /private/etc/apache2/other/*.conf]). I now see the variable in the array print_r($_SERVER); but not in print_r($_ENV);.

    3. Setting the environment variable in httpd-vhosts.conf: I added SetEnv ENVIRONMENT 'local-vhost' to my /etc/apache2/extra/httpd-vhosts.conf file, in my generic directive that points to my default document root. I now see the variable has been overwritten (to local-vhost from local-httpd, so I know where the variable is getting set).

    4. Setting the environment variable in php.ini: while searching for a proper place to put my environment variable, I noticed that variables_order = "GPCS" was set to the production value rather than EGPCS. I changed it, restarted my server, and found that I was now getting output for print_r($_ENV); but not my expected custom variable. It also appears that I am not able to set a custom variable in this file - please tell me if I am wrong.

    5. Setting the environment variable in .htaccess: I added SetEnv ENVIRONMENT 'local-htaccess'. This worked as expected, overwriting all other values that were set.

    6. Setting / overwriting the environment variable in PHP: if (...) { putenv('ENVIRONMENT=local'); }

    I'm asking this question because I have a lot of local and remote testing servers, some of which may or may not allow me access to modify httpd.conf, httpd-vhosts.conf, php.ini or environment variables. I want to understand what is best for those different scenarios (shared hosting, Heroku, local servers, etc). I obviously don't know how to properly set the environment variable in bash in a way that PHP can use it; I'd like to know how to do that (as I think Heroku does something similar with heroku config set...).
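    For the bash case specifically: Apache never reads ~/.bashrc, so a shell export is only visible if it is present in the environment of the process that launches httpd, and even then it must be forwarded explicitly. A minimal sketch using mod_env's standard PassEnv directive (assuming Apache is started from a shell where the variable is exported, e.g. via apachectl rather than launchd):

      # in the shell that starts httpd (not ~/.bashrc):
      #   export ENVIRONMENT='local'

      # in httpd.conf:
      #   PassEnv ENVIRONMENT

      <?php
      // getenv() reads the real process environment regardless of variables_order;
      // $_ENV additionally requires the E flag in variables_order.
      echo getenv('ENVIRONMENT');

    This also explains observation 2: SetEnv values are injected per-request into the CGI-style variables ($_SERVER / getenv()), not into the process environment that populates $_ENV.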

    Read the article

  • How to correctly partition usb flash drive and which filesystem to choose considering wear leveling?

    - by random1
    Two problems.

    First one: how to partition the flash drive? I shouldn't need to do this, but I'm no longer sure if my partition is properly aligned, since I was forced to delete and create a new partition table after gparted complained when I tried to format the drive from FAT to ext4. The naive answer would be "just use the defaults and everything is going to be alright". However, if you read the following links you'll know things are not that simple: https://lwn.net/Articles/428584/ and http://linux-howto-guide.blogspot.com/2009/10/increase-usb-flash-drive-write-speed.html

    Then there is also the issue of cylinders, heads and sectors. Currently I get this:

      $ sfdisk -l -uM /dev/sdd
      Disk /dev/sdd: 30147 cylinders, 64 heads, 32 sectors/track
      Warning: The partition table looks like it was made
      for C/H/S=*/255/63 (instead of 30147/64/32).
      For this listing I'll assume that geometry.
      Units = mebibytes of 1048576 bytes, blocks of 1024 bytes, counting from 0

      Device Boot Start   End   MiB   #blocks  Id System
      /dev/sdd1       1 30146 30146  30869504  83 Linux

      $ fdisk -l /dev/sdd
      Disk /dev/sdd: 31.6 GB, 31611420672 bytes
      255 heads, 63 sectors/track, 3843 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00010c28

    So from my current understanding I should align partitions at 4 MiB (currently it's at 1 MiB). But I still don't know how to set the heads and sectors properly for my device.

    Second problem: the file system. From the benchmarks I saw, ext4 provides the best performance; however, there is the issue of wear leveling. How can I know that my Transcend JetFlash 700's microcontroller provides wear leveling? Or will I just be killing my drive faster? I've seen a lot of posts on the web saying "don't worry, the newer drives already take care of that", but I've never seen a single piece of evidence backing that up, and at some point people start mixing SSD and USB flash drive technology.

    The safe option would be to go for ext2; however, a series of tests that I performed showed horrible performance! These values are from a real scenario and not some synthetic test (42 files, 3,429,415,284 bytes copied to the flash drive):

      original fat32:                 15.1 MiB/s
      ext4 after new partition table: 10.2 MiB/s
      ext2 after new partition table:  1.9 MiB/s

    Please read the links that I posted above before answering. I would also be interested in answers backed up with some references, because a lot is said and re-said but it then lacks facts. Thank you for the help.
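    For the alignment part, one hedged recipe is to let parted place the partition on an explicit 4 MiB boundary and then verify it, since alignment is about the partition's starting offset in sectors, not the fake CHS geometry:

      sudo parted /dev/sdd mklabel msdos
      sudo parted /dev/sdd mkpart primary ext4 4MiB 100%
      sudo parted /dev/sdd align-check optimal 1    # prints "1 aligned" if OK
      sudo mkfs.ext4 -L flash /dev/sdd1

    The CHS values reported by sfdisk/fdisk are legacy fiction on USB flash media; modern tools address the device by logical sectors, so the 64/32 vs 255/63 discrepancy can be ignored once the start offset falls on the erase-block boundary.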

    Read the article

  • gunicorn + django + nginx unix://socket failed (11: Resource temporarily unavailable)

    - by user1068118
    Running very high volume traffic on these servers configured with django, gunicorn, supervisor and nginx, I tend to see a lot of 502 errors. So I checked the nginx logs to see what error is recorded, and this is it:

      [error] 2388#0: *208027 connect() to unix:/tmp/gunicorn-ourapp.socket failed (11: Resource temporarily unavailable) while connecting to upstream

    Can anyone help debug what might be causing this to happen? This is our nginx configuration:

      sendfile on;
      tcp_nopush on;
      tcp_nodelay off;
      listen 80 default_server;
      server_name imp.ourapp.com;
      access_log /mnt/ebs/nginx-log/ourapp-access.log;
      error_log /mnt/ebs/nginx-log/ourapp-error.log;
      charset utf-8;
      keepalive_timeout 60;
      client_max_body_size 8m;
      gzip_types text/plain text/xml text/css application/javascript application/x-javascript application/json;

      location / {
          proxy_pass http://unix:/tmp/gunicorn-ourapp.socket;
          proxy_pass_request_headers on;
          proxy_read_timeout 600s;
          proxy_connect_timeout 600s;
          proxy_redirect http://localhost/ http://imp.ourapp.com/;
          #proxy_set_header Host $host;
          #proxy_set_header X-Real-IP $remote_addr;
          #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          #proxy_set_header X-Forwarded-Proto $my_scheme;
          #proxy_set_header X-Forwarded-Ssl $my_ssl;
      }

    We have configured Django to run in Gunicorn as a generic WSGI application. Supervisord is used to launch the gunicorn workers:

      /home/user/virtenv/bin/python2.7 /home/user/virtenv/bin/gunicorn --config /home/user/shared/etc/gunicorn.conf.py daggr.wsgi:application

    This is what the gunicorn.conf.py looks like:

      import multiprocessing
      bind = 'unix:/tmp/gunicorn-ourapp.socket'
      workers = multiprocessing.cpu_count() * 3 + 1
      timeout = 600
      graceful_timeout = 40

    Does anyone know where I can start digging to see what might be causing the problem? This is what my ulimit -a output looks like on the server:

      core file size          (blocks, -c) 0
      data seg size           (kbytes, -d) unlimited
      scheduling priority             (-e) 0
      file size               (blocks, -f) unlimited
      pending signals                 (-i) 59481
      max locked memory       (kbytes, -l) 64
      max memory size         (kbytes, -m) unlimited
      open files                      (-n) 50000
      pipe size            (512 bytes, -p) 8
      POSIX message queues     (bytes, -q) 819200
      real-time priority              (-r) 0
      stack size              (kbytes, -s) 8192
      cpu time               (seconds, -t) unlimited
      max user processes              (-u) 1024
      virtual memory          (kbytes, -v) unlimited
      file locks                      (-x) unlimited
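    EAGAIN ("Resource temporarily unavailable") on a unix-socket connect() usually means the socket's listen backlog is full - the workers aren't accepting connections as fast as nginx is opening them. A hedged pair of knobs to experiment with (backlog is a documented Gunicorn setting; net.core.somaxconn caps any listener's effective backlog):

      # gunicorn.conf.py
      backlog = 4096    # gunicorn's documented default is 2048

      # on the host (kernel default is often 128, which silently caps the backlog)
      sudo sysctl -w net.core.somaxconn=4096

    Also worth checking: with timeout = 600, a handful of slow requests can pin every worker for minutes, so the backlog fills even at modest load. Lowering the timeout or moving to async workers may matter more than the backlog itself.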

    Read the article

  • How to setup phpmyadmin with nginx and access it from http://vps-ip/phpmyadmin

    - by Danny
    The phpmyadmin files are located here:

      /usr/share/phpmyadmin/

    And I have this server block that allows me to access phpmyadmin only from http://vps-ip/:

      server {
          listen 80; ## listen for ipv4; this line is default and implied
          #listen [::]:80 default ipv6only=on; ## listen for ipv6

          root /usr/share/phpmyadmin/;
          index index.php index.html index.htm;
          server_name ein;

          location / {
              root /usr/share/phpmyadmin/;
              index index index.php;
              try_files $uri/ $uri /index.php?q=$uri&amp&$args;
              port_in_redirect off;
          }

          location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
              access_log off;
              log_not_found off;
              expires max;
              root /usr/share/phpmyadmin/;
          }

          location ~ \.php$ {
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              #NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
              fastcgi_pass php;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/$fastcgi_script_name;
              include fastcgi_params;
              fastcgi_param QUERY_STRING $query_string;
              fastcgi_param REQUEST_METHOD $request_method;
              fastcgi_param CONTENT_TYPE $content_type;
              fastcgi_param CONTENT_LENGTH $content_length;
              fastcgi_intercept_errors on;
              fastcgi_ignore_client_abort off;
              fastcgi_connect_timeout 60;
              fastcgi_send_timeout 360;
              fastcgi_read_timeout 360;
              fastcgi_buffer_size 128k;
              fastcgi_buffers 8 256k;
              fastcgi_busy_buffers_size 256k;
              fastcgi_temp_file_write_size 256k;
          }

          location ~ /.htaccess {
              deny all;
              log_not_found off;
              access_log off;
          }

          location ~ /.htpasswd {
              deny all;
              log_not_found off;
              access_log off;
          }

          location = /favicon.ico {
              allow all;
              log_not_found off;
              access_log off;
          }

          location = /robots.txt {
              allow all;
              log_not_found off;
              access_log off;
          }
      }

    What changes do I need to make in order to access phpmyadmin from http://vps-ip/phpmyadmin?
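    One common pattern, sketched here on the assumption that the site's real document root will live elsewhere (e.g. /var/www), is to serve phpMyAdmin from a /phpmyadmin prefix with root /usr/share/: because the URI prefix matches the directory name under /usr/share/, nginx maps /phpmyadmin/... straight onto /usr/share/phpmyadmin/... without needing alias:

      location /phpmyadmin {
          root /usr/share/;
          index index.php;

          # nested location so phpMyAdmin's PHP is executed with the right path
          location ~ ^/phpmyadmin/.+\.php$ {
              root /usr/share/;
              fastcgi_pass php;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }
      }

    The server-level root and generic \.php$ location would then point at the real site instead of phpMyAdmin. Using $document_root$fastcgi_script_name (rather than a hard-coded path) keeps SCRIPT_FILENAME correct in both locations.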

    Read the article

  • Fusion HCM SaaS – Integration

    - by Kiran Mundy
    Fusion HCM SaaS - Integration

    A typical implementation pattern we’re seeing with Fusion Apps early adopters is implementing a few Fusion HCM applications that bring the most benefit to their company with the least disruption to existing programs and interfaces. Very often this ends up being Fusion Goals & Performance, Talent, Compensation or Benefits, often with Taleo for recruiting. The implementation picture looks like what you see below.

    Here, you can see that all the “downstream integrations” from the on-premise Core HR are unaffected, because the master for employee data is still your on-premise Core HR system - all updates and new hires are made here (although they may be fed in from Taleo to start with).

    As a second phase, when customers migrate Core HR to Fusion HCM, they have to come up with a strategy to manage integrations to all their downstream applications that require employee details. For customers coming from EBS HR, a short-term strategy that allows for minimal impact is to extract employee data from Fusion (via HCM Extract), load the shared EBS HR tables (which are part of an EBS Financials install anyway), and let your downstream integrations continue to function based on this data.

    If you are not coming from EBS HR and there are license implications, you may want to consider:

    - Creating an on-premise warehouse for extracting data from Fusion Apps.
    - Leveraging Fusion Apps Web Services (available to SaaS customers starting R7) to directly retrieve/write data to Fusion Apps.

    Integration Tools

    File Based Loader

    This is the primary mechanism for loading HCM data (both initial load and incremental updates) into Fusion HCM. Employee & related data can be uploaded into Fusion HCM using File Based Loader. Note that the ability to schedule File Based Loader to run on a pre-defined schedule will be available as a patch on top of Rel 5. Hr2Hr has been deprecated in favor of File Based Loader, but for existing customers using Hr2Hr, here are some sample scripts that show how to get more informative error messages. They can be run by creating data model SQL queries in BI Publisher. The scripts currently have hard-coded values for request id and loader batch id, which your developer will need to update to the correct values for you. The BI Publisher training session recorded on Apr 18th is available here (under "Recordings"); this will enable a somewhat technical resource to create a data model SQL query.

    Links to documentation & training:
    - Reference documentation for File Based Loader on docs.oracle.com
    - FBL 1.1 MOS Doc Id 1533860.1
    - Sample demo data files for File Based Loader
    - HCM SaaS Integrations ppt and recording

    EBS APIs

    Loading information into EBS full or shared HCM: this could be candidate information being loaded from Taleo into EBS, or employee information being loaded from Fusion HCM into an EBS shared HR install (for downstream applications & EBS Financials).

    - Oracle HRMS Product Family Publicly Callable Business Process APIs (A Reference Consolidation) [ID 216838.1] - a guide to the EBS R12 Integration Repository accessible from an EBS instance.
    - EBS HRMS Publicly Callable Business Process APIs in Release 11i & 12 [ID 121964.1]

    Fusion HCM Extract

    Fusion HCM Extract is the primary mechanism used to extract employee information from Fusion HCM. Refer to the "Configure Identity Sync" doc on MOS for additional mechanisms.

    Additional documentation (you’ll need an oracle.com account to access):
    - HCM Extracts User Guides (Rel 4 & 5)
    - HCM Extract Entity/Attributes (Rel 5)
    - HCM Extract User Guide (Rel 5)
    - If you don’t have an oracle.com account, download the zipped HCM Extract Rel 5 Docs (click on File --> Download on the next screen)
    - View Training Recordings on Fusion HCM Extract

    Benefits Extract

    To set up the benefits extract, refer to the following guide; page 2-15 of the user documentation describes how to use it. Benefit enrollments can also be uploaded into Fusion Benefits - instructions are here, along with a sample upload file. However, if the defined benefits extract does not meet your requirements, you can use BI Publisher (link to the BI Publisher presentation recording from Apr 18th) to create your own version of the benefits extract, starting with the data model query underlying it.

    Payroll Interface

    Fusion Payroll Interface enables you to capture personal payroll information, such as earnings and deductions, along with other data from Oracle Fusion Human Capital Management, and send that information to a third-party payroll provider. Documentation: the payroll interface guide, a sample file, and the DBIs used for the payroll interface.

    Usage patterns always accessible @ http://www.finapps.com

    Read the article

  • Helping install mrcwa and solve problems with f2py in Ubuntu 14.04 LTS

    - by user288160
I am sorry if this is the wrong section, but I am starting to get desperate; please, someone, help me. I need to install the program mrcwa-20080820 (sourceforge.net/projects/mrcwa/) because of a summer project I am involved in. I need to use it together with Anaconda (store.continuum.io/cshop/anaconda/). I have already installed Anaconda and apparently it is working: when I type conda --version I get the expected answer, conda 3.5.2. If I try to import numpy or scipy in Python, or simply type f2py, there are no errors. So far so good. But when I try to install the program with sudo python setup.py install I get these errors: running install running build sh: 1: f2py: not found cp: cannot stat ‘mrcwaf.so’: No such file or directory running build_py running install_lib running install_egg_info Removing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info Writing /usr/local/lib/python2.7/dist-packages/mrcwa-20080820.egg-info Note: I am trying to use 64-bit Intel Fortran on Ubuntu 14.04 LTS. So I was checking f2py and tried to build the hello world program, f2py -c -m hello hello.f, from here: cens.ioc.ee/projects/f2py2e/index.html#usage, and I had some problems too: running build running config_cc unifing config_cc, config, build_clib, build_ext, build commands --compiler options running config_fc unifing config_fc, config, build_clib, build_ext, build commands --fcompiler options running build_src build_src building extension "hello" sources f2py options: [] f2py:> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c creating /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 Reading fortran codes... Reading file 'hello.f' (format:fix,strict) Post-processing... Block: hello Block: foo Post-processing (stage 2)... Building modules... Building module "hello"... Constructing wrapper function "foo"... foo(a) Wrote C/API module "hello" to file "/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 /hellomodule.c" adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c' to sources. adding '/tmp/tmpf8P4Y3/src.linux-x86_64-2.7' to include_dirs. 
copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.c -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 copying /home/felipe/.local/lib/python2.7/site-packages/numpy/f2py/src/fortranobject.h -> /tmp/tmpf8P4Y3/src.linux-x86_64-2.7 build_src: building npy-pkg config files running build_ext customize UnixCCompiler customize UnixCCompiler using build_ext customize Gnu95FCompiler Could not locate executable gfortran Could not locate executable f95 customize IntelFCompiler Found executable /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort customize LaheyFCompiler Could not locate executable lf95 customize PGroupFCompiler Could not locate executable pgfortran customize AbsoftFCompiler Could not locate executable f90 Could not locate executable f77 customize NAGFCompiler customize VastFCompiler customize CompaqFCompiler Could not locate executable fort customize IntelItaniumFCompiler customize IntelEM64TFCompiler customize IntelEM64TFCompiler customize IntelEM64TFCompiler using build_ext building 'hello' extension compiling C sources C compiler: gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC creating /tmp/tmpf8P4Y3/tmp creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3 creating /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local/lib/python2.7/site-packages/numpy/core/include -I/home/felipe/anaconda/include/python2.7 -c' gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.c:17: /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ gcc: /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c In file included from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarraytypes.h:1761:0, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/ndarrayobject.h:17, from /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/arrayobject.h:4, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.h:13, from /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.c:2: /home/felipe/.local/lib/python2.7/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp] #warning "Using deprecated NumPy API, disable it by " \ ^ compiling Fortran sources Fortran f77 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict Fortran f90 compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FR -fPIC -xhost -openmp -fp-model strict Fortran fix compiler: /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -FI -fPIC -xhost -openmp -fp-model strict compile options: '-I/tmp/tmpf8P4Y3/src.linux-x86_64-2.7 -I/home/felipe/.local /lib/python2.7/site-packages/numpy/core/include 
-I/home/felipe/anaconda/include/python2.7 -c' ifort:f77: hello.f /opt/intel/composer_xe_2013_sp1.3.174/bin/intel64/ifort -shared -shared -nofor_main /tmp/tmpf8P4Y3/tmp/tmpf8P4Y3/src.linux-x86_64-2.7/hellomodule.o /tmp/tmpf8P4Y3 /tmp/tmpf8P4Y3/src.linux-x86_64-2.7/fortranobject.o /tmp/tmpf8P4Y3/hello.o -L/home/felipe /anaconda/lib -lpython2.7 -o ./hello.so Removing build directory /tmp/tmpf8P4Y3 Please help me; I am new to Ubuntu and Python. I really need this program, and my advisor is waiting for an answer. Thank you very much, Felipe Oliveira.
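One hedged reading of the sh: 1: f2py: not found line: Anaconda's bin directory is on the user's PATH but not on root's, and sudo resets PATH, so setup.py cannot find f2py even though the interactive shell can. Two cheap things to try (assumptions, not a confirmed diagnosis):

# Preserve the caller's PATH (and hence Anaconda's f2py) for the root-run build:
sudo env "PATH=$PATH" python setup.py install
# Or avoid sudo entirely and install into the user site-packages:
python setup.py install --user

Note that the hello-world build itself appears to have succeeded: the "Could not locate executable gfortran" lines are just f2py enumerating compilers before settling on ifort, and the link step completed without errors.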

    Read the article

  • OS X Snow Leopard 10.6 Refuses to Load Websites the first time intermittently

    - by Brandon
Many times when I am browsing the web, Snow Leopard will sit and load a site for 20 seconds or more, until it times out and says it cannot be displayed. If I refresh, it loads RIGHT away, every time. The issue is intermittent, happening anywhere from once every couple of days to a few times a day. So the long and short of it is this:
- Aluminum MacBook (non-Pro), 2.4GHz Core2Duo, 4GB DDR3
- I am using 10.6.6, but I have had this issue since 10.6.0
- It happens in Firefox, Chrome, and Safari
- I have flushed my DNS (using the command 'blablabla flush')
- I am using custom DNS servers, which I hoped would fix it, but it had no effect*
- I am running Apache currently, but haven't been for most of the time
- I've reformatted multiple times, always experiencing the issue
- I am on Cox cable internet, with a Motorola Surfboard & a Belkin F6D4230-4 v1 (Pre?) N wireless router. I've put the router in G only, N only, and G+N to no effect
- It seems to be domain dependent, as I can sometimes load the Google cache right away, and sometimes other sites will load but Google will refuse
- My PowerBook G4 with Leopard, other Windows XP laptops, & my wired Win7 desktop do not suffer from the issue.
*I recently started using these to escape the awful Cox redirect page on timeouts. I'm almost positive the issue has happened on other networks, but I can't recall a specific instance (I have a terrible memory). The problem is intermittent and fixable enough (I just have to wait until it times out and hit refresh once) but incredibly annoying, since I'm constantly reading documentation from a large variety of sites. EDIT: To clarify, this happens with ALL sites, not only specific sites. I haven't been able to detect any pattern to the failures, but one day Google.com will refuse to load while reddit.com will, and the next day vice versa. Keep in mind that waiting for a timeout and hitting refresh loads the page right away, every time. If I don't wait for the timeout, opening more links, hitting refresh, and clicking the link a billion times have no effect. It seems to be domain neutral, affecting sites seemingly at random. It doesn't seem to have anything to do with connection inactivity either, because I will be SSHed into different servers, uploading files, browsing, downloading, etc., and it will just quit loading Jquery.com (for example) until I sit and wait for a timeout. /EDIT This is my last resort. Please, someone, tell me what is happening. Thank you.
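Since waiting out the timeout and refreshing always works, the pattern smells like the first DNS lookup being dropped somewhere between the Mac and the resolver. A few hedged commands for narrowing that down on 10.6 (these are standard tools on that release; the interpretation is left to the reader):

scutil --dns                       # show the resolver configuration OS X is actually using
dscacheutil -flushcache            # flush the DNS cache (10.6 syntax)
dig +time=2 +tries=1 example.com   # query DNS directly with a short timeout

Running the dig line the moment a page hangs would show whether the resolver, rather than the browser or TCP, is the component that stalls.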

    Read the article

  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
Back in December last year, I asked myself: could it be that .NET developers think that you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance? So, a few weeks ago, I decided to get a 1-minute survey up and running in the hope that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of three simple questions. Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you. First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith, Laila!") When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a third-party performance profiler, the CLR Profiler, or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to do to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements. Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time-consuming. Three respondents in particular summarised this perfectly: "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem"; "they are simple to use, but the results are hard to understand"; "complex to find the more advanced things, easy to find some low hanging fruit". 
These results confirmed my suspicions: profilers are seen as designed for more advanced users who can use them effectively and make sense of the results. I found yet more interesting information when I started comparing samples of "developers for whom performance is an important part of the dev cycle" with "developers to whom performance is only looked at in times of crisis" and "developers to whom performance is not important, as long as the app works". See the three graphs below. Sample of developers to whom performance is an important part of the dev cycle: Sample of developers to whom performance is important only in times of crisis: Sample of developers to whom performance is not important, as long as the app works: As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are. So, all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters, and third-party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.

    Read the article

  • How do I get the latest FastCGI and PHP versions to peacefully coexist on IIS 6?

    - by BHelman
I have been going round and round trying to get any sort of PHP running on IIS 6. I somehow managed to get version 5.1.4 running using the php5isapi.dll file. However, I want to upgrade a website to begin using a content management system. I have never dug into CMSes before, so I'm open to programs that are easy to use. I am currently looking into TomatoCMS and ImpressCMS - but that's beside the point. I have never done an installation with PHP before, and I think I'm getting familiar with how it works. However, the current situation is this. Microsoft's Web Platform Installer 2.0 installed FastCGI for me. I need to upgrade to PHP 5.3.1 for a CMS system, so I downloaded the Windows installer and let it go at it. After consulting several other blog articles, I believe I know how it is supposed to work, but I am currently not having luck. THE SETUP: *.php is a registered extension in IIS 6 for all websites (on Win 2k3). The application that it calls is C:\Windows\system32\inetsvr\fcgiext.dll, as it should be. The fcgiext.ini config has the proper lines: [Types] php=PHP [PHP] ext=C:\program files\PHP\php-cgi.exe And the php.ini file also has the correct configs: all extensions are disabled, and I changed the correct things for FastCGI. Everything is registered correctly with the PATH variable. Everything is exactly how it should be. BUT when I launch the "info.php" page () on another computer, I get the following error: FastCGI Error The FastCGI Handler was unable to process the request. Error Details: * Section [PHP] not found in config file. * Error Number: 1413 (0x80070585). * Error Description: Invalid index. HTTP Error 500 - Server Error. Internet Information Services (IIS) A quick Google search reveals that I have it all set up correctly as far as the INIs go and the mapping of the php extension. I am completely at a loss. Does anyone have any suggestions? Although the server is hosting three small websites, I don't really care what I have to do to it to get it to work.
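One hedged possibility, since "Section [PHP] not found" contradicts the INI contents quoted above: on 64-bit Windows Server 2003 there are two copies of fcgiext.ini, one under system32 and one under SysWOW64, and a 32-bit IIS worker process reads the SysWOW64 copy, so edits to the other file are silently ignored. Also note the standard folder name is inetsrv; the inetsvr spelling above may just be a transcription slip, but it is worth confirming the script map points at a file that actually exists. A quick check from a command prompt, assuming an x64 install:

rem Compare the two fcgiext.ini copies; the one the worker process
rem actually loads must contain the [Types] and [PHP] sections.
type %windir%\system32\inetsrv\fcgiext.ini
type %windir%\SysWOW64\inetsrv\fcgiext.ini

If the machine is 32-bit only, the other usual suspect is an editor-inserted byte-order mark or stray whitespace around the [PHP] section header, which the FastCGI handler cannot parse.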

    Read the article

  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
Dear all, I have a strange issue with my virtualenv + gunicorn setup, but only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord, and I would appreciate any feedback on a better place to ask for help... In a nutshell: when I run gunicorn from my user shell, inside my virtualenv, everything works flawlessly; I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at system startup, everything is OK. But if I kill the gunicorn_django processes, or if I perform a supervisord restart, then once gunicorn_django has relaunched, every request is answered with a weird traceback: (...) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in connection = connections[DEFAULT_DB_ALIAS] File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__ backend = load_backend(db['ENGINE']) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend raise ImproperlyConfigured(error_msg) TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: cannot import name utils Full stack available here: http://pastebin.com/BJ5tNQ2N I'm running... Ubuntu/maverick (up-to-date) Python = 2.6.6 virtualenv = 1.5.1 gunicorn = 0.12.0 Django = 1.2.5 psycopg2 = '2.4-beta2 (dt dec pq3 ext)' gunicorn configuration: backlog = 2048 bind = "127.0.0.1:8000" pidfile = "/tmp/gunicorn-hc.pid" daemon = True debug = True workers = 3 logfile = "/home/hc/prod/log/gunicorn.log" loglevel = "info" supervisord configuration: [program:gunicorn] directory=/home/hc/prod/hc command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py user=hc umask=022 autostart=True autorestart=True redirect_stderr=True Any advice? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 20 file size (blocks, -f) unlimited pending signals (-i) 16382 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Thank you.
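A common gotcha with supervisord plus virtualenv, offered as a hedged guess rather than a confirmed diagnosis: supervisord starts programs with a minimal environment, so anything the code resolves through PATH or other environment variables at import time can differ from a login shell inside the venv. Pinning the environment in the program section makes the two launch paths comparable:

[program:gunicorn]
directory=/home/hc/prod/hc
command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
; Hedged addition: give supervisord-launched workers the same PATH and
; VIRTUAL_ENV an interactive shell inside the virtualenv would have.
environment=PATH=/home/hc/prod/venv/bin,VIRTUAL_ENV=/home/hc/prod/venv
user=hc
umask=022
autostart=true
autorestart=true
redirect_stderr=true

If that changes nothing, the "cannot import name utils" detail also fits reports of Django 1.2.x misbehaving with the psycopg2 2.4 beta; pinning psycopg2 to a stable 2.2/2.3 release inside the venv would be a cheap test of that theory.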

    Read the article

  • How to Buy an SD Card: Speed Classes, Sizes, and Capacities Explained

    - by Chris Hoffman
Memory cards are used in digital cameras, music players, smartphones, tablets, and even laptops. But not all SD cards are created equal — there are different speed classes, physical sizes, and capacities to consider. Different devices require different types of SD cards. Here are the differences you’ll need to keep in mind when picking out the right SD card for your device. Speed Class In a nutshell, not all SD cards offer the same speeds. This matters for some tasks more than it matters for others. For example, if you’re a professional photographer taking photos in rapid succession on a DSLR camera, saving them in high-resolution RAW format, you’ll want a fast SD card so your camera can save them as fast as possible. A fast SD card is also important if you want to record high-resolution video and save it directly to the SD card. If you’re just taking a few photos on a typical consumer camera, or you’re just using an SD card to store some media files on your smartphone, the speed isn’t as important. Manufacturers use “speed classes” to measure an SD card’s speed. The SD Association that defines the SD card standard doesn’t actually define the exact speeds associated with these classes, but it does provide guidelines. There are four different speed classes — 2, 4, 6, and 10. Class 10 is the fastest, while class 2 is the slowest. Class 2 is suitable for standard-definition video recording, while classes 4 and 6 are suitable for high-definition video recording. Class 10 is suitable for “full HD video recording” and “HD still consecutive recording.” There are also two Ultra High Speed (UHS) speed classes, but they’re more expensive and are designed for professional use. UHS cards are designed for devices that support UHS. Each speed class has an associated logo that is printed on the card itself. You’ll probably be okay with a class 4 or 6 card for typical use in a digital camera, smartphone, or tablet. Class 10 cards are ideal if you’re shooting high-resolution videos or RAW photos. Class 2 cards are a bit on the slow side these days, so you may want to avoid them for all but the cheapest digital cameras. Even a cheap smartphone can record HD video, after all. An SD card’s speed class is identified on the SD card itself. You’ll also see the speed class on the online store listing or on the card’s packaging when purchasing it. For example, in the photo below, the middle SD card is speed class 4, while the two other cards are speed class 6. If you see no speed class symbol, you have a class 0 SD card. These cards were designed and produced before the speed class rating system was introduced, and may be slower than even a class 2 card. Physical Size Different devices use different sizes of SD cards. You’ll find standard-size SD cards, miniSD cards, and microSD cards. Standard SD cards are the largest, although they’re still very small. They measure 32x24x2.1 mm and weigh just two grams. Most consumer digital cameras for sale today still use standard SD cards. They have the standard “cut corner” design. miniSD cards are smaller than standard SD cards, measuring 21.5x20x1.4 mm and weighing about 0.8 grams. This is the least common size today. miniSD cards were designed to be especially small for mobile phones, but we now have a smaller size. microSD cards are the smallest size of SD card, measuring 15x11x1 mm and weighing just 0.25 grams. These cards are used in most cell phones and smartphones that support SD cards. They’re also used in many other devices, such as tablets. 
SD cards will only fit into matching slots. You can’t plug a microSD card into a standard SD card slot — it won’t fit. However, you can purchase an adapter that allows you to plug a smaller SD card into a larger SD card’s form and fit it into the appropriate slot. Capacity Like USB flash drives, hard drives, solid-state drives, and other storage media, different SD cards can have different amounts of storage. But the differences between SD card capacities don’t stop there. Standard SDSC (SD) cards are 1 MB to 2 GB in size, or perhaps 4 GB in size — although 4 GB is non-standard. The SDHC standard was created later, and allows cards 2 GB to 32 GB in size. SDXC is a more recent standard that allows cards 32 GB to 2 TB in size. You’ll need a device that supports SDHC or SDXC cards to use them. At this point, the vast majority of devices should support SDHC. In fact, the SD cards you have are probably SDHC cards. SDXC is newer and less common. When buying an SD card, you’ll need to buy the right speed class, size, and capacity for your needs. Be sure to check what your device supports and consider what speed and capacity you’ll actually need. Image Credit: Ryosuke SEKIDO on Flickr, Clive Darra on Flickr, Steven Depolo on Flickr

    Read the article

  • Unable to sign in. How to debug?

    - by Dmitriy Budnik
I had to reboot the system with the reset button. After the reboot I can't sign in. When I enter my password, it seems like the X server just restarts. I can sign in as a guest, and I can also sign in on a text TTY. Here are the first 150 lines of my lightdm.log: [+0.04s] DEBUG: Logging to /var/log/lightdm/lightdm.log [+0.04s] DEBUG: Starting Light Display Manager 1.2.1, UID=0 PID=1070 [+0.04s] DEBUG: Loaded configuration from /etc/lightdm/lightdm.conf [+0.04s] DEBUG: Using D-Bus name org.freedesktop.DisplayManager [+0.04s] DEBUG: Registered seat module xlocal [+0.04s] DEBUG: Registered seat module xremote [+0.04s] DEBUG: Adding default seat [+0.04s] DEBUG: Starting seat [+0.04s] DEBUG: Starting new display for automatic login as user dmytro [+0.04s] DEBUG: Starting local X display [+3.64s] DEBUG: X server :0 will replace Plymouth [+3.66s] DEBUG: Using VT 7 [+3.66s] DEBUG: Activating VT 7 [+3.66s] DEBUG: Logging to /var/log/lightdm/x-0.log [+3.66s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+3.66s] DEBUG: Launching X Server [+3.66s] DEBUG: Launching process 1154: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none [+3.66s] DEBUG: Waiting for ready signal from X server :0 [+3.66s] DEBUG: Acquired bus name org.freedesktop.DisplayManager [+3.66s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0 [+10.78s] DEBUG: Got signal 10 from process 1154 [+10.78s] DEBUG: Got signal from X server :0 [+10.78s] DEBUG: Stopping Plymouth, X server is ready [+10.80s] DEBUG: Connecting to XServer :0 [+10.80s] DEBUG: Automatically logging in user dmytro [+10.80s] DEBUG: Started session 1303 with service 'lightdm-autologin', username 'dmytro' [+13.22s] DEBUG: Session 1303 authentication complete with return value 0: Success [+13.26s] DEBUG: Autologin user dmytro authorized [+13.27s] DEBUG: Autologin using session ubuntu [+14.44s] DEBUG: Dropping privileges to uid 1000 [+14.48s] DEBUG: Restoring privileges [+14.49s] DEBUG: Dropping privileges to uid 1000 [+14.49s] DEBUG: Writing /home/dmytro/.dmrc [+14.61s] DEBUG: Restoring privileges [+14.81s] DEBUG: Starting session ubuntu as user dmytro [+14.81s] DEBUG: Session 1303 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu [+15.76s] DEBUG: New display ready, switching to it [+15.76s] DEBUG: Activating VT 7 [+15.76s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session0 [+16.63s] DEBUG: Session 1303 exited with return value 0 [+16.63s] DEBUG: User session quit [+16.63s] DEBUG: Stopping display [+16.63s] DEBUG: Sending signal 15 to process 1154 [+17.19s] DEBUG: Process 1154 exited with return value 0 [+17.19s] DEBUG: X server stopped [+17.19s] DEBUG: Removing X server authority /var/run/lightdm/root/:0 [+17.19s] DEBUG: Releasing VT 7 [+17.19s] DEBUG: Display server stopped [+17.19s] DEBUG: Display stopped [+17.19s] DEBUG: Active display stopped, switching to greeter [+17.19s] DEBUG: Switching to greeter [+17.19s] DEBUG: Starting new display for greeter [+17.19s] DEBUG: Starting local X display [+17.19s] DEBUG: Using VT 7 [+17.19s] DEBUG: Logging to /var/log/lightdm/x-0.log [+17.19s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+17.19s] DEBUG: Launching X Server [+17.19s] DEBUG: Launching process 1563: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch [+17.19s] DEBUG: Waiting for ready signal from X server :0 [+17.48s] DEBUG: Got signal 10 from process 1563 [+17.48s] DEBUG: Got signal from X server :0 [+17.48s] 
DEBUG: Connecting to XServer :0 [+17.48s] DEBUG: Starting greeter [+17.48s] DEBUG: Started session 1575 with service 'lightdm', username 'lightdm' [+17.61s] DEBUG: Session 1575 authentication complete with return value 0: Success [+17.61s] DEBUG: Greeter authorized [+17.61s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log [+17.68s] DEBUG: Session 1575 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter [+20.86s] DEBUG: Greeter connected version=1.2.1 [+20.86s] DEBUG: Greeter connected, display is ready [+20.86s] DEBUG: New display ready, switching to it [+20.86s] DEBUG: Activating VT 7 [+20.86s] DEBUG: Stopping greeter display being switched from [+24.90s] DEBUG: Greeter start authentication for dmytro [+24.90s] DEBUG: Started session 1746 with service 'lightdm', username 'dmytro' [+25.10s] DEBUG: Session 1746 got 1 message(s) from PAM [+25.10s] DEBUG: Prompt greeter with 1 message(s) [+31.87s] DEBUG: Continue authentication [+33.75s] DEBUG: Session 1746 authentication complete with return value 7: Authentication failure [+33.75s] DEBUG: Authenticate result for user dmytro: Authentication failure [+33.75s] DEBUG: Greeter start authentication for dmytro [+33.75s] DEBUG: Session 1746: Sending SIGTERM [+33.75s] DEBUG: Started session 2264 with service 'lightdm', username 'dmytro' [+33.75s] DEBUG: Session 2264 got 1 message(s) from PAM [+33.75s] DEBUG: Prompt greeter with 1 message(s) [+36.41s] DEBUG: Continue authentication [+36.53s] DEBUG: Session 2264 authentication complete with return value 0: Success [+36.53s] DEBUG: Authenticate result for user dmytro: Success [+36.54s] DEBUG: User dmytro authorized [+36.54s] DEBUG: Greeter requests session ubuntu [+36.54s] DEBUG: Using session ubuntu [+36.54s] DEBUG: Stopping greeter [+36.54s] DEBUG: Session 1575: Sending SIGTERM [+37.41s] DEBUG: Greeter closed communication channel [+37.41s] DEBUG: Session 1575 exited with return value 0 [+37.41s] DEBUG: Greeter quit [+37.42s] DEBUG: Dropping privileges to uid 1000 [+37.42s] DEBUG: Restoring privileges [+37.43s] DEBUG: Dropping privileges to uid 1000 [+37.43s] DEBUG: Writing /home/dmytro/.dmrc [+38.35s] DEBUG: Restoring privileges [+40.37s] DEBUG: Starting session ubuntu as user dmytro [+40.37s] DEBUG: Session 2264 running command /usr/sbin/lightdm-session gnome-session --session=ubuntu [+40.39s] DEBUG: Registering session with bus path /org/freedesktop/DisplayManager/Session1 [+50.78s] DEBUG: Session 2264 exited with return value 0 [+50.78s] DEBUG: User session quit [+50.78s] DEBUG: Stopping display [+50.78s] DEBUG: Sending signal 15 to process 1563 [+51.53s] DEBUG: Process 1563 exited with return value 0 [+51.53s] DEBUG: X server stopped [+51.53s] DEBUG: Removing X server authority /var/run/lightdm/root/:0 [+51.53s] DEBUG: Releasing VT 7 [+51.53s] DEBUG: Display server stopped [+51.53s] DEBUG: Display stopped [+51.53s] DEBUG: Active display stopped, switching to greeter [+51.53s] DEBUG: Switching to greeter [+51.53s] DEBUG: Starting new display for greeter [+51.53s] DEBUG: Starting local X display [+51.53s] DEBUG: Using VT 7 [+51.53s] DEBUG: Logging to /var/log/lightdm/x-0.log [+51.53s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0 [+51.53s] DEBUG: Launching X Server [+51.53s] DEBUG: Launching process 2894: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch [+51.53s] DEBUG: Waiting for ready signal from X server :0 [+51.75s] DEBUG: Got signal 10 from process 2894 [+51.75s] DEBUG: Got signal from X server :0 [+51.75s] DEBUG: 
Connecting to XServer :0 [+51.75s] DEBUG: Starting greeter [+51.75s] DEBUG: Started session 2898 with service 'lightdm', username 'lightdm' [+51.76s] DEBUG: Session 2898 authentication complete with return value 0: Success [+51.76s] DEBUG: Greeter authorized [+51.76s] DEBUG: Logging to /var/log/lightdm/x-0-greeter.log [+51.76s] DEBUG: Session 2898 running command /usr/lib/lightdm/lightdm-greeter-session /usr/sbin/unity-greeter [+53.26s] DEBUG: Greeter connected version=1.2.1 [+53.26s] DEBUG: Greeter connected, display is ready [+53.26s] DEBUG: New display ready, switching to it [+53.26s] DEBUG: Activating VT 7 [+53.26s] DEBUG: Stopping greeter display being switched from [+54.17s] DEBUG: Greeter start authentication for dmytro [+54.17s] DEBUG: Started session 3152 with service 'lightdm', username 'dmytro' [+54.18s] DEBUG: Session 3152 got 1 message(s) from PAM [+54.18s] DEBUG: Prompt greeter with 1 message(s) [+58.61s] DEBUG: Continue authentication [+58.65s] DEBUG: Session 3152 authentication complete with return value 0: Success [+58.65s] DEBUG: Authenticate result for user dmytro: Success [+58.66s] DEBUG: User dmytro authorized [+58.66s] DEBUG: Greeter requests session ubuntu [+58.66s] DEBUG: Using session ubuntu [+58.66s] DEBUG: Stopping greeter [+58.66s] DEBUG: Session 2898: Sending SIGTERM How can I fix it? What other log files could possibly give me a clue? Update: possibly it's a duplicate of "Desktop login fails, terminal works".
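A hedged observation from the log itself: authentication succeeds ("Session 2264 authentication complete ... Success"), the user session starts, and then it exits with return value 0 about ten seconds later. That is the classic signature of the X session aborting on a root-owned file in the home directory, most often ~/.Xauthority left behind by a GUI application run under sudo. A cheap check from the text TTY that still works:

# From a text TTY (Ctrl+Alt+F1), look for root-owned session files:
ls -la ~/.Xauthority ~/.ICEauthority
# If either is owned by root, reclaim them and retry the graphical login:
sudo chown dmytro:dmytro ~/.Xauthority ~/.ICEauthority

As for other logs worth reading: ~/.xsession-errors usually names the component that aborted, and /var/log/Xorg.0.log covers the X server side.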

    Read the article

  • Ubuntu Natty: 32-bit userland, 64-bit kernel?

    - by dsimcha
    I'm trying to manually install a 64-bit kernel for 32-bit Ubuntu. I have my reasons for doing so, but they're too complicated to explain here. Prior to Natty, this worked fine. Now, on Natty, I get the following error message when I try doing it the same way: dsimcha@dsimcha-laptop:~$ sudo dpkg -i --force-architecture linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb [sudo] password for dsimcha: dpkg: error processing linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb (--install): cannot access archive: No such file or directory Errors were encountered while processing: linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb dsimcha@dsimcha-laptop:~$ cd Downloads/ dsimcha@dsimcha-laptop:~/Downloads$ sudo dpkg -i --force-architecture linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb dpkg: warning: overriding problem because --force enabled: package architecture (amd64) does not match system (i386) (Reading database ... 159153 files and directories currently installed.) Preparing to replace linux-image-2.6.38-8-server:amd64 2.6.38-8.42 (using linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb) ... Done. Unpacking replacement linux-image-2.6.38-8-server:amd64 ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.38-8-server /boot/vmlinuz-2.6.38-8-server run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.38-8-server /boot/vmlinuz-2.6.38-8-server dpkg: dependency problems prevent configuration of linux-image-2.6.38-8-server:amd64: linux-image-2.6.38-8-server:amd64 depends on initramfs-tools (>= 0.36ubuntu6). linux-image-2.6.38-8-server:amd64 depends on coreutils | fileutils (>= 4.0); however: Package coreutils:amd64 is not installed. linux-image-2.6.38-8-server:amd64 depends on module-init-tools (>= 3.3-pre11-4ubuntu3); however: linux-image-2.6.38-8-server:amd64 depends on wireless-crda; however: dpkg: error processing linux-image-2.6.38-8-server:amd64 (--install): dependency problems - leaving unconfigured Errors were encountered while processing: linux-image-2.6.38-8-server:amd64 When I try the dependencies manually, I get, for example: dsimcha@dsimcha-laptop:~/Downloads$ sudo dpkg -i --force-architecture coreutils_8.5-1ubuntu6_amd64.deb dpkg: warning: overriding problem because --force enabled: package architecture (amd64) does not match system (i386) dpkg: error processing coreutils_8.5-1ubuntu6_amd64.deb (--install): coreutils:amd64 8.5-1ubuntu6 (Multi-Arch: no) is not co-installable with coreutils:i386 8.5-1ubuntu6 (Multi-Arch: no) which is currently installed Errors were encountered while processing: coreutils_8.5-1ubuntu6_amd64.deb Has anyone had any success installing 64-bit kernels on 32-bit Natty? If so, how can this be done?
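One hedged workaround, untested here: the 64-bit kernel image has no real need for the 64-bit userland packages it declares dependencies on, so telling dpkg to skip the dependency check as well as the architecture check may be enough, followed by regenerating the boot pieces by hand:

# Force past both the architecture and the dependency checks:
sudo dpkg -i --force-architecture --force-depends \
    linux-image-2.6.38-8-server_2.6.38-8.42_amd64.deb
# Rebuild the initramfs and grub entries for the new kernel:
sudo update-initramfs -c -k 2.6.38-8-server
sudo update-grub

Note that the first failure in the question is unrelated: dpkg reported "cannot access archive" simply because the .deb was not in the current directory at that point.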

    Read the article

  • Globe SSL with NGINX SSL certificate problem, please help

    - by PartySoft
Hello, I have a big problem with installing a certificate for nginx (the same happens with Apache, though). I have three files: __domain_com.crt, __domain_com.ca-bundle, and ssl.key. I tried to append them with cat __domain_com.crt __leechpack_com.ca-bundle > bundle.crt, but if I do it like this I get an error: [emerg]: SSL_CTX_use_certificate_chain_file("/etc/nginx/__leechpack_com.crt") failed (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line error:140DC009:SSL routines:SSL_CTX_use_certificate_chain_file:PEM lib) And that's because the delimiters of the certificates aren't separated: ZqTjb+WBJQ== -----END CERTIFICATE----------BEGIN CERTIFICATE----- MIIE6DCCA9CgAwIBAgIQdIYhlpUQySkmKUvMi/gpLDANBgkqhkiG9w0BAQUFADBv If I separate them with a newline between certificates, it will at least start, but I get this warning from Firefox: This Connection is Untrusted You have asked Firefox to connect securely to domain.com, but we can't confirm that your connection is secure. The concatenation solution is the one given by Globe SSL and the NGINX site, but it doesn't work; I think the bundle is being ignored. http://customer.globessl.com/knowledgebase/55/Certificate-Installation--Nginx.html http://nginx.org/en/docs/http/configuring_https_servers.html#chains%20http://wiki.nginx.org/NginxHttpSslModule If I run openssl s_client -connect down.leechpack.com:443 I get: CONNECTED(00000003) depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=27:certificate not trusted verify return:1 depth=0 /OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com i:/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA 1 s:/C=US/O=Globe Hosting, Inc./OU=GlobeSSL DV Certification Authority/CN=GlobeSSL CA i:/C=SE/O=AddTrust AB/OU=AddTrust External TTP Network/CN=AddTrust External CA Root --- Server certificate -----BEGIN CERTIFICATE----- MIIFQzCCBCugAwIBAgIQRnpCmtwX7z7GTla0QktE6DANBgkqhkiG9w0BAQUFADBl MQswCQYDVQQGEwJSTzEuMCwGA1UEChMlR0xPQkUgSE9TVElORyBDRVJUSUZJQ0FU SU9OIEFVVEhPUklUWTEmMCQGA1UEAxMdR0xPQkUgU1NMIERvbWFpbiBWYWxpZGF0 ZWQgQ0EwHhcNMTAwMjExMDAwMDAwWhcNMTEwMjExMjM1OTU5WjCBjTEhMB8GA1UE CxMYRG9tYWluIENvbnRyb2wgVmFsaWRhdGVkMSgwJgYDVQQLEx9Qcm92aWRlZCBi eSBHbG9iZSBIb3N0aW5nLCBJbmMuMSQwIgYDVQQLExtHbG9iZSBTdGFuZGFyZCBX aWxkY2FyZCBTU0wxGDAWBgNVBAMUDyoubGVlY2hwYWNrLmNvbTCCASIwDQYJKoZI hvcNAQEBBQADggEPADCCAQoCggEBAKX7jECMlYEtcvqVWQVUpXNxO/VaHELghqy/ Ml8dOfOXG29ZMZsKUMqS0jXEwd+Bdpm31lBxOALkj8o79hX0tspLMjgtCnreaker 49y62BcjfguXRFAaiseXTNbMer5lDWiHlf1E7uCoTTiczGqBNfl6qSJlpe4rYBtq XxBAiygaNba6Owghuh19+Uj8EICb2pxbJNFfNzU1D9InFdZSVqKHYBem4Cdrtxua W4+YONsfLnnfkRQ6LOLeYExHziTQhSavSv9XaCl9Zqzm5/eWbQqLGRpSJoEPY/0T GqnmeMIq5M35SWZgOVV10j3pOCS8o0zpp7hMJd2R/HwVaPCLjukCAwEAAaOCAcQw ggHAMB8GA1UdIwQYMBaAFB9UlnKtPUDnlln3STFTCWb5DWtyMB0GA1UdDgQWBBT0 8rPIMr7JDa2Xs5he5VXAvMWArjAOBgNVHQ8BAf8EBAMCBaAwDAYDVR0TAQH/BAIw ADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwVQYDVR0gBE4wTDBKBgsr BgEEAbIxAQICGzA7MDkGCCsGAQUFBwIBFi1odHRwOi8vd3d3Lmdsb2Jlc3NsLmNv bS9kb2NzL0dsb2JlU1NMX0NQUy5wZGYwRgYDVR0fBD8wPTA7oDmgN4Y1aHR0cDov 
L2NybC5nbG9iZXNzbC5jb20vR0xPQkVTU0xEb21haW5WYWxpZGF0ZWRDQS5jcmww dwYIKwYBBQUHAQEEazBpMEEGCCsGAQUFBzAChjVodHRwOi8vY3J0Lmdsb2Jlc3Ns LmNvbS9HTE9CRVNTTERvbWFpblZhbGlkYXRlZENBLmNydDAkBggrBgEFBQcwAYYY aHR0cDovL29jc3AuZ2xvYmVzc2wuY29tMCkGA1UdEQQiMCCCDyoubGVlY2hwYWNr LmNvbYINbGVlY2hwYWNrLmNvbTANBgkqhkiG9w0BAQUFAAOCAQEAB2Y7vQsq065K s+/n6nJ8ZjOKbRSPEiSuFO+P7ovlfq9OLaWRHUtJX0sLntnWY1T9hVPvS5xz/Ffl w9B8g/EVvvfMyOw/5vIyvHq722fAAC1lWU1rV3ww0ng5bgvD20AgOlIaYBvRq8EI 5Dxo2og2T1UjDN44GOSWsw5jetvVQ+SPeNPQLWZJS9pNCzFQ/3QDWNPOvHqEeRcz WkOTCqbOSZYvoSPvZ3APh+1W6nqiyoku/FCv9otSCtXPKtyVa23hBQ+iuxqIM4/R gncnUKASi6KQrWMQiAI5UDCtq1c09uzjw+JaEzAznxEgqftTOmXAJSQGqZGd6HpD ZqTjb+WBJQ== -----END CERTIFICATE----- subject=/OU=Domain Control Validated/OU=Provided by Globe Hosting, Inc./OU=Globe Standard Wildcard SSL/CN=*.domain.com issuer=/C=RO/O=GLOBE HOSTING CERTIFICATION AUTHORITY/CN=GLOBE SSL Domain Validated CA --- No client certificate CA names sent --- SSL handshake has read 3313 bytes and written 343 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DHE-RSA-AES256-SHA Session-ID: 5F9C8DC277A372E28A4684BAE5B311533AD30E251369D144A13DECA3078E067F Session-ID-ctx: Master-Key: 9B531A75347E6E7D19D95365C1208F2ED37E4004AA8F71FC614A18937BEE2ED9F82D58925E0B3931492AD3D2AA6EFD3B Key-Arg : None Start Time: 1288618211 Timeout : 300 (sec) Verify return code: 21 (unable to verify the first certificate) ---
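A hedged fix for the bad end line error: the first file almost certainly lacks a trailing newline, so the END marker of one certificate and the BEGIN marker of the next fuse into one line when concatenated, exactly as shown above. Rebuilding the bundle with an explicit newline between the files usually repairs it (file names as given in the question):

# Rebuild the bundle with a guaranteed newline between the parts:
(cat __domain_com.crt; echo; cat __domain_com.ca-bundle) > bundle.crt
# Every END marker should now sit on its own line:
grep -n 'END CERTIFICATE' bundle.crt
# And the site certificate should verify against the bundled chain:
openssl verify -CAfile __domain_com.ca-bundle __domain_com.crt

The s_client output supports the same theory: "unable to verify the first certificate", with only the GlobeSSL intermediates in the chain, means nginx is serving the site certificate without a properly attached chain, which is exactly what pointing ssl_certificate at the repaired bundle.crt is meant to provide.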

    Read the article
