Search Results

Search found 51448 results on 2058 pages for 'log files'.

Page 294/2058 | < Previous Page | 290 291 292 293 294 295 296 297 298 299 300 301  | Next Page >

  • Is it possible to copy U1 files between two PCs locally when an Ubuntu user account is recreated?

    - by Federico Ghigo
    I have 2 Ubuntu 11.04 PCs (a desktop and a laptop), both synced via Ubuntu One. Recently I completely rebuilt the user account on one of them (the desktop), deleting the home directory entirely, and now I have to resync. The problem is that this PC is on a slow connection, and I'm having trouble resyncing the 14 GB of data. I could move it somewhere with a faster connection, but that's inconvenient and would take some time. Since the laptop is already in sync with the account, I was wondering whether stopping the service and copying the files plus some database (which files is the question) could let me avoid resyncing everything.
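
    A minimal sketch of the local-copy idea, assuming the Ubuntu One client keeps its data in ~/Ubuntu One and its syncdaemon metadata under ~/.local/share/ubuntuone and ~/.cache/ubuntuone, and that u1sdtool is available to stop and start the daemon; the exact paths and options can differ between releases, so treat this as a starting point rather than a guaranteed recipe ("desktop" is a placeholder hostname):

      # On both machines: stop the Ubuntu One sync daemon first
      u1sdtool --quit

      # Run on the laptop: copy the synced data and the syncdaemon metadata across
      rsync -a ~/Ubuntu\ One/ desktop:~/Ubuntu\ One/
      rsync -a ~/.local/share/ubuntuone/ desktop:~/.local/share/ubuntuone/
      rsync -a ~/.cache/ubuntuone/ desktop:~/.cache/ubuntuone/

      # On the desktop: start the daemon again and let it reconcile with the server
      u1sdtool --start
      u1sdtool --status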

    Read the article

  • nginx + php-fpm: help optimizing configs

    - by Dmitro
    I have 3 servers. The first server (CPU model name: 06/17, 2.66 GHz, 4 cores, 8 GB RAM) runs nginx as a load balancer with the following config:

      upstream lb_mydomain { server mydomain.ru:81 weight=2; server 66.0.0.18 weight=6; } server { listen 80; server_name ~(?!mydomain.ru)(.*); client_max_body_size 20m; location / { proxy_pass http://lb_mydomain; proxy_redirect off; proxy_set_header Connection close; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass_header Set-Cookie; proxy_pass_header P3P; proxy_pass_header Content-Type; proxy_pass_header Content-Disposition; proxy_pass_header Content-Length; } }

    From its nginx.conf:

      user www-data; worker_processes 5; # worker_priority -1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 5024; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; default_type application/octet-stream; #tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; # PHP-FPM (backend) upstream php-fpm { server 127.0.0.1:9000; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; }

    And its php-fpm config:

      listen = 127.0.0.1:9000 ;listen.backlog = -1 ;listen.allowed_clients = 127.0.0.1 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 user = www-data group = www-data pm = dynamic pm.max_children = 80 ;pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 ;pm.max_requests = 500 pm.status_path = /status ping.path = /ping ;ping.response = pong request_terminate_timeout = 30s request_slowlog_timeout = 10s slowlog = /var/log/php-fpm.log.slow ;rlimit_files = 1024 ;rlimit_core = 0 ;chroot = chdir = /var/www ;catch_workers_output = yes ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M

    In top I see 20 php-fpm processes, each using 1%-15% CPU, and the load average is high:

      top - 15:36:22 up 34 days, 20:54, 1 user, load average: 5.98, 7.75, 8.78 Tasks: 218 total, 1 running, 217 sleeping, 0 stopped, 0 zombie Cpu(s): 34.1%us, 3.2%sy, 0.0%ni, 37.0%id, 24.8%wa, 0.0%hi, 0.9%si, 0.0%st Mem: 8183228k total, 7538584k used, 644644k free, 351136k buffers Swap: 9936892k total, 14636k used, 9922256k free, 990540k cached

    The second server (CPU model name: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz, 8 cores, 8 GB RAM) has this nginx.conf:

      user www-data; worker_processes 5; # worker_priority -1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 5024; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; default_type application/octet-stream; #tcp_nopush on; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; # PHP-FPM (backend) upstream php-fpm { server 127.0.0.1:9000; } include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; }

    And this php-fpm config:

      listen = 127.0.0.1:9000 ;listen.backlog = -1 ;listen.allowed_clients = 127.0.0.1 ;listen.owner = www-data ;listen.group = www-data ;listen.mode = 0666 user = www-data group = www-data pm = dynamic pm.max_children = 50 ;pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 ;pm.max_requests = 500 ;pm.status_path = /status ;ping.path = /ping ;ping.response = pong ;request_terminate_timeout = 0 ;request_slowlog_timeout = 0 ;slowlog = /var/log/php-fpm.log.slow ;rlimit_files = 1024 ;rlimit_core = 0 ;chroot = chdir = /var/www ;catch_workers_output = yes ;env[HOSTNAME] = $HOSTNAME ;env[PATH] = /usr/local/bin:/usr/bin:/bin ;env[TMP] = /tmp ;env[TMPDIR] = /tmp ;env[TEMP] = /tmp ;php_admin_value[sendmail_path] = /usr/sbin/sendmail -t -i -f [email protected] ;php_flag[display_errors] = off ;php_admin_value[error_log] = /var/log/fpm-php.www.log ;php_admin_flag[log_errors] = on ;php_admin_value[memory_limit] = 32M

    In top I see 50 php-fpm processes, each using 10%-25% CPU, and the load average is high:

      top - 15:53:05 up 33 days, 1:15, 1 user, load average: 41.35, 40.28, 39.61 Tasks: 239 total, 40 running, 199 sleeping, 0 stopped, 0 zombie Cpu(s): 96.5%us, 3.1%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.4%si, 0.0%st Mem: 8185560k total, 7804224k used, 381336k free, 161648k buffers Swap: 19802108k total, 16k used, 19802092k free, 5068112k cached

    The third server hosts the PostgreSQL database. I also tried ab -n 50 -c 5 http://www.mydomain.ru/ and got the following:

      Complete requests: 50 Failed requests: 48 (Connect: 0, Receive: 0, Length: 48, Exceptions: 0) Write errors: 0 Total transferred: 9271367 bytes HTML transferred: 9247767 bytes Requests per second: 1.02 [#/sec] (mean) Time per request: 4882.427 [ms] (mean) Time per request: 976.486 [ms] (mean, across all concurrent requests) Transfer rate: 185.44 [Kbytes/sec] received

    Please advise: how can I bring the load average down?
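
    Two observations from the data above: the first server shows 24.8%wa with 37% idle, so part of its load looks I/O-bound rather than CPU-bound, while the second is clearly CPU-bound (96.5%us, 0% idle) with 50 workers on 8 cores. A common starting point is to size pm.max_children from what a worker actually costs instead of guessing; a rough sketch, assuming the workers show up under the process name php-fpm (on some setups it is php5-fpm, so adjust the -C argument):

      # Average resident memory per php-fpm worker, in MB
      ps -C php-fpm -o rss= | awk '{sum+=$1; n++} END {if (n) printf "avg %.0f MB per worker over %d workers\n", sum/n/1024, n}'

      # Then, as a rule of thumb:
      #   pm.max_children ~= (RAM you can spare for PHP) / (average worker size)
      # e.g. 4096 MB budget / 60 MB per worker  ->  pm.max_children around 68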

    Read the article

  • rsnapshot schedule overlapping, help with backup schedule

    - by Znarkus
    Hello, I have the following configuration.

    rsnapshot.conf:

      interval halfhourly 4
      interval hourly 6
      interval twohourly 12
      interval daily 7
      interval weekly 4

    crontab:

      0,30 * * * * /usr/bin/rsnapshot halfhourly >> /var/log/rsnapshot.halfhourly.log 2>&1
      5 * * * * /usr/bin/rsnapshot hourly >> /var/log/rsnapshot.hourly.log 2>&1
      10 */2 * * * /usr/bin/rsnapshot twohourly >> /var/log/rsnapshot.twohourly.log 2>&1
      15 3 * * * /usr/bin/rsnapshot daily >> /var/log/rsnapshot.daily.log 2>&1
      20 6 * * MON /usr/bin/rsnapshot weekly >> /var/log/rsnapshot.weekly.log 2>&1

    Only halfhourly is running correctly now; hourly spits out this error:

      rsnapshot encountered an error! The program was invoked with these options: /usr/bin/rsnapshot hourly ---------------------------------------------------------------------------- ERROR: Lockfile /var/run/rsnapshot.pid exists and so does its process, can not continue

    It seems to me that the 5-minute gap between halfhourly and hourly is too small. Is this configuration crazy? I like having backups every thirty minutes; that will probably save my ass some day. Please help me make a decent backup schedule that doesn't clog up the system but still creates frequent enough backups. Thank you.
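
    rsnapshot takes a single lockfile per run, so the hourly job at :05 collides with a halfhourly sync started at :00 that is still rsyncing. One possible staggered schedule, assuming each half-hourly sync finishes within about 20 minutes, puts the rotation-only intervals in the quiet window after a sync, largest first (the usual advice being to run the larger intervals just before the smaller ones); if a rotation ever takes more than a minute or two, spread these further apart:

      # halfhourly does the actual rsync at the top and bottom of the hour
      0,30 * * * * /usr/bin/rsnapshot halfhourly >> /var/log/rsnapshot.halfhourly.log 2>&1
      # rotation-only intervals, largest first, in the quiet window after a sync
      20 6 * * 1   /usr/bin/rsnapshot weekly    >> /var/log/rsnapshot.weekly.log 2>&1
      22 3 * * *   /usr/bin/rsnapshot daily     >> /var/log/rsnapshot.daily.log 2>&1
      24 */2 * * * /usr/bin/rsnapshot twohourly >> /var/log/rsnapshot.twohourly.log 2>&1
      26 * * * *   /usr/bin/rsnapshot hourly    >> /var/log/rsnapshot.hourly.log 2>&1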

    Read the article

  • How to partition Seagate FreeAgent GoFlex 2TB hard disk?

    - by balki
    Hi, I bought a new Seagate 2TB external hard disk. I opened the drive's application in my Windows VM and did the product registration using the application included on it. I have a few questions on how best to use it. The drive by default contains some files and folders: setup.exe, System Volume Information, USB 3.0 PC Card Adapter, etc. I copied all of these files to my laptop. Is it safe to delete them from the drive? It has a dashboard for Windows which lets you tune power options, test the drive, and so on. Will I still be able to use the dashboard if I put all these files back and mount the drive on Windows again? I want to partition and format the hard disk. The data I'd like to store: VirtualBox images (files of around 10 to 20 GB), DVD images (around 4 GB each), and other movies and personal files. What is the best filesystem for very large files like these 10-20 GB ones, so that they are written and accessed fast and the drive's capacity is used well? If I leave one of the partitions as NTFS and format the others with different filesystems, will the drive still mount on Windows, and will I be able to use the device's dashboard? Note: I don't need any encryption for my data. Any other advice on using the hard disk is also welcome.
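
    For reference, a hedged sketch of one way to split the drive from Ubuntu: keep an NTFS partition so Windows (and the Seagate dashboard) can still use the disk, and put the large VirtualBox/DVD images on ext4, which handles multi-gigabyte files comfortably. The device name /dev/sdX and the 500 GB split are placeholders; double-check the device with fdisk -l or lsblk before running anything, since these commands destroy data:

      # Identify the disk first -- getting this wrong destroys data
      sudo fdisk -l

      # Example: one 500 GB NTFS partition for Windows use, the rest as ext4
      sudo parted /dev/sdX mklabel msdos
      sudo parted /dev/sdX mkpart primary ntfs 1MiB 500GiB
      sudo parted /dev/sdX mkpart primary ext4 500GiB 100%
      sudo mkfs.ntfs -f -L SEAGATE_WIN /dev/sdX1
      sudo mkfs.ext4 -L SEAGATE_LINUX /dev/sdX2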

    Read the article

  • How can I recursively change the permissions of files and directories?

    - by Nikhil
    I have Ubuntu installed on my local computer with Apache, PHP and MySQL. I have a directory at /var/www, inside which I keep several of my ongoing projects. I also work with open-source projects (Drupal, Magento, SugarCRM). The problem I am facing is changing file permissions from the terminal. Sometimes I need to change the permissions of an entire folder and all of its sub-folders and files, and I have to change each one individually using sudo chmod 777 foldername. How can I do this recursively? Also, why do I always have to use 777? I tried 755 for folders and 644 for files, but that doesn't work.
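
    A short sketch of the usual approach: chmod -R recurses on its own, and pairing find with chmod lets directories and files get different modes (directories need the execute bit to be entered, hence 755; plain files usually only need 644). The project path and user name below are placeholders; if 755/644 still "doesn't work", the cause is often ownership rather than mode, which the last line addresses under the assumption that Apache runs as the www-data group, as it does on stock Ubuntu:

      # Everything under the folder, one mode (simple but blunt)
      sudo chmod -R 755 /var/www/myproject

      # Directories 755, files 644 -- usually what you actually want
      sudo find /var/www/myproject -type d -exec chmod 755 {} +
      sudo find /var/www/myproject -type f -exec chmod 644 {} +

      # If "permission denied" persists, fix ownership instead of loosening modes
      sudo chown -R youruser:www-data /var/www/myproject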

    Read the article

  • MOSS 2007 WSP Retraction 'Error'

    - by juanlarios
    This one is a quick post, but I thought I would share this information as I could not find anything that helped me with this specific scenario. Please read the entire article before taking action, as there are some irreversible or very troublesome routes I caution about!

    Problem: A client was trying to retract a WSP from Central Admin and it would eventually go to an 'Error' state. I could not retract it, and after looking at the event logs I figured it was a problem with security. I tried several accounts and checked the databases to see if there was some issue with read-only databases, and nothing was working.

    Solution: Delete the solution from Central Admin! Yes, I said it. With stsadm, just delete the solution from Central Admin using this command:

      "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\STSADM.exe" -o deletesolution -name "yoursolution.wsp"

    What has just happened is that Central Admin no longer knows about the WSP, but the feature and any deployed files are still on the server. For whatever reason SharePoint was not able to retract the files as it normally does. Now you can do one of two things: add the solution to Central Admin again and deploy over the top of the existing files so they are overwritten, or simply clean up the files manually. I re-added the solution through stsadm, then deployed through stsadm using the -force option, which overwrites the existing files on the server. If you deploy through Central Admin it will tell you that you need the -force option, which is not offered as part of the Central Admin UI. Use the following command:

      "C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN\STSADM.exe" -o deploysolution -name "YourSolution.wsp" -immediate -allowgacdeployment -force

    Just to make sure everything was good, I retracted the solution again, and it worked! Then I deleted the solution from Central Admin altogether, checked the server, and saw that all the files deployed with the WSP had been cleaned up properly. I then re-added the new WSP the client was looking to install (an updated WSP).

    Conclusion: I have no idea why it was not able to retract, but I have seen this several times. I don't know if it has to do with the security of certain accounts. Although it's annoying at times, it is fairly easy to fix if you have good instructions. Hope it helps you out!

    ***WORD OF CAUTION - if you clean up the files manually, you might want to uninstall the features through stsadm commands, as SharePoint might still recognize the features that were deployed with the WSP. You might not want to get into the mess of deleting files that are still part of activated or installed features. This is why I suggest doing what I did.

    Read the article

  • Windows XP: How to delete files and folders that cannot be deleted?

    - by glenneroo
    I have a backup copy of a previous Windows installation's Documents and Settings folder, which only contains my original user and, within it, 2 more directories: Favorites and Local Settings. When I try to delete Local Settings I get this error: When I try to delete Favorites, I get this error: I ran this in a cmd shell: attrib *.* -r -a -s -h /s ...but it did not help, nor did it return any errors/warnings. I used Unlocker v1.8.5 and LockHunter repeatedly at multiple levels to see if any files are in use, but both always say: No Files Locked.

    Update #1: I was able to rename the directory, which now gives me this warning before (trying to) delete: If I press Yes (or Yes to All) then I get this error:

    Update #2: I let chkdsk /f run, which required a reboot since it's on my primary system partition. During Stage 2 scanning, I received about 40 of these: Deleting an index entry from index $0 of file 25. ...followed by: Deleting index entry cookies in index $I30 of file 37576. ...but I still get the first error dialog above when trying to delete.

    Update #3: Digging deeper, the 99 is the name of one of many directories located deep in here: C:\Documents and Settings.OLD\User\Local Settings\Application Data\Microsoft\Messenger\[email protected]\SharingMetadata\[email protected]\DFSR\Staging\CS{D4E4AE55-B5E2-F03B-5189-6C4DA6E41788}\ Inside each of those directories were files with names such as: 2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-{C93D01AC-0739-4FD9-88C7-13D2F21A208E}-v2300-Downloaded.frx I noticed that, unlike all the directories, I couldn't rename any of these files. I also noticed that the file and directory names were extremely long: Original directory = 194 characters, Filenames = 100+ characters. Together the lengths exceed the 255-character limit, which would explain the error message I posted in Update #1.

    Partial Solution: Rename all directories until the total path length is less than 100. Afterwards I was able to rename the .frx files, not to mention delete everything inside the Local Settings directory. This is only a partial solution because this (empty) directory is still undeletable: C:\1\2\Favorites\Wien\What To Do.. I'm guessing that because of the ".." at the end, Windows (Explorer and cmd) can't deal with it: Here is what Explorer properties shows: Any ideas?

    Read the article

  • Why are my files in /var/lock and where did they just go?!

    - by Nicky Hajal
    I am hosting a website on Debian 5.0 and Apache 2. Today one of my websites was down; Apache said it couldn't find the directory. I located the files: the whole site that was once in /var/www/site was now in /var/lock/site, with all the files present. I was confused, but figured I'd just move it back: mv /var/lock/site /var/www. All looked fine, except that only the directories moved and the files appear to be lost! I am working on restoring from backups, but I would really love to know what happened and where my files went (the backups are a few days old). Thanks for your help!

    Read the article

  • How to get files that are dragged onto Aquamacs (or opened via Quicksilver) to open in the same frame

    - by Stefan
    In Aquamacs 2.0 preview 6, files opened externally (e.g., via Quicksilver or by dragging them onto the Aquamacs icon) always open in new frames (i.e., new windows). I would prefer new files to open in the same frame, just in a new buffer (possibly with tabs turned on). I already unchecked the option "Show Buffers in new Frames", but that only seems to affect the behavior of the built-in open command (Command-o). Are there any options for this, or some other way to modify this behavior?

    Read the article

  • Unable to decrypt certain files with secret key (PGP 6.5.8), any advice?

    - by pythonian29033
    OK, so I do some work for a client of ours that requires me to decrypt some of their suppliers' messages. The thing is, something weird happened the other day: I can only decrypt some files with an old decryption script, and for certain files I get the error: "Message is encrypted. Cannot decrypt message. It can only be decrypted by: 2048 bits, Key ID 98627E12, Created 2000-03-02 "Other Guy "". As you can see, the key is ancient (I was 9 years old when it was created), so I have no idea who this "Other Guy" is, and I can't understand why I'm able to decrypt some of the supplier's files with the decryption script while it fails for others. PS: the supplier only uses one public key, so this should work for all the files. Any advice?

    Read the article

  • phpMyAdmin causes php-fpm worker to restart (502 Bad Gateway)

    - by rndbit
    I am trying to set up a test site for myself. Everything works fine except phpMyAdmin. The PHP installation loads my test site's scripts and they work fine; however, when I try to load phpMyAdmin I get a 502 Bad Gateway error. Judging from the logs (which are not too helpful), it seems that a php-fpm worker crashes each time phpMyAdmin is accessed. No clue how or why. Does anyone have any idea?

    nginx log:

      *3 recv() failed (104: Connection reset by peer) while reading response header from upstream

    php-fpm log:

      [07-Jun-2012 14:19:51] WARNING: [pool www] child 32179 exited on signal 11 (SIGSEGV) after 3.217902 seconds from start
      [07-Jun-2012 14:19:51] NOTICE: [pool www] child 32351 started

    My nginx conf:

      user nginx; worker_processes 1; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; keepalive_timeout 65; fastcgi_buffers 8 16k; fastcgi_buffer_size 32k; include /etc/nginx/conf.d/*.conf; server { listen 443 ssl; listen 80; server_name testsite.net www.testsite.net; ssl on; ssl_certificate /var/www/html/admin/ssl/certificate.pem; ssl_certificate_key /var/www/html/admin/ssl/privatekey.pem; ssl_session_timeout 1m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5:!kEDH; ssl_prefer_server_ciphers on; access_log off; location ~ \.php$ { root /var/www/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; } location / { root /var/www/html; index index.php; } } }

    php.ini is standard, with cgi.fix_pathinfo=0.

    php-fpm.conf:

      include=/etc/php-fpm.d/*.conf [global] pid = /var/run/php-fpm/php-fpm.pid error_log = /var/log/php-fpm/error.log log_level = notice

    php-fpm.d/www.conf:

      [www] listen = 127.0.0.1:9000 listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 10 pm.start_servers = 1 pm.min_spare_servers = 1 pm.max_spare_servers = 10 slowlog = /var/log/php-fpm/www-slow.log php_flag[display_errors] = on php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on
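
    Since the worker dies with SIGSEGV, a backtrace usually points at the culprit (often an opcode or bytecode cache such as APC, or a mismatched extension). A hedged sketch of how one might capture it, assuming the pool config is the /etc/php-fpm.d/www.conf shown above and that gdb is installed; the php-fpm binary path and the service command vary by distribution, and <pid> is a placeholder:

      # Let the pool write core files (rlimit_core is a stock php-fpm pool option):
      #   in /etc/php-fpm.d/www.conf add:  rlimit_core = unlimited
      # and point the kernel somewhere writable:
      echo '/tmp/core-%e-%p' | sudo tee /proc/sys/kernel/core_pattern
      sudo service php-fpm restart

      # Reproduce the 502 by loading phpMyAdmin, then inspect the core file
      ls /tmp/core-*
      sudo gdb /usr/sbin/php-fpm /tmp/core-php-fpm-<pid>
      # (gdb) bt    <- the backtrace usually names the extension that crashed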

    Read the article

  • Unable to access USB device

    - by Tom
    Hi everyone, I'm reading my boot logs in /var/log, trying to understand why the boot process is taking so long. I found that the system can't access many USB devices, but I can't understand why. Is there a way to stop Ubuntu from trying to access them? Here are the lines:

      /var/log# grep -r "usb_id" .
      ./boot.log:usb_id[716]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input7/mouse1'
      ./boot.log:usb_id[721]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input7/event7'
      ./boot.log:usb_id[725]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input7/event7'
      ./syslog:Jan 12 21:12:05 TomsterInc usb_id[955]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'
      ./syslog:Jan 12 21:12:05 TomsterInc usb_id[956]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/mouse3'
      ./syslog:Jan 12 21:12:05 TomsterInc usb_id[963]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'
      ./daemon.log:Jan 12 21:12:05 TomsterInc usb_id[955]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'
      ./daemon.log:Jan 12 21:12:05 TomsterInc usb_id[956]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/mouse3'
      ./daemon.log:Jan 12 21:12:05 TomsterInc usb_id[963]: unable to access '/devices/pci0000:00/0000:00:1a.0/usb3/3-1/3-1.2/3-1.2:1.0/input/input16/event16'

    Any help will be greatly appreciated. Thanks in advance.

    Read the article

  • Is it possible to open Adobe Illustrator files from last close?

    - by ksy
    Say you have three files open in Adobe Illustrator and you exit the program. Is there a way to have those three files opened back up when you launch Illustrator again? This is similar to how most text editors behave: when you open the program again, all the files you last had open are still there. Also, does anyone know the terminology for this saving behavior? State saving? Googling for the answer is difficult within the sea of Illustrator 'saving files' questions.

    Read the article

  • Should I be worried about the maximum number of files in a folder on *NIX filesystems?

    - by ??????
    In a social networking project we want to store users' avatars in a folder. I think in a year or two it will reach 140K files (I've seen this issue before, and it will be around that number). I want to spread the files across folders: if a folder contains 1000 files, create another folder and store files 1001 to 2000 there, and so on. Is this a good approach, or am I just being overly cautious about the issue? (Filesystem: ext3)
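
    For scale: ext3's hard limit is on subdirectories per directory (roughly 32000), not on plain files, and with dir_index enabled lookups in a big directory stay reasonable; the main cost of 140K files in one place is slow listing and unwieldy tooling. If you do fan out, hashing the filename avoids the bookkeeping of "first 1000 here, next 1000 there". A small sketch of that idea, with the filename as a placeholder:

      # Store each avatar under a two-character prefix of its md5 hash,
      # e.g. avatars/3f/3fa4...jpg -- about 256 buckets, ~550 files each at 140K files
      name="user12345.jpg"                        # placeholder filename
      prefix=$(printf '%s' "$name" | md5sum | cut -c1-2)
      mkdir -p "avatars/$prefix"
      mv "$name" "avatars/$prefix/$name"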

    Read the article

  • How to edit source files and commit the changes to the website?

    - by ajsie
    I've got Ubuntu installed with LAMP. I'm using WebDAV to upload/download files to/from the Ubuntu web server after I've edited the PHP source files in NetBeans. However, I wonder what the best practice is for editing source files and committing those changes to the website, because if we are 2-3 developers, I guess we have to use SVN, and I have never used it before, so I wonder how it works. Should I install it and then select /var/www (Apache's webroot) as the repository folder? Then, when I check in, will all the changes apply immediately? Could someone please explain the following steps: how to download and edit the source files, upload the files, and see the new changes on the website? I have only worked with a local Apache before, and it was only me; now there will be more programmers, so I have to set up a decent, central environment for this, and I need to know how NetBeans, SVN, WebDAV and Apache all work together. Thanks!
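
    For what it's worth, a hedged sketch of one common Subversion arrangement: the repository lives outside the webroot, and /var/www is just a working copy that gets updated after each commit. Repository path, project name and server hostname below are placeholders; note that changes do not appear on the site at check-in time, only when the /var/www working copy is updated (by hand or from a post-commit hook):

      # One-time setup on the server
      sudo svnadmin create /srv/svn/mysite
      svn import /path/to/existing/code file:///srv/svn/mysite/trunk -m "initial import"
      sudo svn checkout file:///srv/svn/mysite/trunk /var/www

      # Each developer works in their own checkout, not on the live tree
      svn checkout svn+ssh://server/srv/svn/mysite/trunk ~/work/mysite
      # ...edit in NetBeans, then:
      svn commit -m "describe the change"

      # Publish: update the live working copy
      sudo svn update /var/www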

    Read the article

  • Time sync fails on Hyper-V VM, but succeeds when I log in as a domain user

    - by Richard Beier
    We have a Windows Server 2003 SP2 VM running on Hyper-V (Server 2008 R2 host). The VM has Hyper-V time synchronization enabled. I noticed that the time on the VM was fast by around 25 minutes. I saw the following in the event log: The time provider NtpClient is configured to acquire time from one or more time sources, however none of the sources are currently accessible. No attempt to contact a source will be made for 15 minutes. NtpClient has no source of accurate time. The time provider NtpClient cannot reach or is currently receiving invalid time data from ourdc.ourdomain.local (ntp.d|192.168.2.18:123-192.168.2.2:123). Time Provider NtpClient: No valid response has been received from domain controller ourdc.ourdomain.local after 8 attempts to contact it. This domain controller will be discarded as a time source and NtpClient will attempt to discover a new domain controller from which to synchronize. I had been logged in as a local user. (We have an old app that runs on this VM - it requires a user to be logged in at all times, and we use a non-domain user account for this.) When I logged in as a domain user, the clock almost immediately corrected itself. Running "w32tm /monitor" and "net time" as the domain user showed no errors, and indicated that our domain controller was the time source. Does anyone know what might cause this, and why logging in under a domain account fixes the problem? I'm wondering if the time will start to drift again. Thanks for your help, Richard

    Read the article

  • A single AD user can't log into a single Mac bound to the domain (DirectoryServices error). How can I resolve this?

    - by Ben Wyatt
    On our campus, we have about 60 Macs joined to our Active Directory domain. Most users have no problems logging into Macs, as long as their accounts are configured correctly. However, we have one particular user who is unable to log in to just some of the Macs. He has no problem with most of them, but there is one group of them (all built from the same image) that he can't log in to. The machine in question is running OS X 10.6.2. The relevant entries from secure.log are below, with the hostname and username redacted.

      Aug 16 10:32:43 hostname SecurityAgent[4411]: Could not get the user record for username from DirectoryServices.
      Aug 16 10:32:43 hostname SecurityAgent[4411]: Will sleep 1 seconds and try again (retryCount = 4)
      Aug 16 10:32:44 hostname SecurityAgent[4411]: Could not get the user record for username from DirectoryServices.
      Aug 16 10:32:44 hostname SecurityAgent[4411]: Will sleep 2 seconds and try again (retryCount = 3)
      Aug 16 10:32:46 hostname SecurityAgent[4411]: Could not get the user record for username from DirectoryServices.
      Aug 16 10:32:46 hostname SecurityAgent[4411]: Will sleep 4 seconds and try again (retryCount = 2)
      Aug 16 10:33:10 hostname SecurityAgent[4411]: Could not get the user record for username from DirectoryServices.
      Aug 16 10:33:10 hostname SecurityAgent[4411]: Will sleep 8 seconds and try again (retryCount = 1)
      Aug 16 10:33:18 hostname SecurityAgent[4411]: User info context values set for username
      Aug 16 10:33:18 hostname SecurityAgent[4411]: unknown-user (username) login attempt PASSED for auditing

    Everything I've found online suggests that our use of Mobile Accounts is causing the issue. I turned that feature off, but I still can't log in as that user. id returns a record for his account, and nothing looks out of the ordinary. Has anyone here run into this before?
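
    A few Directory Services checks that might narrow this down on one of the affected 10.6 machines, run from a local admin session; 'username' is a placeholder, and the exact name of the Active Directory node can vary between configurations, which is why the first command lists it:

      # What does this Mac call its AD node?
      dscl localhost -list "/Active Directory"

      # Can this Mac resolve the account at all, outside the loginwindow path?
      id username

      # Read the user record straight from the AD node reported above
      dscl "/Active Directory/All Domains" -read /Users/username

      # Confirm the AD node is actually on the authentication search path
      dscl /Search -read / CSPSearchPath

      # Flush any stale cached records, then retry the login
      sudo dscacheutil -flushcache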

    Read the article

  • How to copy existing movie files onto an iPad to watch?

    - by Rob Van Dam
    I have an iPad and a desktop running Ubuntu 12.04 with lots of movie files in various formats (avi, mp4, m4v, etc.). Is it possible to transfer those files from within Linux directly to the iPad over USB (without using iTunes to sync) and then play those movies on the iPad without reformatting or resizing all of them? I've used iTunes in a Windows XP VirtualBox instance with other iOS devices before, but I would prefer a pure Linux approach if possible. (I'm creating this question so that I can share the solution I found, because I failed to find a simple, satisfactory answer elsewhere.)
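
    One iTunes-free route is based on libimobiledevice: pair the iPad over USB, then push the files straight into a video player app's Documents folder so the app handles playback and no transcoding is needed. A heavily hedged sketch follows; the package names, the availability of ifuse's --documents option on 12.04, and the VLC bundle id are all assumptions, so check what ideviceinstaller actually reports on your device:

      # Package names as of Ubuntu 12.04 (assumption -- adjust if apt can't find them)
      sudo apt-get install libimobiledevice-utils ifuse ideviceinstaller

      idevicepair pair                   # pair over USB; confirm on the iPad if prompted
      ideviceinstaller -l                # list installed apps and their bundle ids

      mkdir -p ~/ipad-player
      # bundle id below is only an example -- use the one ideviceinstaller printed
      ifuse --documents org.videolan.vlc-ios ~/ipad-player
      cp ~/Videos/movie.avi ~/ipad-player/
      fusermount -u ~/ipad-player        # unmount when done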

    Read the article

  • Ubuntu 13.04 client cannot connect to Raspbian samba share

    - by envoyweb
    I have a client machine running Ubuntu 13.04 trying to connect to a server running Raspbian, with samba and samba-common-bin installed on the server. I can see my share, but when I try to log in I get this error: "Unable to access location: Failed to write windows share. Cannot allocate memory." I have installed ntfs-3g for the USB hard drive, which already auto-mounts on the server, so I never had to create a directory or edit fstab.

    testparm on the server reports the following:

      [global] workgroup = ENVOYWEB server string = %h server map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d idmap config * : backend = tdb [homes] comment = Home Directories valid users = %S create mask = 0700 directory mask = 0700 browseable = No [printers] comment = All Printers path = /var/spool/samba create mask = 0700 printable = Yes print ok = Yes browseable = No [print$] comment = Printer Drivers path = /var/lib/samba/printers [BigDude] comment = Sharing BigDude's Files path = /media/BigDude/ valid users = @users read only = No create mask = 0755

    testparm on the client, which is running Ubuntu, is as follows:

      [global] workgroup = ENVOYWEB server string = %h server (Samba, Ubuntu) map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d idmap config * : backend = tdb [printers] comment = All Printers path = /var/spool/samba create mask = 0700 printable = Yes print ok = Yes browseable = No [print$] comment = Printer Drivers path = /var/lib/samba/printers

    Read the article

  • Script / app to unRAR files and only delete the archives which were successfully expanded

    - by Jeremy
    I have a cron job which runs a script to unrar all files in a certain directory (/rared, for argument's sake) and place the expanded files in /unrared. I would like to change this script so that it deletes the original RAR archives from /rared only if they were successfully extracted. That does not just mean that unrar reported they were fully extracted, because I have had data corruption during decompression before. Ideally (pie in the sky, just to give you an idea of what I'm shooting for), the unrar program would include this functionality, comparing an expected md5sum value with the actual md5sum value and only deleting the archive if they match. I don't mind scripting this entire process if I have to, but there must be a better way than unraring twice and comparing md5sums.
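
    unrar returns a non-zero exit status when extraction fails, and unrar t re-verifies the archive's internal CRCs without extracting a second time, so the deletion can be keyed off both. A minimal sketch along those lines, using the paths from the question; note it still only catches corruption that the archive's own checksums can detect:

      #!/bin/sh
      # Extract every archive in /rared into /unrared; delete an archive only
      # if both the extraction and the CRC test succeed.
      for rar in /rared/*.rar; do
          [ -e "$rar" ] || continue              # no archives present
          if unrar x -o- "$rar" /unrared/ && unrar t "$rar"; then
              rm -f -- "$rar"                    # both steps reported success
          else
              echo "keeping $rar: extraction or test failed" >&2
          fi
      done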

    Read the article

  • I'm running Ubuntu notebook on a USB drive with 1GB persistence. Where are my XAMPP files?

    - by CDeanMartin
    I just began running Ubuntu Notebook from my 2GB USB stick, with 1GB persistence. After I installed XAMPP, I rebooted to make sure the persistence was working. It worked! My XAMPP installation works, and it comes up when I type localhost into the browser. The sample apps work, including the ones using MySQL. But I can't find the application files! I am used to the vast labyrinth of files in Windows Explorer, and looking through 'Files and Folders' in GNOME, there seem to be only 7 or 8 folders in the entire OS. What is really going on here? I looked through them all, so where is XAMPP? When I code PHP with a WAMP stack, all the .php files go in the 'www' folder. What is the Linux equivalent of the www folder?
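
    For reference: XAMPP for Linux is self-contained and installs under /opt/lampp, which GNOME's file browser only shows if you navigate into 'File System' and then /opt; the htdocs directory there plays the role of the WAMP 'www' folder. A quick check from a terminal (the project name below is a placeholder):

      ls /opt/lampp                  # the whole XAMPP installation lives here
      ls /opt/lampp/htdocs           # Linux XAMPP's equivalent of the WAMP 'www' folder
      sudo /opt/lampp/lampp status   # XAMPP's own control script (start/stop/status)
      # Put your PHP projects under /opt/lampp/htdocs/yourproject and browse to
      #   http://localhost/yourproject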

    Read the article
