Search Results

Search found 22829 results on 914 pages for 'nautilus script'.


  • Why does dd finish instantly when piping to cat?

    - by agsamek
    I start bash on Cygwin and type:

        dd if=/dev/zero | cat /dev/null

    It finishes instantly. When I type:

        dd if=/dev/zero > /dev/null

    it runs as expected and I can issue killall -USR1 dd to see the progress. Why does the former invocation finish instantly? Is it the same on a Linux box? An explanation of why I asked such a stupid (and possibly not-so-stupid) question: I was compressing hdd images and some were compressed incorrectly. I ended up with the following script showing the problem:

        while sleep 1 ; do killall -v -USR1 dd ; done &
        dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c

    wc should print 1000000000 at the end. The problem is that it does not on my machine:

        bash-3.2$ dd if=/dev/zero bs=5000000 count=200 | gzip -c | gzip -cd | wc -c
        13+0 records in
        12+0 records out
        60000000 bytes (60 MB) copied, 0.834 s, 71.9 MB/s
        27+0 records in
        26+0 records out
        130000000 bytes (130 MB) copied, 1.822 s, 71.4 MB/s
        200+0 records in
        200+0 records out
        1000000000 bytes (1.0 GB) copied, 13.231 s, 75.6 MB/s
        1005856128

    Is it a bug, or am I doing something wrong once again?
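
    A minimal sketch of why the first pipeline exits at once (an editor's illustration, not part of the question): cat with a file operand reads the named file rather than stdin, so cat /dev/null exits immediately, and dd is killed by SIGPIPE on its next write to the closed pipe:

        dd if=/dev/zero 2>/dev/null | cat /dev/null
        echo "dd exit status: ${PIPESTATUS[0]}"   # typically 141, i.e. 128 + SIGPIPE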

    Read the article

  • VPN sharing on Mac OS X 10.5 machine

    - by Jens
    I have a rather weird problem. I want to share a VPN connection that has been established by my Mac OS X 10.5 computer with another machine on my network. This is what I did:

        1. In the /etc/hostconfig file on the main computer I added the line IPFORWARDING=-YES-
        2. I assigned a fixed IP address to my computer (192.168.178.30), a fixed one to the
           other machine (192.168.178.60), and set my computer's IP address as the gateway
           on the other machine.
        3. I connected to my VPN using the internal Mac OS X VPN client (PPTP connection).
        4. I ran this script:

        #!/bin/sh
        natd -same_ports -use_sockets -unregistered_only -dynamic -interface ppp0 -clamp_mss
        ipfw -f flush
        ipfw add divert natd ip from any to any via ppp0
        ipfw add pass all from any to any
        sysctl -w net.inet.ip.forwarding=1

        Source: Using (and sharing) a VPN connection on your Mac

    Now everything works smoothly; however, speed is an issue. I get 1.8 Mbit/s on my main machine and only 0.3-0.6 Mbit/s on the other one. My question: what could possibly be wrong? Do I have to tweak MTU settings? Is there any packet inspection going on that takes time...? Any help appreciated!
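
    A hedged first experiment (an editor's suggestion, not from the question): PPTP adds per-packet overhead, so an oversized MTU on the shared path causes fragmentation that hits the NATed client hardest. Lowering the MTU on the VPN interface is a cheap test; 1400 is an illustrative value, not a tuned one:

        sudo ifconfig ppp0 mtu 1400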

    Read the article

  • Exchange 2010 certificate errors

    - by Frederik Nielsen
    I have a problem with my newly set up Exchange environment for our hosted customers. First off, when configuring the Outlook client, it gives a certificate warning although the certificate has been bought and set up. I am using a setup like this:

        autodiscover.CUSTOMERDOMAIN.TLD  CNAME  autodiscover.exchange.COMPANYDOMAIN.TLD

    (COMPANYDOMAIN is our company that hosts the Exchange servers, CUSTOMERDOMAIN is the customer's domain.) Shouldn't that work? I know that Microsoft does something like that for Office 365, but I really don't think they buy a certificate for every customer... so I guess some redirection should be set up somehow - any guidance? Next thing: when we accept that error and move on to actually starting Outlook, it states that the certificate is not valid for the RPC proxy server exchange.COMPANYDOMAIN.TLD - this domain is not right, as it is not included in the certificate. I would instead like this domain to be mail.exchange.COMPANYDOMAIN.TLD. I tried to run this script, setting both internal and external URLs to be the same, with no luck. Any guidance on this one? I am running Exchange 2010 SP2, with CAS, HT and MBX split up on 3 different servers.
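
    One quick check worth running from any client (an editor's illustrative sketch; the hostnames are the ones from the question): see which names the presented certificate actually covers before chasing the Exchange URL settings:

        echo | openssl s_client -connect autodiscover.exchange.COMPANYDOMAIN.TLD:443 \
            -servername autodiscover.CUSTOMERDOMAIN.TLD 2>/dev/null \
          | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"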

    Read the article

  • Track kids' browsing history even when they know how to clear it manually

    - by Darren Newton
    I have a colleague with two teenage boys (yes, cue clichés about 'I have this friend, see...'). He's currently having issues with them browsing pr0n and wants to do a little spying on their browsing (I'm staying clear of the philosophies/ethics of this). The kids are savvy enough to clear their browsing history when they're done. As I'm his go-to for IT, he has asked me if there is a way to keep hold of the browsing history. The family uses Macs, and the kids surf with Safari. I know that browsing history is kept in ~/Library/Safari/History.plist. I figure there should be a way to write either an AppleScript or another script (Python/Ruby/Bash) that can back this file up to a different location (/opt/local/history, etc.). Since the kids know to clear their history when they're done, should the file be periodically backed up with something like a cron job or a tool like Hazel? While that could work, it seems like it would create a ton of little incremental backups. Or is it possible to 'watch' ~/Library/Safari/History.plist and incrementally add changes to a backup file (saving a diff, so to speak) without losing any data? Any ideas/solutions appreciated. UPDATE/EDIT: Got word from the concerned dad that the oldest uses Firefox on a different PC, so the OpenDNS solution (preferably at the router level) is the best answer so far, as it would capture usage for the whole house.
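
    A minimal sketch of the periodic-backup idea (an editor's illustration; the destination directory is the one floated in the question): snapshot the history file only when it has actually changed, which keeps the pile of incremental copies down:

        SRC="$HOME/Library/Safari/History.plist"
        DEST="/opt/local/history"
        mkdir -p "$DEST"
        LATEST=$(ls -t "$DEST" 2>/dev/null | head -n 1)
        # copy only if there is no snapshot yet or the newest one differs
        if [ -z "$LATEST" ] || ! cmp -s "$SRC" "$DEST/$LATEST"; then
            cp "$SRC" "$DEST/history-$(date +%Y%m%d%H%M%S).plist"
        fi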

    Read the article

  • Server slowdown

    - by Clinton Bosch
    I have a GWT application running on Tomcat on a cloud Linux (Ubuntu) server. Recently I released a new version of the application, and suddenly my server response times have gone from 500 ms average to 15 s average. I have run every monitoring tool I know:

        iostat says my disks are 0.03% utilised
        mysqltuner.pl says I am OK (output below)
        top says my processor is 99% idle, with load average: 0.20, 0.31, 0.33
        memory usage is 50% (-/+ buffers/cache: 3997 3974)

    mysqltuner output:

        [OK] Logged in using credentials from debian maintenance account.
        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.63-0ubuntu0.10.04.1-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 370M (Tables: 52)
        [--] Data in InnoDB tables: 697M (Tables: 1749)
        [!!] Total fragmented tables: 1754
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 19h 25m 41s (1M q [28.122 qps], 1K conn, TX: 2B, RX: 1B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 1.0G global + 2.7M per thread (500 max threads)
        [OK] Maximum possible memory usage: 2.4G (30% of installed RAM)
        [OK] Slow queries: 0% (1/1M)
        [OK] Highest usage of available connections: 34% (173/500)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/279.0K
        [OK] Key buffer hit rate: 99.9% (50K cached / 40 reads)
        [OK] Query cache efficiency: 61.4% (844K cached / 1M selects)
        [!!] Query cache prunes per day: 553779
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 34K sorts)
        [OK] Temporary tables created on disk: 4% (4K on disk / 102K total)
        [OK] Thread cache hit rate: 84% (185 created / 1K connections)
        [!!] Table cache hit rate: 0% (256 open / 27K opened)
        [OK] Open file limit used: 0% (20/2K)
        [OK] Table locks acquired immediately: 100% (692K immediate / 692K locks)
        [OK] InnoDB data size / buffer pool: 697.2M/1.0G
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            table_cache (> 256)
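
    A hedged sketch of trying the two variable changes mysqltuner flags, without a restart (an editor's illustration; the values are starting points, not tuned recommendations, and SET GLOBAL requires the SUPER privilege):

        mysql -u root -p -e "SET GLOBAL query_cache_size = 64*1024*1024; \
                             SET GLOBAL table_open_cache = 1024;"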

    Read the article

  • DHCPD (Slackware) - Disabling auto-generation of gateway as DNS server

    - by Dogbert
    Good day. I am using a Linux workstation running Slackware 13.37. One "problem" I have had to deal with ever since 11.0 is the following:

        - DNS servers are queried and determined at startup by the DHCP daemon (DHCPD),
          which is invoked at startup by a script located at /etc/rc.d/rc.dhcpd
        - My ISP's DNS servers are resolved correctly and stored in a list in /etc/resolv.conf

    However, the one annoying problem is that my gateway IP (i.e. 192.168.1.1) is always automatically put at the top of the list in resolv.conf, meaning I always have to wait for a timeout before a valid DNS server is used to resolve an address (i.e. the query to 192.168.1.1 times out because it is not actually a DNS server, and then the resolver falls back to the next server in the list). I could lower my DNS resolution timeout so the gateway query times out more quickly, but that's not what I want, as I don't want to degrade the abilities of legitimate DNS servers. What I would like to do is change how DHCPD operates so that it does NOT put my gateway IP address at the beginning of this list. I've searched via "man dhcpd" etc. and haven't found the exact answer yet. Any help on this issue is appreciated. Thank you all in advance for your time and assistance.
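
    A hedged sketch of one way out (this assumes Slackware's stock dhcpcd client and that the installed version supports the nohook option; the nameserver addresses are illustrative OpenDNS ones): tell the client to stop rewriting resolv.conf, then maintain the file by hand:

        echo "nohook resolv.conf" >> /etc/dhcpcd.conf
        printf "nameserver 208.67.222.222\nnameserver 208.67.220.220\n" > /etc/resolv.conf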

    Read the article

  • How to go about rotating logs which are arbitrary named and placed in deeply nested directories?

    - by Roman Grazhdan
    I have a couple of hosts which are basically a playground for developers. On these hosts, each of them has a directory under /tmp where he is free to do all he wants: store files, write logs, etc. Of course, the logs have to be rotated, or else the disk will be 100% full within a week. The files can be plenty, but I've dealt with that with paths like /tmp/[a-e]*/* and so on, and lived happily for a while; but as they try new cool stuff on the machine, the logrotate rules grow ugly and unmanageable, and it's getting more difficult to understand which files a glob hits. Also, logrotate will segfault if asked to rotate a socket. I don't feel like trying to enforce naming policies in that environment; I think it's going to take quite a lot of time, get people annoyed, and still fail at some point. And I still need to manage the logs, not just rm the dirs at night. So is it a good idea, in circumstances like these, to write a script which would handle these temporary files? I prefer sticking with standard utilities whenever possible, but here I think logrotate is getting less and less manageable. And perhaps someone has heard of logrotate alternatives which would work well in such an environment? I don't need emailed logs or other advanced features, so theoretically some well-commented find | xargs would do. P.S. I do have a log aggregator, but this stuff is not going to touch my little cute logstash machine.
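
    A sketch of the well-commented find | xargs approach floated above (the /tmp/[a-e]*/ glob comes from the question; the age thresholds are illustrative):

        # compress anything older than a day that isn't already compressed;
        # -type f skips the sockets that crash logrotate
        find /tmp/[a-e]*/ -type f ! -name '*.gz' -mtime +1 -print0 \
            | xargs -0 -r gzip
        # drop compressed files once they are two weeks old
        find /tmp/[a-e]*/ -type f -name '*.gz' -mtime +14 -delete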

    Read the article

  • Mounted NFS directory not writable by Apache / PHP

    - by phpfour
    Need some help here with NFS. Here's what I have (all servers running CentOS 5.6 with SELinux):

        172.17.20.1 - Primary server with static IP. Varnish redirects requests to the web servers.
        172.17.20.2 - Web server 1
        172.17.20.3 - Web server 2

    The application residing on the web servers is running Drupal, and I need both of them to share the same files directory. I have created a folder on 172.17.20.1 called /var/nfs, owned by the root user. Here is my /etc/exports content:

        /var/nfs 172.17.20.2(rw,sync,no_root_squash) 172.17.20.3(rw,sync,no_root_squash)

    On both web servers (172.17.20.2/3), I have it mounted like below:

        [root@web2 ~]# mount
        ...
        172.17.20.1:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,sync,hard,intr,addr=172.17.20.1)

    On all the servers, I've added the user apache to the root group to get the desired write access:

        [root@main ~]# cat /etc/group
        root:x:0:root,apache
        ....
        apache:x:48:

        [root@web1 ~]# cat /etc/group
        root:x:0:root,apache
        ....
        apache:x:48:

    Despite all this, when I try to write files into the /mnt/nfs/var/nfs folder from Drupal/PHP, it cannot write to it. I even tried with a simple PHP upload script, but it doesn't work, so the problem is not with Drupal. Any help is much appreciated. I've spent hours and hours on it without any success :( Thanks in advance.
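
    Two hedged checks worth trying (an editor's suggestions, not from the question): group membership alone doesn't help unless the directory's group actually has the write bit, and with SELinux enforcing, httpd also needs the NFS boolean switched on:

        chmod 775 /var/nfs             # on the NFS server: give the root group write access
        setsebool -P httpd_use_nfs 1   # on each web server: let httpd write to NFS mounts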

    Read the article

  • 403 Forbidden for cgi-bin/ and cannot password-protect site

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 Forbidden error for cgi-bin/. I have created a new /var/www2/; I can access it fine, and PHP runs fine. The second problem is that I cannot password-protect it. I first tried htpasswd; it asks for a login, but every time I log in it keeps asking again. It's getting frustrating; I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available. httpd.conf is empty, but I have apache2.conf. Code:

        NameVirtualHost 12.12.12.12.
        <VirtualHost 12.12.12.12>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /var/www2/cgi-bin/
            <Directory "/var/www2/cgi-bin/">
                AllowOverride Options
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                AddHandler cgi-script cgi pl
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
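
    A hedged aside on the password loop (an editor's observation; paths are illustrative): with AllowOverride None in the directory blocks above, Apache ignores .htaccess files entirely, so the auth directives may need to live in the vhost itself, and the credentials file should sit outside the document root:

        htpasswd -c /etc/apache2/.htpasswd someuser   # -c creates the file; drop it for later users
        chmod 644 /etc/apache2/.htpasswd              # make sure Apache can read it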

    Read the article

  • Pushing Windows Store apps silently

    - by seagull
    As part of a requirement I have been issued, I seek the ability to push apps from the Windows Store (.appx) to remote users. The framework surrounding the data transmission is sorted; what I need to know is whether there is a script that can be sent out along with a URI or the package proper to facilitate installation of the Windows Store app on the user side. I am well aware that numerous guides exist from Microsoft on pushing out Line-of-Business (LOB), a.k.a. enterprise, applications - that is, apps that have been developed in-house and are not for consumption through the Windows Store. This is inappropriate for my requirement, however; the customer wants their clients to receive apps that currently appear on the Windows Store, and for them to be installed in a silent manner. I've seen this guide: http://blogs.technet.com/b/keithmayer/archive/2013/02/25/step-by-step-deploying-windows-8-apps-with-system-center-2012-service-pack-1.aspx, which details doing exactly this, but it is only applicable to machines administered by a system running Windows Server 2012 R2 with the System Center 2012 bundle installed. The systems I target are considerably more decentralised than this, making this guide inappropriate. I have a hunch that Microsoft have deliberately designed the Windows Store to be this way, but I figured I ought to ask around before I resign myself to the requirement. Much obliged.

    Read the article

  • apache virtualhost: Auto subdomain with exception

    - by Ineentho
    I've been searching for a way to automatically redirect domains to a specific folder, and found a good answer here on Server Fault: Apache2 VirtualHost auto subdomain (the accepted answer). So far everything works well; however, now I need to add an exception. The result I want is this:

        http://localhost/        --> E:/websites/
        http://specialDomain2/   --> E:/websites/
        http://normal1.com/      --> E:/websites/normal1.com/
        http://normalDomain.com/ --> E:/websites/normal2.com/

    I get the expected result for the last two domains, but localhost doesn't work. I copied the script from the question above and tried to add something like this:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteMap lowercase int:tolower
            # if already rewritten and we have the right path, stop right here
            RewriteRule ^(E:/websites/[^/]+/.*)$ $1 [L]
            RewriteRule ^localhost/(.*)$ E:/websites/$1 [L]   # <-- added this row
            RewriteRule ^(.+) ${lowercase:%{SERVER_NAME}}$1 [C]
            RewriteRule ^(www\.)?([^/]+)/(.*)$ E:/websites/$2/$3 [L,E=VHOST_ROOT:E:/websites/$2/]
        </VirtualHost>

    I thought this would make sense, since I would translate it to:

        if URL = localhost/*
            do nothing (because of the [L] flag), and use the default document root specified earlier
        else
            continue

    What's wrong with this? Thanks for any help!

    Read the article

  • Persistent PuTTY sessions for multiple windows

    - by Tgr
    I'm working in various Linux environments through PuTTY connections which break from time to time. I'm looking for a solution to make the PuTTY windows persist (e.g. if I was editing a file, then after reconnecting I should be in the same editor with the same file open at the same place), with the following requirements:

        - It shouldn't require any manual setup at the beginning of the session or after
          reconnection (I don't want to type in screen or anything like that).
        - I have several windows open to the same machine with the same user, which tend
          to disconnect at the same time.
        - The number/role of windows is not constant (it's not like I have an mc window,
          a mysql window and a "script runner" window; sometimes I use one window for
          search or for SVN commands, other times I need several at the same time).
        - Sometimes I need to change the properties of the windows for a task (a large
          window for grepping/editing, small windows because I need to see two of them
          at the same time, a red background because I am modifying the live database in
          MySQL, etc.), so I need to get the same console back in the same window after
          a reconnect.

    Is there a way to achieve this? I suppose I should use screen or something equivalent, but how does it know which window I am reconnecting from? Is there some way to pass a unique window identifier to the shell from PuTTY?
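
    A hedged sketch of the usual screen-based answer (the session name is illustrative): give each PuTTY saved session its own remote command, so each window always reattaches to "its" screen after a drop:

        # put this in PuTTY under Connection -> SSH -> "Remote command",
        # with a distinct name per saved session:
        exec screen -d -R putty-win-1

    -d -R detaches the named session elsewhere if needed and creates it on first use, which covers both the first connection and every reconnect without any typing in the session itself.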

    Read the article

  • How do you avoid that server documentation gets out of sync with the actual setup?

    - by Frerich Raabe
    I'm a hobbyist maintaining a small FreeBSD server serving mail via IMAP - it's an exercise in server administration. The setup does have reasonably good documentation (in AsciiDoc format), which recently allowed another person to recreate the entire setup from scratch in less than 30 minutes. However, I noticed that after the initial setup, it easily happens that small changes to the system (say: inetd gets disabled, my IMAP server listens on an additional port for ManageSieve connections, a new router is added to the exim configuration) don't end up in the documentation immediately (if at all). My idea was to avoid this problem by (partially?) generating the documentation from the configuration files and the comments therein. One way to implement this may be to put /etc and /usr/local/etc into some source code management system (say, git) and then run a script which regenerates the documentation on every commit. However, I'm not sure whether that would be overkill and/or too difficult to get right (after all, I don't want complete copies of the source files in my documentation, but rather just the diffs). How do other people keep server documentation from getting outdated - is there a good way to keep the two in sync automatically, or do you just have the discipline to update the documentation at the same time you modify the system?
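
    A hedged sketch of the commit-hook idea from the question (the doc-generation script name is hypothetical): keep /etc under git and regenerate the documentation after every commit. Tools like etckeeper package the /etc-in-git half of this:

        cd /etc && git init && git add -A && git commit -m "baseline"
        cat > /etc/.git/hooks/post-commit <<'EOF'
        #!/bin/sh
        exec /usr/local/bin/regen-docs.sh   # hypothetical: rebuilds the AsciiDoc pages
        EOF
        chmod +x /etc/.git/hooks/post-commit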

    Read the article

  • How to set up neocomplcache?

    - by eddy
    I just started using Vim and saw a cool plugin: neocomplcache (http://www.vim.org/scripts/script.php?script_id=2620). My problem is that I can't get it to work properly. After installing, I took the example config from the neocomplcache help files and added the lines to my .vimrc. At first I wanted to create a simple LaTeX file (there are snippets for tex). After typing "begi" a menu appears, and I can choose between the snippets with TAB or <C-n>. But how do I get them to expand? <C-k> does not work, and I don't understand why.

    .vimrc:

        ....
        " Plugin key-mappings.
        imap <C-k> <Plug>(neocomplcache_snippets_expand)
        smap <C-k> <Plug>(neocomplcache_snippets_expand)
        inoremap <expr><C-g> neocomplcache#undo_completion()
        inoremap <expr><C-l> neocomplcache#complete_common_string()
        " Recommended key-mappings.
        " <CR>: close popup and save indent.
        inoremap <expr><CR> neocomplcache#smart_close_popup() ."\<CR>"
        " <TAB>: completion.
        inoremap <expr><TAB> pumvisible() ? "\<C-n>" : "\<TAB>"
        " <C-h>, <BS>: close popup and delete backword char.
        inoremap <expr><C-h> neocomplcache#smart_close_popup()."\<C-h>"
        inoremap <expr><BS> neocomplcache#smart_close_popup()."\<C-h>"
        inoremap <expr><C-y> neocomplcache#close_popup()
        inoremap <expr><C-e> neocomplcache#cancel_popup()
        ...

    Read the article

  • Windows Small Business Server 2003: SQL timeout in Server Performance Report

    - by tetranz
    I'm the volunteer IT admin at a small school. We have SBS 2003 with about ten desktops. The server performance report is emailed to me daily; it is set up with a wizard in the Monitoring and Performance part of the Server Management console. It often fails with a "The page cannot be displayed" error. The event log shows:

        Event Type: Error
        Event Source: ServerStatusReports
        Event Category: None
        Event ID: 1
        Date: 1/16/2011
        Time: 6:03:14 AM
        User: N/A
        Computer: ALPHA
        Description: Server Status Report:
        URL: http://localhost/monitoring/perf.aspx?reportMode=1&allHours=1
        Error Message: Timeout expired. The timeout period elapsed prior to completion
        of the operation or the server is not responding.
        Stack Trace:
            at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, TdsParserState state)
            at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, TdsParserState state)
            at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
            at System.Data.SqlClient.TdsParser.ReadNetlib(Int32 bytesExpected)
            [plus lots more stack trace]

    This has been happening for years :) I've never really solved it. It seems to be related to WSUS. When it happens, I run the Update Services "Server Cleanup Wizard". That takes a long time to run; if I haven't run it for a while, it can take 10 hours. I also run the WsusDBMaintenance.sql script (from TechNet, I think), which reindexes the database etc. Those two things seem to get it working again for a while. Recently the "while" has become a couple of weeks. My searching online has revealed lots of people having this problem but no real solution. Does anyone have any good ideas about this? I have to wonder if something in the WSUS SQL schema is not indexed properly; the time that the Server Cleanup Wizard takes seems ridiculous. Thanks

    Read the article

  • SamFS performance problem on file creation

    - by Gregor Longariva
    I have two samfs filesystems (samfs1 and samfs2), both on the same 6130, both with the same config/watermarks/timeouts etc. Creating a file on samfs2 works as it should; on samfs1 it does not. A simple script shows that every now and then file creation takes between 11 and 28 seconds:

        stan 12:32 [scratch]# while ( 1 )
        while? echo -
        while? time echo test > file
        while? time mv file file2
        while? echo +
        while? sleep 1
        while? end
        0.00u 0.00s 0:00.01 0.0%
        0.00u 0.00s 0:00.00 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.03 0.0%
        +
        0.00u 0.00s 0:23.71 0.0%
        0.00u 0.00s 0:00.14 0.0%
        +
        0.00u 0.00s 0:00.18 0.0%
        0.00u 0.00s 0:00.13 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.06 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.04 0.0%
        +
        0.00u 0.00s 0:00.04 0.0%
        0.00u 0.00s 0:00.05 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.01 0.0%
        +
        0.00u 0.00s 0:26.05 0.0%
        0.00u 0.00s 0:00.50 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.06 0.0%
        +
        0.00u 0.00s 0:00.00 0.0%
        0.00u 0.00s 0:00.12 0.0%
        +

    Any idea where the problem could be?

    Read the article

  • WAMP Installation - Multiple PHP Version Issues

    - by Pete171
    I have installed WAMP because I am attempting to modify an application which uses Zend Optimizer (I cannot use Z.O. with PHP 5.3+, which is why I decided to install WAMP). I downloaded the latest version, WampServer 2.1, which comes bundled with PHP 5.3.5. I then downloaded a PHP version that would be compatible with Z.O. (5.2.9) and a compatible Apache version (2.0.63). My problem: PHP scripts run fine, but anything with MySQL does not work. Running the testmysql.php script returns the fatal error:

        Fatal error: Call to undefined function mysql_connect() in C:\wamp\www\testmysql.php on line 2

    I have looked in the php.ini files inside both PHP versions, and I'm fairly sure the relevant information is there. At least, there are parts inside that mention MySQL! Perhaps somebody could clarify exactly what information should be present? Also, when visiting a page that called phpinfo(), I noticed that the 'Loaded Configuration File' was pointing to C:\wamp\bin\php\php5.3.5\php.ini, even though I have enabled the older PHP version. I've stopped and started Apache too, and that hasn't made a difference. Is anybody able to offer any assistance? Anything at all would be great; I'm not very good at messing around with Apache/

    Read the article

  • Running a batch file through a service

    - by wallz
    I'm trying to schedule a batch file to run through a third-party application; however, the output file doesn't get created in the directory. If I run the .BAT file from the command line, it works and the file gets created. Using the Windows scheduler also succeeds. Basically, the third-party software will schedule the .BAT file, and it shows success within its user interface. The difference between running from the command prompt and from the software is that the software uses its Windows service to launch the batch. The third-party software shows success since it was able to successfully call the .BAT file; however, it has no control over the other EXEs being called within the script. I'm able to run a simple .BAT file in the third-party software, for example a copy command. The .BAT I'm having problems with calls a compiled EXE, which launches Excel to create a file in a location. The .BAT file calls something.exe, which then calls Excel.exe:

        C:\something.exe -o D:\filename.xlsm C:\filename.xlsm refresh_pivot

    Do you think it's a permissions issue? I used Process Monitor to check for any Access Denied errors, but everything seems to be working according to the trace. It worked on a non-64-bit OS; I'm currently using Win2008 64-bit.

    Read the article

  • Ubuntu 13.10 - How to disable LVM and cryptsetup? cryptsetup: evms_activate is not available

    - by NeverEndingQueue
    I am trying to remove whole-drive encryption from my Ubuntu installation. I've run Ubuntu from a live CD, mounted the crypt partition, and copied it to another partition, /dev/sda3:

        sudo cryptsetup luksOpen /dev/sda5 crypt1
        sudo dd if=/dev/ubuntu-vg/root of=/dev/sda3 bs=1M

    After that I've run boot-repair (https://help.ubuntu.com/community/Boot-Repair) and added an entry to /etc/fstab:

        UUID=<uuid> / ext4 errors=remount-ro 0 1

    Of course I've replaced <uuid> with the blkid result for my /dev/sda3. I've also deleted the overlayfs and tmpfs lines from /etc/fstab (I compared it to the /etc/fstab of a non-encrypted Ubuntu installation and could not find overlayfs or tmpfs there). I've chrooted from the live CD into my system and rebuilt the initramfs, following http://blog.leenix.co.uk/2012/07/evmsactivate-is-not-available-on-boot.html, and I've also removed cryptsetup using apt-get remove. Basically, I can easily mount my system partition from the live CD (without setting up the encryption and LVM stuff), but I can not boot from it. Instead I see:

        cryptsetup: evms_activate is not available

    When I chose recovery mode I saw this:

        Begin: Mounting root file system ...
        Begin: Running /script/local-top ...
        Reading all physical volumes. This may take a while ...
        No volume groups found
        cryptsetup: evms_activate is not available
        Begin: Waiting for encrytpted source device ...

    My /etc/crypttab is empty. I am pretty sure the system still tries to find an encrypted partition, searches for LVMs, etc. Do you have ideas what the problem could be or how I can fix it? Thanks
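
    A hedged checklist for the chroot step (device names come from the question; verify each before running): the initramfs keeps asking for the encrypted device as long as anything still references it, so clear the leftovers and rebuild both the initramfs and the GRUB config:

        # inside the chroot on /dev/sda3:
        cat /etc/crypttab                 # should be empty, as noted above
        grep -r sda5 /etc /boot/grub      # hunt for stale references to the LUKS partition
        update-initramfs -u -k all        # rebuild without the cryptroot hooks
        update-grub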

    Read the article

  • iTunes copy just metadata (song and album ratings, playlists) from iPod

    - by Jared Updike
    I have an iPod touch that I synced with my Windows computer (iTunes 9.0, I think) until my hard drive failed and I lost my entire library. I rebuilt the library (songs) from a year-old backup (and various other sources for songs), but my playlists and ratings are of course a year old. My iPod itself has most of the playlists and ratings I care about most (favorite songs and albums, rated 4 and 5, for example). I have a catch-22 situation: I feel nervous that I haven't backed up my iPod in around 4 months (since my drive failed), so I'd like to back it up as soon as possible... but if I back it up, I have to clear all the songs and playlists and copy them back, which I can't really do, since I need to rebuild my playlists on my computer first (using the data only available on my iPod!). The question: is there a better way to READ the information off my iPod than doing it manually, song by song, album by album and playlist by playlist (XML, text dump, database, spreadsheet, anything)? In other words, I mostly want the information (metadata like ratings and playlists, not songs) copied off the iPod so I can more quickly rebuild my iTunes library ratings and playlists (manually), and then finally wipe the music and back up my apps, etc. Then I'd like to copy the music back immediately. The part I'd like to avoid is manually navigating everything on my iPod to read through all the playlists and ratings (50 GB, 6,000+ songs) as I re-enter all of that data by hand. I've done a few dozen albums and it's pretty time-consuming having to tap around on the iPod. Reading from a spreadsheet (for example, or XML, which I could write a script to get into spreadsheet form) would probably help tremendously, plus then I'd have a backup of that information somewhere besides just my iPod.

    Read the article

  • What is a suitable simple, open web server for Windows?

    - by alficles
    I'm looking for a dead simple web server for Windows. Load will not be high, as it will primarily be serving binaries for a WPKG update service. It needs to serve the entire contents of a single folder over HTTP on a configurable (high) port. No CGI or other scripting is required, but it might be nice for future features. I started with Mongoose, since it doesn't even have an installation requirement (a very nice perk), but it fails to start when run as a service (technically, it acts as its own installer). I've investigated lighttpd as well, but it appears to be minimally (at best) tested on Windows. And naturally, I'm looking for something free: free as in beer is good, but free as in speech is better, as always. Edit: I didn't mention this initially, but non-tech people will be doing the install. They'll have whatever script I write for the install, but the goal is a simple system that is easy to troubleshoot. (I almost worded this question "What is the best...", but Server Fault rightly observed that that is a subjective question. And it's really not an optimization problem; any suitable solution will work. I just can't seem to find one for Windows.)

    Read the article

  • Why not block ICMP?

    - by Agvorth
    I think I almost have my iptables setup complete on my CentOS 5.3 system. Here is my script:

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F  # Flush all rules
        iptables -X  # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP

        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT

        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT

        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # Block all other traffic
        iptables -A INPUT -j DROP

    For context, this machine is a Virtual Private Server web app host. In a previous question, Lee B said that I should "lock down ICMP a bit more." Why not just block it altogether? What would happen if I did that (what bad thing would happen)? If I need to not block ICMP, how could I go about locking it down more?
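
    A hedged sketch of the middle ground (an editor's illustration): keep the ICMP types that path-MTU discovery and normal diagnostics depend on, rate-limit pings, and drop the rest, instead of the blanket accept above:

        iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
        iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 5/second -j ACCEPT
        iptables -A INPUT -p icmp -j DROP

    Dropping destination-unreachable in particular breaks PMTUD, which is the classic "bad thing" that happens to hosts that blackhole ICMP wholesale.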

    Read the article

  • shared hosting with malware, .htaccess file gets modified every 2 hours or so

    - by apache
    I spent all day today chasing malware on the shared hosting for one of my clients. The issue is as follows: every 2 hours or so the .htaccess file (and all other .htaccess files) gets modified. At the top of the file these lines are added:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^.*(google|ask|yahoo|youtube|wikipedia|excite|altavista|msn|aol|goto|infoseek|lycos|search|bing|dogpile|facebook|twitter|live|myspace|linkedin|flickr)\.(.*)
        RewriteRule ^(.*)$ http://pasla-ghwoo.ru/rqpgfap?8 [R=301,L]
        </IfModule>

    and at the bottom:

        ErrorDocument 400 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 401 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 403 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 404 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 500 http://pasla-ghwoo.ru/rqpgfap?8

    The main problem is that I'm not root on the server and cannot sudo, as this is shared hosting with hundreds of websites. Typical good commands like dmesg, lsof, dtrace, chattr and many others are not available to me since I'm not root. I can't find out who is modifying the .htaccess files; how do I get that info? My guess is that some PHP script is changing them, called from outside via command and control. This seems related to: http://blog.unmaskparasites.com/2009/09/11/dynamic-dns-and-botnet-of-zombie-web-servers/ How do I find out who is modifying .htaccess files without being root?
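
    A hedged userland watcher (no root required; the paths are illustrative): snapshot every .htaccess change and note which PHP files were touched around the same moment, which often points at the injected backdoor:

        STAMP=$HOME/.htaccess.stamp; touch "$STAMP"
        while sleep 60; do
            # record any .htaccess modified since the last pass, with checksums
            find "$HOME/public_html" -name .htaccess -newer "$STAMP" \
                -exec md5sum {} \; >> "$HOME/htaccess-watch.log"
            # note PHP files changed in the last two minutes as suspects
            find "$HOME/public_html" -name '*.php' -mmin -2 >> "$HOME/htaccess-watch.log"
            touch "$STAMP"
        done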

    Read the article

  • Measuring performance indicators on a cluster

    - by Aditya Singh
    My architecture is based on Amazon. An ELB load balancer balances POST requests among m1.large instances. Every instance has an nginx server on port 80 which distributes the requests to 4 python-tornado servers on the backend, which handle the request. These tornado servers take about 5-10 ms to respond to one request, but this is the internal compute time of each request. I want to put this to the test: I want to measure the response time from the ELB to the upstream and back, see how it varies when the QPS throughput is increased, and plot a graph of time vs. QPS vs. latency and other factors like CPU and memory. Is there software to do that, or should I log everything somewhere with latency checks and then analyze the whole log to get the stuff out? I would also need to write a self-monitor which keeps checking the whole response time. Is it possible to do that with a script from within the server? If so, will it be accurate?
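
    A hedged sketch of the log-and-analyze route (the URL, POST body file, and step sizes are illustrative): sweep the concurrency with ab against the ELB endpoint and keep one timing report per level; the percentile table in each report gives the latency-vs-QPS curve:

        for c in 10 50 100 200 400; do
            ab -n 10000 -c "$c" -p post.json -T application/json \
               http://my-elb.example.com/endpoint > "ab-c$c.txt"
        done
        grep "Requests per second" ab-c*.txt   # achieved QPS at each concurrency level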

    Read the article

  • Remove an apache alias subdirectory

    - by Hippyjim
    I'm using Apache 2 on Ubuntu 12.04. I added an alias for a subdirectory, pointing to gitweb. I realised I should probably make it accessible only over https, so I removed the alias and restarted Apache. I can still navigate to http://xyz/gitweb, even with no alias in any of my config files. How do I remove it? EDIT: The config file looked like this before:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/administrator/webroot
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/administrator/webroot/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            Alias /gitweb/ /usr/share/gitweb/
            <Directory /usr/share/gitweb/>
                Options ExecCGI +FollowSymLinks +SymLinksIfOwnerMatch
                AllowOverride All
                order allow,deny
                Allow from all
                AddHandler cgi-script cgi
                DirectoryIndex gitweb.cgi
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    And this after:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/administrator/webroot
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/administrator/webroot/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>
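
    Two hedged places to look (an editor's suggestions; on Ubuntu the gitweb package commonly drops its own Apache snippet outside the vhost files): search the whole Apache config tree for leftover gitweb directives, and remember that browsers cache 301 redirects, so test with curl rather than a browser:

        grep -ri gitweb /etc/apache2/   # stray Alias/ScriptAlias in conf.d or elsewhere
        apache2ctl -S                   # confirm which vhost is actually answering
        curl -I http://xyz/gitweb/      # bypass the browser cache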

    Read the article
