Search Results

Search found 22298 results on 892 pages for 'default'.

Page 688/892 | < Previous Page | 684 685 686 687 688 689 690 691 692 693 694 695  | Next Page >

  • How to circle out something in a picture?

    - by T...
    What is the easiest way to circle out something in a picture, like this example? This is how it can be done in GIMP; here are the steps necessary to draw an empty ellipse without clearing the contents of the image below it:

    1. Layer > New Layer.
    2. Make the layer the same size as the image and set the layer fill type to transparency (this should already be selected by default).
    3. In the toolbox, select the ellipse select tool and make an ellipse.
    4. Use the bucket fill tool to paint the ellipse with your desired color.
    5. Right-click on it and go to Select > Shrink...
    6. Type in how many pixels you want the border to be and click OK.
    7. Go to the menu and click Edit > Clear.

    I feel this is very indirect, in the sense that you first fill the region enclosed by the ellipse and then shrink the region down to its boundary. I wonder if there is a quicker and more direct way to circle out something, such as by directly drawing the boundary? My OS is Ubuntu. What I am asking may be done outside of GIMP, but it must be by some software under Ubuntu. Thanks!
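    One more direct route outside GIMP -- a sketch only, assuming ImageMagick is installed and treating the centre, radii, colour and file names below as placeholders to adjust for your own picture -- is to draw the ellipse outline in a single command:

      $ convert input.png -fill none -stroke red -strokewidth 5 \
            -draw "ellipse 320,240 100,60 0,360" output.png

    The -draw "ellipse" primitive takes centre-x,centre-y radius-x,radius-y start-angle,end-angle, so 0,360 draws the full outline directly, without the fill-then-shrink detour.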

    Read the article

  • Can't ssh from CentOS 6.5 to SUSE LINUX 10.1

    - by Pavel Tankov
    We have a quite old installation of SUSE LINUX 10.1 (i586) in the office. The problem, in short: I can successfully ssh to it from machines in the same LAN (192.168.1.0) but not from others (that are in 10.23.0.0). The SuSE box has SSH server openssh-4.2p1-18.12. I have ruled out the firewall and the hosts.allow and hosts.deny files. When my ssh login attempt fails, here is what the logs say.

    On the client:

      $ ssh -vvv 192.168.1.5
      OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to 192.168.1.5 [192.168.1.5] port 22.
      debug1: Connection established.
      debug1: identity file /home/nbuild/.ssh/identity type -1
      debug1: identity file /home/nbuild/.ssh/identity-cert type -1
      debug1: identity file /home/nbuild/.ssh/id_rsa type -1
      debug1: identity file /home/nbuild/.ssh/id_rsa-cert type -1
      debug1: identity file /home/nbuild/.ssh/id_dsa type -1
      debug1: identity file /home/nbuild/.ssh/id_dsa-cert type -1

    On the server:

      Aug 21 16:34:25 serverhost sshd[20736]: debug3: fd 4 is not O_NONBLOCK
      Aug 21 16:34:25 serverhost sshd[20736]: debug1: Forked child 20739.
      Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: entering fd = 7 config len 403
      Aug 21 16:34:25 serverhost sshd[20736]: debug3: ssh_msg_send: type 0
      Aug 21 16:34:25 serverhost sshd[20736]: debug3: send_rexec_state: done
      Aug 21 16:34:25 serverhost sshd[20739]: debug1: rexec start in 4 out 4 newsock 4 pipe 6 sock 7
      Aug 21 16:34:25 serverhost sshd[20739]: debug1: inetd sockets after dupping: 3, 3
      Aug 21 16:34:25 serverhost sshd[20739]: debug3: Normalising mapped IPv4 in IPv6 address
      Aug 21 16:34:25 serverhost sshd[20739]: Connection from 10.23.1.11 port 44340

    The above server log is with the DEBUG3 log level enabled. With the default log level (INFO), the only thing the server logs is this:

      Aug 21 16:38:32 serverhost sshd[20749]: Did not receive identification string from 10.23.1.11

    Any hints? I feel I've tried everything already.
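    One way to narrow this down -- a diagnostic sketch only, assuming the server's interface is eth0 and that tcpdump is available on the old SuSE box -- is to watch a failing session on the server and see whether the SSH version banner ever makes it back to the 10.23.x.x client. A connection that establishes but then stalls before the identification string is exchanged often points at MTU/fragmentation or a middlebox on the routed path between 10.23.0.0 and 192.168.1.0 rather than at sshd itself:

      $ tcpdump -ni eth0 host 10.23.1.11 and port 22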

    Read the article

  • User receives group membership error to terminal server even though he has rights

    - by BlueToast
    (Screenshot: http://www.hlrse.net/Qwerty/TSLoginMembership.png)

    "To log on to this remote computer, you must be granted the Allow log on through Terminal Services right. By default, members of the Remote Desktop Users group have this right. If you are not a member of the Remote Desktop Users group or another group that has this right, or if the Remote Desktop User group does not have this right, you must be granted this right manually."

    Only as of today did a particular user begin receiving this message for a second terminal server they use; before that, they never had any problems authenticating into this server. We have no restrictions on simultaneous and multiple logins. On each terminal server, we have a group and security group like "_Users" locally in the Builtin\Remote Desktop Users group. For this particular user, on this particular terminal server we have locally given him Administrator, Remote Desktop Users, and Users membership; in AD we have given him DOMAIN\Administrator, Builtin\Remote Desktop Users, and DOMAIN\_Users. It still gives us that error message. We gave him membership to another (random) terminal server by simply making him a member of another DOMAIN\_Users group -- he was successfully able to log in to that random terminal server. So, from scratch, we created an AD account 'dummy' (username) with only Domain Users membership. Tried to log in to this particular server -- no success. So I added 'dummy' to the DOMAIN\_Users group, and then it was successfully able to log in. Other users from this user's department are able to log in to this particular server just fine as well. We checked the Security logs on this particular server, and while it is logging everything, the only thing it appears not to log are these failed login attempts from this particular user who receives the error message. We have tried rebooting the server, and the user is still receiving that error message.

    Read the article

  • Curious enigma of a network cable / connection / quality

    - by Foo Bar
    So, the situation is like this: I'm renting an apartment in a large house and I'm sharing internet with the landlord, who lives downstairs. The internet is (at my best guess) optical 20/20 Mbit. I don't know how it's all wired in his flat (haven't been there / seen it). Anyway, into my flat comes a cable which seems to be connected directly to the optical-to-ethernet router (and the password is the default one, so I have access, he he). There was a switch connected to that and to wires that go around the flat, and the wiring is terrible. It even mixes phone and ethernet, and from what I can see some cables are even interconnected!? Anyway, this cable that comes into my flat is very short. I can barely connect my computer to it, but if I do, I seem to get decent speed / performance. Not great, but decent. If, however, I connect a switch to it (I tried 2 different switches and a wifi switch), it's all blinking but I can't even connect to 192.168.1.1 (the router). DHCP fails, and ping loses 80-100% of replies. So I connected this cable directly to the other cable which goes to my work room, with a connector that has two female jacks and no electronics. Now when I connect my computer in my room, again, the performance is decent. When I connect a WRT54GL (with Tomato, DHCP disabled) to it and plug a cable into this WRT and into my computer... the performance is gone. Download seems okay on Speedtest, but upload is 0.2 Mbps and it takes forever to connect. So what kind of cable troll am I dealing with here? Any ideas?

    Read the article

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet. It is a virtual server blade setup. It isn't the network connections slowing things down because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on a NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything. So it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options: rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52 How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
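    One avenue worth trying -- a sketch only, and an assumption rather than a confirmed fix for an mmap()-heavy workload -- is to put FS-Cache behind the NFS mount so that read pages get cached on local disk, and to lengthen the attribute cache timeout so the client re-validates less often. On Ubuntu 10.04 that roughly means installing cachefilesd and adding the fsc option to the mount (or to the autofs map entry):

      $ sudo apt-get install cachefilesd
      # enable the daemon by setting RUN=yes in /etc/default/cachefilesd, then
      # mount the NFS home with the fsc flag (illustrative fstab-style line):
      server:/home  /home  nfs  rw,hard,intr,fsc,actimeo=600  0  0

    Whether the page cache then survives between runs still depends on memory pressure on the client, so this is a direction to test rather than a guaranteed cure.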

    Read the article

  • OpenLDAP 2.4.23 - Debian 6.0 - Import schema - Insufficient access (50)

    - by Yosifov
    Good day to everybody. I'm trying to add a new schema inside OpenLDAP, but I'm getting an error: ldap_add: Insufficient access (50)

      root@ldap:/# ldapadd -c -x -D cn=admin,dc=domain,dc=com -W -f /tmp/test.d/cn\=config/cn\=schema/cn\=\{5\}microsoft.ldif
      root@ldap:/# cat /tmp/test.d/cn\=config/cn\=schema/cn\=\{5\}microsoft.ldif
      dn: cn=microsoft,cn=schema,cn=config
      objectClass: olcSchemaConfig
      cn: microsoft
      olcAttributeTypes: {0}( 1.2.840.113556.1.4.302 NAME 'sAMAccountType' DESC 'Fssssully qualified name of distinguished Java class or interface' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
      olcAttributeTypes: {1}( 1.2.840.113556.1.4.146 NAME 'objectSid' DESC 'Fssssully qualified name of distinguished Java class or interfaced' SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 SINGLE-VALUE )
      olcAttributeTypes: {2}( 1.2.840.113556.1.4.221 NAME 'sAMAccountName' DESC 'Fdssssully qualified name of distinguished Java class or interfaced' SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
      olcAttributeTypes: {3}( 1.2.840.113556.1.4.1412 NAME 'primaryGroupToken' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
      olcAttributeTypes: {4}( 1.2.840.113556.1.2.102 NAME 'memberOf' SYNTAX 1.3.6.1.4.1.1466.115.121.1.12 SINGLE-VALUE )
      olcAttributeTypes: {5}( 1.2.840.113556.1.4.98 NAME 'primaryGroupID' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE )
      olcObjectClasses: {0}( 1.2.840.113556.1.5.6 NAME 'securityPrincipal' DESC 'Csontainer for a Java object' SUP top AUXILIARY MUST ( objectSid $ sAMAccountName ) MAY ( primaryGroupToken $ memberOf $ primaryGroupID ) )

    I also tried to add the schema via phpLDAPadmin, but got the same error. I'm using the admin user which was created by default at the beginning of the slapd installation. How can I add permissions to this user? Best wishes
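    For what it's worth, on a stock Debian slapd the cn=config tree is normally writable only by the config database's own rootdn, not by cn=admin,dc=domain,dc=com. One common approach -- a sketch, not a guaranteed fix for this particular ACL setup -- is to load schema LDIFs as the local root user over the ldapi socket using SASL EXTERNAL authentication instead of simple bind:

      root@ldap:/# ldapadd -Y EXTERNAL -H ldapi:/// -f /tmp/test.d/cn\=config/cn\=schema/cn\=\{5\}microsoft.ldif

    The alternative is to grant cn=admin write access to cn=config via olcAccess, but the EXTERNAL route avoids touching the ACLs at all.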

    Read the article

  • INACCESSIBLE_BOOT_DEVICE after installing Linux on same drive

    - by kdgregory
    History: My PC was configured with two drives: an 80 GB on IDE 0 Primary that was running Win2K, and a 320 GB on IDE 0 Secondary that was running Linux (Ubuntu). I decided to pull the 80 GB drive out of the system, so I dd'd the entire 80 GB drive (/dev/sda) onto the 320 (/dev/sdb) -- this included the MBR and partition table. Then I pulled the drive, plugged the 320 into IDE 0 Primary, and rebooted. The Windows partition worked at this point. Then I installed Ubuntu into the remaining space on the 320. It works. However, when I try to boot into Windows, I get a BSOD with the following message:

      *** STOP: 0x0000007B (0x89055030,0xC000014F,0x00000000,0x00000000)
      INACCESSIBLE_BOOT_DEVICE

    Before the BSOD I see the Win2K splash screen, and it claims to be "starting windows" for a couple of seconds -- so it appears that the first-stage boot loader is working as expected. Ditto when I try booting in Safe Mode. After reading the Microsoft KB article, I booted into the recovery console and tried running chkdsk /r. It refused to run, claiming that the drive was corrupted (sorry, didn't write down the exact error message). However, I can mount the drive from Linux and access all files. And for what it's worth, I can scan the drive using the Linux "Disk Utility" (this is Ubuntu; the menus don't show real program names), and it claims the drive is clean. The KB article mentioned that boot.ini could be the problem, so here it is:

      timeout=10
      default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
      [operating systems]
      multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows 2000 Professional" /fastdetect

    Any pointers on what to do next?

    Read the article

  • How to export and import a user profile from one Quassel core to another?

    - by Zertrin
    I have been using Quassel as my bouncer for IRC for quite a long time now. We (a group of administrators of a small network) have set up a shared Quassel core with many users on the same core. But now I would like to export everything related to my user account from the Quassel database on this core, in order to re-import it later into another Quassel core on my own server. Unfortunately, while a feature for adding users has been implemented in Quassel, nothing is provided so far for either exporting or deleting a user. (If a delete-a-user feature were available, I could make a copy of the current database, delete all the other users leaving only mine, and use the resulting database on my own server, while leaving the original untouched on the shared server.) Despite extensive research on the Internet on this subject, I've found no solution so far. I should point out that the backend database for the core has been migrated from the default SQLite backend to a PostgreSQL backend as the database grew considerably (over 1.5 GB by now). However, I'd be glad to hear of any working solution (SQLite or PostgreSQL backend) describing a way to export the data related to a specific user profile and then re-import it into a new Quassel core database.

    Read the article

  • Cron job checking for changes in Git repository

    - by HNygard
    We have just moved our server configs to a Git repository, so there should not be any changes in any of the repository folders. I was thinking about how I could set up a cron job to check for any uncommitted changes. How could a cron job be set up to check for changes in a Git repository? Grepping the output of the git status command might just do it; grep and cron jobs are not my strong side. Here are some sample outputs from git status. Standing in the folder containing the git repository (e.g. /path/gitrepo/) with changed files:

      $ git status
      # On branch master
      # Changes not staged for commit:
      #   (use "git add <file>..." to update what will be committed)
      #   (use "git checkout -- <file>..." to discard changes in working directory)
      #
      #       modified:   apache2/sites-enabled/000-default
      #
      # Untracked files:
      #   (use "git add <file>..." to include in what will be committed)
      #
      #       apache2/conf.d/test
      no changes added to commit (use "git add" and/or "git commit -a")

    Standing in the folder when there are no changes:

      $ git status
      # On branch master
      nothing to commit (working directory clean)

    Update: Being synced up with origin is not important. There should be no local changes. Local files that must be in place go into the .gitignore file. In addition to the server configs there are also git repos for content (static web sites, web apps, WordPress, etc). None of the repositories should have local changes. We might use Puppet in the long run, since it's being used for development of one of the web apps.
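    A minimal sketch of such a check -- treating the repository path, script location and e-mail address as placeholders, and assuming cron's normal behaviour of mailing any output to MAILTO -- builds on the fact that git status --porcelain prints nothing at all when the working tree is clean, so any output means uncommitted or untracked changes:

      #!/bin/sh
      # check-git-clean.sh -- report uncommitted changes in a repo (illustrative)
      REPO=/path/gitrepo
      CHANGES=$(cd "$REPO" && git status --porcelain)
      if [ -n "$CHANGES" ]; then
          echo "Uncommitted changes in $REPO:"
          echo "$CHANGES"
      fi

      # crontab entry (hourly; cron mails any output it produces):
      # MAILTO=admin@example.com
      # 0 * * * * /usr/local/bin/check-git-clean.sh

    Because the script is silent when everything is clean, cron only sends mail when something actually needs attention.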

    Read the article

  • I (stupidly) converted a TrueCrypt encrypted disk to GPT in Disk Management: now TrueCrypt won't mount it

    - by asilentfire
    Backstory: After moving a Macrium Reflect disk image from my TrueCrypt external drive (with whole-disk encryption) onto an unencrypted drive and using Windows PE with Macrium Reflect to restore my internal disk to the recovery image on the external unencrypted drive, my Windows 8 failed to boot. I then went back and also recovered the System Partition (looking now, it is currently EFI), but I still couldn't boot into my backup. I was in a hurry to get online for something, so I just did a clean install of Windows 8 without the backup. After I installed Windows 8, I went into Disk Management out of curiosity to see if there were other partitions with Windows 8 that Macrium might have missed, and there is (by default) a Recovery Partition of 100 MB. My memory of this is hazy, as I was trying to get up and running for an exam at 4 AM: something in Disk Management prompted me to convert my encrypted external drive to GPT. I have no idea why I did this, but I went ahead and allowed it to convert my TrueCrypt drive to GPT. Now I can't mount the drive in TrueCrypt. Disk Management sees it as Disk 1, Basic, and Unallocated. I tried converting it back to MBR with Disk Management, but no dice with TrueCrypt :( If I try to mount the disk in TrueCrypt I get the message: "Incorrect password or not a TrueCrypt volume". I should never have messed with a TrueCrypt drive in Disk Management, but I did. I have important college work on that drive, and fear I have lost it forever. PLEASE HELP

    Read the article

  • Hiding my location to websites with region-specific languages/content

    - by Tudor
    I just went to download Microsoft Security Essentials, and it enraged me as it redirected me to a site in my home language and not the default English. If I go to America, I don't want them to speak Swahili. It reminded me of all the other websites that try to do the same. I don't want my content in Greek when I'm on vacation! I for one simply can't work on a computer unless the language is English (or unless there's a VERY good reason to change the language). Location-aware content is only good for download mirrors, and even then I would rather pick from a list of countries myself (or if you can't speak anything but your own language). I know websites get your location from your IP and ISP, but is there any way you can inhibit this behaviour at the browser level? Is there any Chrome/Firefox extension for it? Do I really have no choice but to hide my IP? There are all sorts of services that claim they hide your IP for free so that people can't log and trace your steps through the internet, but they're probably logging it themselves and making money off it. Why else would they be free? I've found that Firefox has an option that says "Choose your preferred language when displaying pages". I haven't found anything for Chrome.
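    For what it's worth, the Firefox option mentioned above controls the Accept-Language request header, which is separate from IP-based geolocation. A quick way to see which of the two a given site keys off -- a sketch, with the URL as a placeholder -- is to request the page with an explicit English preference and look at the response headers:

      $ curl -sI -H "Accept-Language: en-US,en;q=0.8" http://www.example.com/ | grep -iE 'content-language|location'

    If the site still serves or redirects to a localized page with that header set, it is deciding on your IP address, and only a proxy/VPN exit in another country will change what you see.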

    Read the article

  • If I define `mydomain`, postfix does not expand mail aliases

    - by Norky
    I have postfix v2.6.6 running on CentOS 6.3, hostname priest.ocsl.local (a private, internal domain), with a number of aliases:

      supportpeople: [email protected], [email protected], [email protected]
      requests: "|/opt/rt4/bin/rt-mailgate --queue 'general' --action correspond --url http://localhost/", supportpeople
      help: "|/opt/rt4/bin/rt-mailgate --queue 'help' --action correspond --url http://localhost/", supportpeople

    If I leave postfix with its default configuration, the aliases are resolved correctly/as I expect, so that incoming mail to, say, [email protected] will be piped through the rt-mailgate command and also be delivered (via the mail server for ocsl.co.uk, a publicly resolvable domain) to [email protected], user2, etc. The problem comes when I define mydomain = ocsl.co.uk in /etc/postfix/main.cf (with the intention that outgoing mail come from, for example, [email protected]). When I do this, postfix continues to run the piped command correctly; however, it no longer expands the nested aliases as I expect: instead of trying to deliver to [email protected], user2, etc., it tries to send to [email protected], which does not exist on the upstream mail server and generates NDRs. postconf -n for the non-working configuration follows (the working configuration differs only by the "mydomain" line):

      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      command_directory = /usr/sbin
      config_directory = /etc/postfix
      daemon_directory = /usr/libexec/postfix
      data_directory = /var/lib/postfix
      debug_peer_level = 2
      html_directory = no
      inet_interfaces = all
      inet_protocols = all
      mail_owner = postfix
      mailq_path = /usr/bin/mailq.postfix
      manpage_directory = /usr/share/man
      mydestination = $myhostname, localhost.$mydomain, localhost
      mydomain = ocsl.co.uk
      newaliases_path = /usr/bin/newaliases.postfix
      queue_directory = /var/spool/postfix
      readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
      sample_directory = /usr/share/doc/postfix-2.6.6/samples
      sendmail_path = /usr/sbin/sendmail.postfix
      setgid_group = postdrop
      unknown_local_recipient_reject_code = 550

    We did have things working as we expected/wanted previously on an older system running Sendmail.
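    A first diagnostic step -- offered as an assumption about where to look, not a diagnosis of this particular setup -- is to compare the effective values postfix derives with and without the mydomain line, since several defaults ($myorigin, $mydestination, and the domain appended to unqualified recipients during alias expansion) are derived from it:

      $ postconf mydomain myorigin mydestination append_at_myorigin append_dot_mydomain

    Running this against both the working and the non-working configuration and diffing the output shows exactly which derived parameter changed when mydomain was added, which is usually the one responsible for how the expanded alias addresses get qualified.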

    Read the article

  • SSH attack on CentOS Amazon EC2

    - by user37143
    Hi, I run a few RightScale CentOS AMI based instances on Amazon EC2. Two months back I found that our SSHD security was compromised (I had added hosts.allow and hosts.deny for ssh). So I created new instances, set up IP-based ssh access that allows only our IPs through the AWS firewall (ec2-authorize), and changed the default ssh port 22 to some other port. But two days back I found I was not able to log in to the server; when I tried on port 22 the ssh connected, and I found that sshd_config had been changed. When I tried to edit sshd_config I found root had no write permission on the file. So I tried a chmod and it said access denied for the 'root' user. This is very strange. I checked the secure log and the shell history and found nothing informative. I have PHP, Ruby on Rails, Java, and WordPress apps running on these servers. This time I did a chkrootkit scan and found nothing. I renamed the /etc/ssh folder and reinstalled openssh through yum. I have faced this on 3 instances of CentOS (5.2, 5.4); I have instances on Debian as well and those work fine. Is this a CentOS/RightScale issue? Guys, what security measures should I take to prevent this? Please support me, this is very critical. Thanks
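    As an aside -- a hedged guess, not a confirmed diagnosis -- the symptom of root being unable to write to or chmod its own file is exactly what you would see if an intruder set the immutable attribute on it, which is a common way to pin a modified sshd_config in place. That is quick to check and undo:

      $ lsattr /etc/ssh/sshd_config
      # an 'i' among the attribute flags means immutable; it can be cleared with:
      $ chattr -i /etc/ssh/sshd_config

    If the flag is set, the box should still be treated as compromised and rebuilt rather than just repaired.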

    Read the article

  • Chrome Lockups Windows 7 64-bit

    - by Mike Chess
    I'm running Google Chrome (6.0.427.0 dev) on Windows 7 Home Premium 64-bit (AMD Phenom 3.00 GHz, 8 GB RAM). The computer locks up hard after running Chrome for about five minutes. The lockup happens whether Chrome is actually being used to browse web sites or is just idling. No programs can be started or interacted with when this happens. The computer must be power-cycled to recover. The lockup happens regardless of which web sites are being browsed. The system event logs do not show any events around the time the lockup occurred. All other applications run just fine on this system. In fact, Chrome ran without issue for several months on this system (the system was brand new 03-2010). I also run the same version of Chrome on other computers (Windows XP SP3) without issue. I've come to really like Chrome and use it as my default browser whenever possible. What could be causing Chrome to lock up the system as it does? Does Chrome have any logs that aren't part of the Windows event log? Does Chrome have a debug command-line switch that might reveal more about what happens?
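    On the last question -- treating the exact behaviour as something to verify against this particular Chrome version -- Chrome does accept logging switches on the command line, which typically write a chrome_debug.log into the profile's User Data directory and may capture something the Windows event log misses:

      chrome.exe --enable-logging --v=1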

    Read the article

  • Apache APC (Windows): Can I optimize these APC settings more?

    - by ar099968
    I would like to optimize APC some more, but I am not sure where I could do something. First, here are the stats after 1 week of running with the current configuration:

      General Cache Information
      APC Version           3.1.9
      PHP Version           5.4.4
      APC Host              XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
      Server Software       Apache
      Shared Memory         1 Segment(s) with 128.0 MBytes (IPC shared memory, Windows Slim RWLOCK (native) locking)
      Start Time            2014/06/08 05:00:00
      Uptime                6 days, 11 hours and 55 minutes
      File Upload Support   1

      Host Status Diagrams
      Memory Usage          Free: 99.7 MBytes (77.9%)   Used: 28.3 MBytes (22.1%)
      Hits & Misses         Hits: 510818 (99.9%)   Misses: 608 (0.1%)
      Fragmentation         0.60% (609.8 KBytes out of 99.7 MBytes in 83 fragments)

      File Cache Information
      Cached Files                  693 (35.4 MBytes)
      Hits                          5143359
      Misses                        1087
      Request Rate (hits, misses)   13.24 cache requests/second
      Hit Rate                      13.24 cache requests/second
      Miss Rate                     0.00 cache requests/second
      Insert Rate                   0.01 cache requests/second
      Cache full count              0

      User Cache Information
      Cached Variables              0 (0.0 Bytes)
      Hits                          0
      Misses                        0
      Request Rate (hits, misses)   0.00 cache requests/second
      Hit Rate                      0.00 cache requests/second
      Miss Rate                     0.00 cache requests/second
      Insert Rate                   0.00 cache requests/second
      Cache full count              0

      Runtime Settings
      apc.cache_by_default        1
      apc.canonicalize            1
      apc.coredump_unmap          0
      apc.enable_cli              0
      apc.enabled                 1
      apc.file_md5                0
      apc.file_update_protection  2
      apc.filters                 -/apc.php$, -/apc_clean.php$, -.tpl.cache.php$, -.tpl.php$, -.string.cache.php$, -.string.php$
      apc.gc_ttl                  3600
      apc.include_once_override   0
      apc.lazy_classes            0
      apc.lazy_functions          0
      apc.max_file_size           2M
      apc.num_files_hint          7000
      apc.preload_path
      apc.report_autofilter       0
      apc.rfc1867                 0
      apc.rfc1867_freq            0
      apc.rfc1867_name            APC_UPLOAD_PROGRESS
      apc.rfc1867_prefix          upload_
      apc.rfc1867_ttl             3600
      apc.serializer              default
      apc.shm_segments            1
      apc.shm_size                128M
      apc.shm_strings_buffer      4M
      apc.slam_defense            0
      apc.stat                    1
      apc.stat_ctime              0
      apc.ttl                     7200
      apc.use_request_time        1
      apc.user_entries_hint       4096
      apc.user_ttl                7200
      apc.write_lock              1
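    Given that the hit rate is already 99.9%, three-quarters of the shared memory is free, and the user cache is unused, there is not much headroom left. One commonly cited knob -- a hedged suggestion only, since it changes deployment behaviour rather than being a free win -- is apc.stat, which when set to 0 stops APC from stat()ing every cached file on each request, at the cost of needing a cache clear or server restart whenever code is deployed:

      ; php.ini / apc.ini (illustrative)
      apc.stat = 0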

    Read the article

  • Uploads fail with shorewall enabled

    - by JamesArmes
    I have an Ubuntu 8.04 server with shorewall 4.0.6 installed. When I try to upload files using FTP, SCP, or cURL, the upload stalls almost immediately and eventually times out. If I turn off shorewall, the uploads work fine. I don't have any rules that specifically allow FTP and I'm not too concerned with it, but I do need to be able to upload via 22 (SCP) and 80 & 443 (cURL). This is what my rules look like:

      COMMENT Allow Server to respond to any web (80) and SSL (443) requests
      ACCEPT net $FW tcp 80
      ACCEPT $FW net tcp 80
      ACCEPT net $FW tcp 443
      ACCEPT $FW net tcp 443
      COMMENT Allow Server to respond to SNMPD (161) requests
      ACCEPT net $FW udp 161
      COMMENT Allow Server to respond to MySQL (3306) requests (for MySQL Graphing)
      ACCEPT net $FW tcp 3306
      COMMENT Allow Server to respond to any SSH connection attempts, and to SSH out.
      SSH/ACCEPT net $FW
      SSH/ACCEPT $FW net
      COMMENT Allow Server to make DNS Requests out.
      DNS/ACCEPT $FW net
      COMMENT Default "close" anything else.
      Ping/REJECT net $FW
      ACCEPT $FW net icmp
      #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    I expected the top four ACCEPT lines to allow inbound and outbound traffic over 80 and 443, and I expected the two SSH/ACCEPT lines to allow inbound and outbound traffic over 22, including SCP. Any help is greatly appreciated. /etc/shorewall/policy contains the following (all lines above these are commented out):

      #
      # Allow all connection requests from teh firewall to the internet
      #
      $FW net ACCEPT
      #
      # Policies for traffic originating from the Internet zone (net)
      # Drop (ignore) all connection requests from the Internet to the firewall
      #
      net all DROP info
      # THE FOLLOWING POLICY MUST BE LAST
      # Reject all other connection requests
      all all REJECT info
      #LAST LINE -- ADD YOUR ENTRIES ABOVE THIS LINE -- DO NOT REMOVE

    Read the article

  • How to fix UNMOUNTABLE_BOOT_VOLUME (0x000000ED) on my Windows XP DELL laptop?

    - by Neil
    I have a Dell Latitude D410 running Windows XP. I am receiving the STOP: 0x000000ED (0X899CF030,0XC0000185,0X00000000,0X00000000) blue screen. Initially, I tried everything specified in the Microsoft KB articles. At that time, I was able to boot into the general safe mode. I pulled the hard drive and was able to run chkdsk on it -- it noted that it had fixed some errors, but I was still unable to boot. I then put a brand new hard drive in the laptop. Windows XP installation worked up until the reboot, at which time the exact same error message came back up. What I have tried (all since the new hard drive was installed):

      - chkdsk /R
      - All suggested solutions in Microsoft KB articles
      - Reseating RAM
      - Opened the laptop, reseated all connectors, looked for signs of damage (saw none)
      - Reset BIOS options to default
      - Ran the basic Dell diagnostics
      - Ran MEMTEST86+ V4.10 for 15 passes (overnight); 0 errors

    I have looked at the existing question "How can I boot XP after receiving stop error 0x000000ED" and I am currently in the process of downloading the Ultimate Boot CD to use as a test, but I am not holding out a lot of hope as I really doubt this brand new hard drive is bad. Can anyone think of other areas I am missing?

    Read the article

  • Mysterious swap usage on EC2

    - by rusty
    We're in the middle of a project to move our infrastructure from a co-lo situation into Amazon EC2, and we've noticed some weird memory characteristics of the processes in our setup. Without going into too much detail about the specifics of our processes, we've noticed that on our EC2 instances "top" will show processes using a lot of swap space -- in fact, much more than the amount of available swap or (if you add it all up) more than the available disk. Here's a sample top output:

      Mem:   7136868k total,  5272300k used,  1864568k free,   256876k buffers
      Swap:  1048572k total,        0k used,  1048572k free,  2526504k cached

        PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  SWAP COMMAND
       4121 jboss  20   0 5913m 603m  14m S  0.7  8.7 3:59.90  5.2g java
      22730 root   20   0 2394m 4012 1976 S  2.0  0.1 4:20.57  2.3g PassengerHelper
      20564 rails  20   0 2539m 220m 9828 S  0.3  3.2 0:23.58  2.3g java
       1423 nscd   20   0  877m 1464  972 S  0.0  0.0 0:03.89  876m nscd

    You can see, for instance, that jboss is reportedly using 5.2 gigs of swap space, which is definitely impossible since there's only 1G allocated and none of it is in use (probably because there's still 1.8G of RAM free). And here are the results of uname -a:

      Linux xxx.yyy.zzz 2.6.35.14-106.53.amzn1.x86_64 #1 SMP Fri Jan 6 16:20:10 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    We're running an AMI based off the default Amazon Linux AMI (Amazon Linux AMI release 2011.09, so some RHEL5 and RHEL6), with not too many customizations and definitely no kernel-level customizations. Something here tells me that on this particular kernel/distribution, the reporting of swap or maybe even total memory usage isn't what it appears to be... Any help would be appreciated!
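    For what it's worth, top's SWAP column is computed as VIRT minus RES rather than read from the kernel, so it routinely "exceeds" real swap for processes with large virtual mappings (JVMs especially). A more trustworthy per-process figure -- a sketch, assuming a kernel new enough to expose VmSwap in /proc (2.6.34+, so this 2.6.35 kernel qualifies) -- comes straight from /proc:

      $ grep VmSwap /proc/4121/status
      # or summed over all processes:
      $ awk '/VmSwap/ {sum += $2} END {print sum " kB"}' /proc/[0-9]*/status

    If those numbers come back at or near zero, the instances are fine and it is only top's accounting that looks alarming.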

    Read the article

  • Why is my rsync so slow?

    - by iblue
    My laptop and my workstation are both connected to a Gigabit switch. Both are running Linux. But when I copy files with rsync, it performs badly: I get about 22 MB/s. Shouldn't I theoretically get about 125 MB/s? What is the limiting factor here?

    EDIT: I conducted some experiments.

    Write performance on the laptop. The laptop has an xfs filesystem with full disk encryption. It uses aes-cbc-essiv:sha256 cipher mode with a 256-bit key length. Disk write performance is 58.8 MB/s.

      iblue@nerdpol:~$ LANG=C dd if=/dev/zero of=test.img bs=1M count=1024
      1073741824 Bytes (1.1 GB) copied, 18.2735 s, 58.8 MB/s

    Read performance on the workstation. The files I copied are on a software RAID-5 over 5 HDDs. On top of the RAID is LVM. The volume itself is encrypted with the same cipher. The workstation has an FX-8150 CPU with the native AES-NI instruction set, which speeds up encryption. Disk read performance is 256 MB/s (cache was cold).

      iblue@raven:/mnt/bytemachine/imgs$ dd if=backup-1333796266.tar.bz2 of=/dev/null bs=1M
      10213172008 bytes (10 GB) copied, 39.8882 s, 256 MB/s

    Network performance. I ran iperf between the two clients. Network performance is 939 Mbit/s.

      iblue@raven $ iperf -c 94.135.XXX
      ------------------------------------------------------------
      Client connecting to 94.135.XXX, TCP port 5001
      TCP window size: 23.2 KByte (default)
      ------------------------------------------------------------
      [  3] local 94.135.XXX port 59385 connected with 94.135.YYY port 5001
      [ ID] Interval       Transfer     Bandwidth
      [  3]  0.0-10.0 sec  1.09 GBytes   939 Mbits/sec
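    Since each individual component tests well above 22 MB/s, one plausible remaining bottleneck -- an assumption to test, not a certainty -- is the transport itself: rsync normally runs over ssh, so the CPU cost of the ssh cipher plus rsync's delta algorithm can cap throughput well below wire speed. Two quick experiments with standard rsync options (paths and host names are placeholders):

      # skip the delta algorithm for fresh copies (whole-file transfer over ssh):
      $ rsync -av --whole-file /source/ user@workstation:/dest/
      # or take ssh out of the picture by running an rsync daemon on the receiver
      # and copying over the plain rsync protocol:
      $ rsync -av --whole-file /source/ rsync://workstation/module/

    If the daemon-mode copy is dramatically faster, the limit is encryption CPU rather than disk or network.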

    Read the article

  • Gigabit network limited to 25MB/s by CPU. How to make it faster?

    - by netvope
    I have an Acer Aspire R1600-U910H with an nForce gigabit network adapter. Its maximum TCP throughput is about 25 MB/s, and apparently it is limited by the single-core Intel Atom 230: when the maximum throughput is reached, the CPU usage is about 50%-60%, which corresponds to full utilization considering this is a Hyper-Threading enabled CPU. The same problem occurs on both Windows XP and Ubuntu 8.04. On Windows, I have installed the latest nForce chipset driver, disabled power-saving features, and enabled checksum offload. On Linux, the default driver has checksum offload enabled. There is no Linux driver available on Nvidia's website. ethtool -k eth0 shows that checksum offload is enabled:

      Offload parameters for eth0:
      rx-checksumming: on
      tx-checksumming: on
      scatter-gather: on
      tcp segmentation offload: on
      udp fragmentation offload: off
      generic segmentation offload: off

    The following is the output of powertop when the network is idle:

      Wakeups-from-idle per second : 61.9     interval: 10.0s
      no ACPI power usage estimate available
      Top causes for wakeups:
        90.9% (101.3)     <interrupt> : eth0
         4.5% (  5.0)           iftop : schedule_timeout (process_timeout)
         1.8% (  2.0)   <kernel core> : clocksource_register (clocksource_watchdog)
         0.9% (  1.0)          dhcdbd : schedule_timeout (process_timeout)
         0.5% (  0.6)   <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    And when the maximum throughput of about 25 MB/s is reached:

      Wakeups-from-idle per second : 11175.5  interval: 10.0s
      no ACPI power usage estimate available
      Top causes for wakeups:
        99.9% (22097.4)   <interrupt> : eth0
         0.0% (  5.0)           iftop : schedule_timeout (process_timeout)
         0.0% (  2.0)   <kernel core> : clocksource_register (clocksource_watchdog)
         0.0% (  1.0)          dhcdbd : schedule_timeout (process_timeout)
         0.0% (  0.6)   <kernel core> : neigh_table_init_no_netlink (neigh_periodic_timer)

    Notice the 20000 interrupts per second. Could this be the cause of the high CPU usage and low throughput? If so, how can I improve the situation? The other computers in the network can usually transfer at 50+ MB/s without problems. And a minor question: how can I find out which driver is in use for eth0?
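    On the minor question, the driver name is reported directly by ethtool. On the interrupt storm, interrupt coalescing is the usual knob to try -- a sketch only, since whether this particular nForce driver honours coalescing settings is an assumption to verify:

      $ ethtool -i eth0                      # driver name, version and bus info
      $ ethtool -c eth0                      # current interrupt coalescing settings
      $ sudo ethtool -C eth0 rx-usecs 100    # batch receive interrupts, if supported

    Fewer, larger interrupt batches generally trade a little latency for much less per-packet CPU overhead, which is exactly where a single-core Atom struggles.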

    Read the article

  • Host is missing hostname and/or domain

    - by anlawang
    I use puppet 0.25.4 on Ubuntu 10.04. When puppet was installed, I got the messages below:

      Nov 29 10:30:30 puppet puppetmasterd[4422]: Host is missing hostname and/or domain: pclient.example.com
      Nov 29 10:30:30 puppet puppetmasterd[4422]: Compiled catalog for pclient.example.com in 0.02 seconds

    I don't know how to fix it. Who can help me? Thank you!

    My configuration: I used apt-get to install puppet, so some of the configuration was set up by the packages.

    puppet.conf on the client:

      [main]
      server=puppet.example.com
      logdir=/var/log/puppet
      vardir=/var/lib/puppet
      ssldir=/var/lib/puppet/ssl
      rundir=/var/run/puppet
      factpath=$vardir/lib/facter
      pluginsync=false
      templatedir=$confdir/templates
      prerun_command=/etc/puppet/etckeeper-commit-pre
      postrun_command=/etc/puppet/etckeeper-commit-post
      certname=pclient.example.com
      node_name=cert

      [puppetd]
      runinterval=30

    puppet.conf on the server:

      [main]
      logdir=/var/log/puppet
      vardir=/var/lib/puppet
      ssldir=/var/lib/puppet/ssl
      rundir=/var/run/puppet
      factpath=$vardir/lib/facter
      pluginsync=true
      templatedir=$confdir/templates
      prerun_command=/etc/puppet/etckeeper-commit-pre
      postrun_command=/etc/puppet/etckeeper-commit-post

    I use the default node in site.pp. I am new to puppet, so I don't know the reason for these problems. Thank you again!
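    This warning appears to be the puppetmaster complaining that it cannot derive a fully qualified name for the node, which usually comes down to what facter reports on the client. A quick check -- a sketch, assuming the standard facter facts are available -- is:

      $ facter hostname domain fqdn

    If domain or fqdn come back empty on pclient, giving the machine a proper FQDN (an entry in /etc/hosts, or /etc/hostname plus a search/domain line in /etc/resolv.conf) typically makes the warning go away; this is offered as a likely direction rather than a confirmed fix.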

    Read the article

  • Ubuntu suspend works, but the machine immediately wakes up again

    - by Yoav Aner
    I'm having a strange problem after upgrading from Ubuntu 11.04 to 12.04. Previously I could suspend just fine: the computer would switch itself off, and pressing the power button would switch it on and it would resume. After upgrading to 12.04, however, when I suspend it does (almost) the same -- it turns itself off, but about 2 seconds later the computer turns itself on again and comes back from suspend. I haven't changed any of the hardware or BIOS settings and it was working fine before. I have also tried every relevant switch of pm-suspend, setting acpi_sleep=nonvs in /etc/default/grub, and also this suggestion, but nothing seems to make a difference... UPDATE: I just tested suspend using the 12.04 live CD and it worked perfectly fine, but when I boot normally it doesn't. ANOTHER UPDATE: After re-installing I noticed that I can suspend. However, after restoring my home folder the strange suspend problem happens again. I then created a new user account; when I log in to that other account I can suspend without a problem... So this seems specific to my account only. How/what can cause this in my own user settings?

    Read the article

  • How can I determine the IP addresses allocated by DHCP on a router that I'm connected to?

    - by user234831
    This "router" is not a typical situation. I'm using my phone as a hotspot and can only configure a select number of DHCP options. I can manage the limit on how many devices/clients can use my phone as a hotspot. I have to select from a radio-button list with the options: 2,3,4,5, or 8 I can specify the DHCP starting IP address. In this case, it begins at 192.168.6.106 When I'm connected via WIFI to my phone, an ipconfig /all command shows me that the default gateway is 192.168.6.1 and my IPv4 address is 192.168.1.148. I have the luxury of connecting another device to the phone and that device was assigned 192.168.1.121. I've tried connecting to 192.168.6.1, hoping for some sort of router setup page that I'm used to seeing, but there is no such thing or maybe it's just a matter of incompatable operating systems. In summary, the "router" (phone) has an IP address of 192.168.6.1 and a DHCP server that begins at 192.168.6.106 and allows up to 8 connections. Normally, I would assume a range of 192.168.6.106 - 192.168.6.113, but connected clients are showing otherwise. How can I figure out which IP addresses are set aside by DHCP for clients?

    Read the article

  • Why Are SPF Records Failing?

    - by robobobobo
    OK, I've been going through various sites, resources, and topics here trying to figure out what is wrong with my SPF records, but no matter what I do they don't seem to pass. Here's what I have:

      "v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 -all"

    I've tried multiple different tools to check my SPF records; some give me a pass, some don't. But I can't send mail to certain Google Apps accounts -- it just bounces back all the time, which is very annoying. Anyone got any ideas? I have noticed that the source IP address is not one of the IPv4 addresses I've defined, but cPanel wouldn't let me add that address to the record. And here's the result of the tests I'm getting back from port25.com. I'm running WHM, by the way, and have enabled SPF and DKIM.

      Summary of Results
      SPF check:          fail
      DomainKeys check:   neutral
      DKIM check:         pass
      Sender-ID check:    fail
      SpamAssassin check: ham

      Details:
      HELO hostname: server1.viralbamboo.com
      Source IP:     2a01:258:f000:6:216:3eff:fe87:9379
      mail-from:     ###@viralbamboo.com

      SPF check details:
      Result: fail (not permitted)
      ID(s) verified: smtp.mailfrom=###@viralbamboo.com
      DNS record(s):
        viralbamboo.com. SPF (no records)
        viralbamboo.com. 13180 IN TXT "v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 -all"
        viralbamboo.com. AAAA (no records)
        viralbamboo.com. 13180 IN MX 0 viralbamboo.com.
        viralbamboo.com. AAAA (no records)

      DomainKeys check details:
      Result: neutral (message not signed)
      ID(s) verified: header.From=###@viralbamboo.com
      DNS record(s):

      DKIM check details:
      Result: pass (matches From: ###@viralbamboo.com).
      ID(s) verified: header.d=viralbamboo.com
      Canonicalized Headers:
        content-type:multipart/alternative;'20'boundary="4783D1BE-5685-41CF-B91B-1F15E91DD1E3"'0D''0A'
        date:Mon,'20'1'20'Jul'20'2013'20'21:30:47'20'+0000'0D''0A'
        subject:=?utf-8?Q?test?='0D''0A'
        to:"[email protected]?="'20''0D''0A'
        from:=?utf-8?Q?Rob_Boland_-_Viralbamboo?='20'<###@viralbamboo.com'0D''0A'
        mime-version:1.0'0D''0A'
        dkim-signature:v=1;'20'a=rsa-sha256;'20'q=dns/txt;'20'c=relaxed/relaxed;'20'd=viralbamboo.com;'20's=default;'20'h=Content-Type:Date:Subject:To:From:MIME-Version;'20'bh=CJMO7HYeyNVGvxttf/JspIMoLUiWNE6nlQUg5WjTGZQ=;'20'b=;
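    For what it's worth, the port25 report shows the message arriving from an IPv6 source (2a01:258:f000:6:...), while the SPF record only authorizes two IPv4 addresses and the domain has no AAAA record for +a to match, so any IPv6-delivered message can only fail the -all policy. One hedged fix -- the prefix below is a placeholder, not necessarily the right one for this server -- is to add an ip6 mechanism to the TXT record (or to stop the server sending over IPv6 entirely):

      v=spf1 +a +mx +ip4:217.78.0.92 +ip4:217.78.0.95 +ip6:2a01:258:f000:6::/64 -all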

    Read the article

  • Linux networking: how to redirect incoming connections from an old server to a new server?

    - by aliz
    Hi, I'm in the process of moving my old server to a new server, but I will keep the old server running for database replication, load balancing, etc. Each server has a separate internet connection with a static IP, and they are connected to each other through a local Ethernet connection. I've got Ubuntu 8.04 32-bit running on the old server and Debian 6.0 64-bit on the new one. The shorewall firewall is installed on both servers. There are some outdoor devices which periodically send data to port 43597 at the old server's IP address. I can run multiple instances of the network service responsible for receiving data from the devices on one server, but only on different ports. Here's the question: how can I run the service on the new server and have connections coming to the old server redirected to it, while new devices can still connect to the new server's IP address, preferably on the same port and the same service, until all devices get updated to send to the new server? I've tried a shorewall DNAT rule, but it seems the new server's default route would have to be changed to the Ethernet connection, which breaks other things. I also found out about the redir utility, but I haven't tried it yet. Is there any best practice or simple solution for such a scenario that I'm not aware of? Thanks in advance.
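    One generic pattern for this -- sketched in raw iptables terms rather than shorewall syntax, with NEW_SERVER_IP and eth1 as placeholders, and with the usual caveat that shorewall may overwrite hand-added rules when it restarts -- is to DNAT the device traffic on the old server and masquerade it out over the private Ethernet link. Because the forwarded packets then appear to come from the old server, the replies return over the same link and the new server's default route never has to change:

      # on the old server (eth1 = private link to the new server):
      $ iptables -t nat -A PREROUTING  -p tcp --dport 43597 -j DNAT --to-destination NEW_SERVER_IP:43597
      $ iptables -t nat -A POSTROUTING -p tcp -d NEW_SERVER_IP --dport 43597 -j MASQUERADE
      $ sysctl -w net.ipv4.ip_forward=1

    The same effect can be expressed as a shorewall DNAT rule plus SNAT/MASQUERADE entry once the idea is proven; redir is a userspace equivalent if you would rather avoid NAT altogether.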

    Read the article
