Search Results

Search found 8547 results on 342 pages for 'hash join'.

  • RTorrent stops my torrents, crashes, and I have to manually re-add torrents and start them. How can I stop this cycle of doom?

    - by meder
    I cannot use Transmission, which is the best torrent client, because it's banned from one of the trackers I use, so I am forced to use rtorrent. Normally I am all for command-line programs, but rtorrent (0.8.6/0.12.6) is simply frustrating and, in my opinion, not intuitive. I have 400 MB left on the HD, which is more than enough to download this 200 MB AVI, yet rtorrent stops the download and shows [CLOSED] next to the torrent. I hit Ctrl-R, which invokes the local hash check, and after that's done rtorrent simply dies. Afterwards, it gives me:

        rtorrent: TrackerManager::send_later() m_control->set() == DownloadInfo::STOPPED

    So I have to open rtorrent again, hit Enter, type /home/meder/file.avi.torrent, press the down arrow, and hit Ctrl-S. I am looking for several things:

    1. How can I tell rtorrent not to worry about disk space? Again, it stops the torrent even though my HD has 400 MB free and the torrent I'm downloading is only 200 MB (there are no other torrents).
    2. Why does Ctrl-R fail so hard? Why does it cause rtorrent to crash?
    3. If #2 is not solvable, can someone suggest an easier way to add a torrent and start it than typing the torrent name, hitting the down arrow, and pressing Ctrl-S?
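
    A minimal ~/.rtorrent.rc sketch addressing points 1 and 3, assuming a reasonably recent rtorrent build (paths, thresholds, and intervals below are illustrative, not the asker's config): the low_diskspace schedule controls when rtorrent closes torrents for lack of space, and a watch_directory schedule auto-loads and starts any .torrent dropped into a folder.

        # ~/.rtorrent.rc -- illustrative values
        # Only close torrents when free space drops below 100 MB
        # (checked every 60 seconds):
        schedule = low_diskspace,5,60,close_low_diskspace=100M
        # Auto-load and start any torrent dropped into ~/watch,
        # replacing the Enter / down-arrow / Ctrl-S dance:
        schedule = watch_directory,5,5,load_start=~/watch/*.torrent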

  • IP tables blocking access to most hosts but some accesses being logged

    - by epo
    What am I getting wrong? A while back I locked down my web hosting service while hardening it, or at least trying to. Apache listens on port 80 only, and I set up iptables using the following:

        IPS="list of IPs"
        iptables --new-chain webtest
        # Accept all established connections
        iptables -A INPUT --protocol tcp --dport 80 --jump webtest
        iptables -A INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
        iptables -A webtest --match state --state ESTABLISHED,RELATED --jump ACCEPT
        for ip in $IPS; do
            iptables -A webtest --match state --state NEW --source $ip --jump ACCEPT
        done
        iptables -A webtest --jump DROP

    However, looking at my Apache logs I notice various entries in access_log, e.g.:

        221.192.199.35 - - [16/May/2010:13:04:31 +0100] "GET http://www.wantsfly.com/prx2.php?hash=926DE27C156B40E55E4CFC8F005053E2D81E6D688AF0 HTTP/1.0" 404 206 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)"
        201.228.144.124 - - [16/May/2010:11:54:16 +0100] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 226 "-" "-"
        207.46.195.224 - - [16/May/2010:04:06:48 +0100] "GET /robots.txt HTTP/1.1" 200 311 "-" "msnbot/2.0b (+http://search.msn.com/msnbot.htm)"

    How are these slipping through? I don't mind the indexing bots (though I am a little surprised to see them get through); I suppose they must be getting through using the ESTABLISHED,RELATED rules. And no, I can't for the life of me remember why the first match-state rule is there. So, two questions: is there a better way to set up iptables to restrict access to specified hosts, and how exactly are these three examples slipping through?
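
    A hedged first diagnostic rather than an answer: if the packet counters on the webtest chain are zero, the rules were never loaded (or were flushed, e.g. by a reboot without an iptables-save/restore step), which would explain unrestricted access better than any hole in the chain logic. Standard iptables tooling:

        # Show rules with packet/byte counters; zero counters on the
        # webtest chain mean traffic never traverses it:
        iptables -L -v -n --line-numbers
        # Dump the ruleset exactly as loaded in the kernel:
        iptables-save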

  • "Can't find root filesystem / error mounting /dev/root" when booting to new kernel

    - by salparadise
    I am trying to upgrade my kernel from 2.6.18-274 to 2.6.39 for some wireless card drivers. When I boot into the new kernel I get "Can't find root filesystem / error mounting /dev/root". Googling led me to this page: http://fedoraproject.org/wiki/Common_kernel_problems#Can.27t_find_root_filesystem_.2F_error_mounting_.2Fdev.2Froot. From what I am reading it seems to be an issue with a driver for my SATA controller or HD, but I can't find what option I need to add to the kernel. Doing a diff from the old initrd to the new one gives me the following:

        root-> diff /tmp/kafter /tmp/kbefore
        6a7,8
        > lib/dm-message.ko
        > lib/dm-region_hash.ko
        8a11
        > lib/dm-raid45.ko
        13d15
        < lib/dm-region-hash.ko
        16a19
        > lib/dm-mem-cache.ko

    Do I need any of those? I'm not sure I would need dm-raid45.ko, as I am not running a RAID. I have the same SATA and IDE options configured for both kernels, so I'm not sure what else to look for; any help is appreciated. Additionally, here is the HW info:

        00:1f.2 IDE interface: Intel Corporation 82801FB/FW (ICH6/ICH6W) SATA Controller (rev 03) (prog-if 8f [Master SecP SecO PriP PriO])
                Subsystem: Hewlett-Packard Company Unknown device 3006
                Flags: bus master, 66MHz, medium devsel, latency 0, IRQ 233
                I/O ports at 1818 [size=8]
                I/O ports at 1830 [size=4]
                I/O ports at 1820 [size=8]
                I/O ports at 1834 [size=4]
                I/O ports at 14f0 [size=16]
                Capabilities: [70] Power Management version 2

        root-> smartctl -a /dev/sda
        ...
        === START OF INFORMATION SECTION ===
        Device Model: WDC WD5000AADS-00S9B0
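
    A hedged suggestion, assuming the Fedora-era mkinitrd tooling that matches a 2.6.x kernel: the ICH6 SATA controller is normally driven by ata_piix, and this symptom usually means the new initrd is missing the controller (or filesystem) module. Rebuilding the initrd with the driver forced in is a cheap test; the kernel version string and image path below are illustrative:

        # Rebuild the initrd for the new kernel, forcing the SATA driver in:
        mkinitrd --with=ata_piix -f /boot/initrd-2.6.39.img 2.6.39
        # Confirm the module actually landed in the image (gzipped cpio):
        zcat /boot/initrd-2.6.39.img | cpio -t | grep ata_piix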

  • Problems Installing slapd On Ubuntu Server 11.10

    - by Zach Dziura
    I know that there's a Ubuntu-specific StackExchange website, but I thought I'd ask here because it's a server-specific question. If I'm wrong in my logic... well, you people are better at this than I am! O=) On with the show!

    I'm in the process of installing Oracle Database 11g R2 Standard Edition onto Ubuntu Server 11.10. I found a guide on the Oracle Support Forums that walks you through the process fairly easily. Unfortunately, I'm running into issues installing one particular dependency: slapd. When I go to install it, I get this error message:

        (Reading database ... 64726 files and directories currently installed.)
        Unpacking slapd (from .../slapd_2.4.25-1.1ubuntu4.1_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for ufw ...
        Processing triggers for ureadahead ...
        Setting up slapd (2.4.25-1.1ubuntu4.1) ...
        Usage: slappasswd [options]
          -c format     crypt(3) salt format
          -g            generate random password
          -h hash       password scheme
          -n            omit trailing newline
          -s secret     new password
          -u            generate RFC2307 values (default)
          -v            increase verbosity
          -T file       read file for new password
        Creating initial configuration...
        Loading the initial configuration from the ldif file () failed with
        the following error while running slapadd:
        str2entry: invalid value for attributeType olcRootPW #0 (syntax 1.3.6.1.4.1.1466.115.121.1.15)
        slapadd: could not parse entry (line=1051)
        dpkg: error processing slapd (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         slapd
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    After many Google searches and much forum trawling, I have yet to find a definitive answer as to what's going wrong. The error messages seem straightforward enough, but I have no idea how to debug this. Can anyone offer some assistance? Again, if I'm asking in the wrong place, I apologize. If I'm indeed asking properly, then thank you for any and all help!
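
    A hedged workaround, assuming the failure really is the invalid olcRootPW value the error points at: slapd's postinst builds that attribute from the admin password collected by debconf, and re-running the configuration step with a freshly entered password often clears it. Stock Debian/Ubuntu commands:

        # Re-ask the debconf questions (admin password, domain, etc.)
        # and re-run slapd's failed post-installation step:
        sudo dpkg-reconfigure slapd
        # Or generate a known-good {SSHA} hash by hand to compare with
        # whatever ended up in the generated ldif:
        slappasswd -h {SSHA} -s 'yourpassword'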

  • Copying files between Linux machines with strong authentication but without encryption

    - by Zizzencs
    I'm looking for a suitable program to copy files from one Linux machine to another. The program should be able to do authentication, but it should not do encryption. The reason for the latter is the lack of CPU power to do the encryption. I copy backups from ~70 machines to a single backup server simultaneously. The server is an HP ProLiant DL360 G7 with a 10 Gbps Ethernet connection and an FC storage backend that can do 4 Gbps. Through FTP I can write ~400 MB/sec to the storage (which is about what I want), but through ssh with arcfour I can only do ~100 MB/sec at 100% CPU usage. That's why I want file transfers not to be encrypted. The alternatives I found are not really suitable:

    - rcp: no authentication, forget it.
    - FTP: making the authentication "secure" (at least preventing plain-text password exchange) is possible but not really easy, and I haven't found a way to force any FTP daemon to encrypt the control channel (for the authentication) while not encrypting the data channel (for the transfers).
    - SCP/SFTP: in fairly recent ssh(d) implementations you can't turn off encryption. The best you can do is use the arcfour cipher, but it still uses too much CPU power for my needs.
    - rsync over ssh: same problems as SCP/SFTP.
    - plain rsync: from the rsyncd documentation: "The authentication protocol used in rsync is a 128 bit MD4 based challenge response system. This is fairly weak protection, though (with at least one brute-force hash-finding algorithm publicly available), so if you want really top-quality security, then I recommend that you run rsync over ssh." It's a no-go.

    Is there a protocol/program that can do exactly what I want? (A big plus would be if it also worked on Windows and/or supported rsync-style copying/synchronization, i.e. copying only the differences.)
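
    One hedged option the list doesn't cover: the HPN-SSH patch set adds a NONE cipher that keeps key-based authentication (and an encrypted control channel) but switches the bulk data stream to cleartext, which matches the "strong auth, no encryption" requirement. Assuming both ends run HPN-patched OpenSSH, host and path names illustrative:

        # Authentication stays encrypted; the payload is sent in the clear:
        scp -oNoneEnabled=yes -oNoneSwitch=yes bigfile backupserver:/backups/
        # rsync over the same transport keeps delta copying:
        rsync -av -e "ssh -oNoneEnabled=yes -oNoneSwitch=yes" /data/ backupserver:/backups/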

  • Postfix additional transports - is it working?

    - by threecheeseopera
    I have enabled two additional transports in my Postfix config to deal with recipient domains that demand connection limiting, per the instructions here at Server Fault. However, I have no idea whether this is working or not; in fact, I think it is not, judging by the send speeds I see in the logs. How can I determine whether my additional transports are working? If they aren't, do you have any tips for figuring out why? And do you have any comments on my particular configuration? (Am I a bucket of fail?)

    I have enabled the additional transports in master.cf:

        smtp      inet  n  -  -  -  -   smtpd
        careful   unix  -  -  n  -  10  smtp
            -o smtp_connect_timeout=5
            -o smtp_helo_timeout=5
        cautious  unix  -  -  n  -  -   smtp
            -o smtp_connect_timeout=5
            -o smtp_helo_timeout=5

    I have set up the transport mapping file /etc/postfix/transport:

        hotmail.com    cautious:
        yahoo.com      careful:
        gmail.com      cautious:
        earthlink.net  cautious:
        msn.com        cautious:
        live.com       cautious:
        aol.com        careful:

    I have set up the transport mapping and some connection-limiting settings in main.cf:

        transport_maps = hash:/etc/postfix/transport
        careful_initial_destination_concurrency = 5
        careful_destination_concurrency_limit = 10
        cautious_destination_concurrency_limit = 50

    Finally, I converted the transport file to a .db per the Postfix docs:

        #> postmap /etc/postfix/transport

    And then restarted Postfix. I do see my transport_maps setting when I run postconf, but I do not see any of the transport-specific settings ('careful_xxx_yyy_zzz'). Also, the mail logs do not appear to differ in any way from before. Thanks!!!
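
    Two hedged checks: at least in Postfix of this vintage, postconf only prints parameters it has built-in knowledge of, so per-transport settings such as careful_destination_concurrency_limit never appear in its output even when they are in effect; the map lookup and the mail log are better evidence. Standard Postfix tooling, log path illustrative:

        # Verify the transport table resolves a recipient domain:
        postmap -q hotmail.com hash:/etc/postfix/transport   # expect: cautious:
        # Delivery log lines are tagged with the master.cf service name,
        # so mail routed through the new transports should show up as
        # postfix/careful[pid] or postfix/cautious[pid]:
        grep -E 'postfix/(careful|cautious)' /var/log/mail.log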

  • Are there any viable DNS or LDAP alternatives for distributed key/value storage and retrieval?

    - by makerofthings7
    I'm working on a software app that needs distributed, decentralized name resolution and isn't bound to TCP/IP. More precisely, I need to store a "key" and look up its value, where the key may be a string, a number, or any other realistic data type. Examples:

    - With a phone number, look up a name (or, with an area code, redirect to the server that handles that exchange).
    - With an IP address, get a DNS name or a WHOIS contact (string value).
    - With a string, get an IP (like a DNS TXT or SRV record).

    I'm thinking out of the box here and looking for any software that allows this (more info below). Are there any secure, scalable DNS alternatives that have gained recognition? I could ask on Stack Overflow, but I think the infrastructure groups have better insight on this.

    Edit - more info: I'm looking at Namecoin, the DNS version of Bitcoin, and since that project is faltering, I'm looking at alternative ways to store name-value pairs, with an optional qualifier. I think a name-value pair of global interest is useful, but on a limited scale. Namecoin tried to be too much and ended up becoming nothing. I'm trying to solve that problem by researching alternatives and applying distributed technologies where applicable. Bitcoin/Namecoin offers a distributed hash table, which has some positive aspects, but it is not useful for DNS except for root servers.

  • How do I connect over SSH without the password being requested every time? I already followed some answers here, but it doesn't work

    - by MEM
    Mac OS X Lion 10.7.3.

    1. On the host, I created an authorized_keys file inside the .ssh folder: touch authorized_keys
    2. I copied my public SSH key into the host's .ssh folder: scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub
    3. I placed its contents inside the authorized_keys file: cat mykey.pub >> authorized_keys
    4. Then I removed the mykey.pub file: rm mykey.pub
    5. On my terminal, locally, inside my ~/.ssh folder I ran: ssh-add mykey (note that this is without the .pub extension)
    6. I closed and opened the terminal again.

    When I first connected to this host, it was added to the known_hosts file inside ~/.ssh; I opened known_hosts in pico and the hash is there. Still, every time I connect with ssh [email protected], it requests a password! What am I missing here?

    UPDATE - I've done two more things:

    7. Set the key to be the default identity: if ~/.ssh/config doesn't exist, create it (touch ~/.ssh/config) and place the following line inside: IdentityFile ~/.ssh/yourkeyname. (id_rsa is normally the default key; this tells outgoing SSH connections to use your key as the default identity instead.)
    8. Added a bash process to my ssh-agent: ssh-agent bash; ssh-add ~/.ssh/yourkeyname

    Lisinge's answer helped, but it's not definitive: if we restart the machine, the password gets prompted again! How can we debug this? What can we do here? How can we check where this process is failing?

    UPDATE 2: If I use ssh -v -i <keyfile> [email protected] I get, among other things:

        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        Warning: Identity file yourkeyname not accessible: No such file or directory.

    What does this message refer to? Is the identity file not accessible on the localhost, or is it not accessible on the remote host? Please advise.
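
    A hedged reading of that warning: it is emitted by the local ssh client before any connection is attempted, so the identity file path that was passed (or written into ~/.ssh/config) simply does not resolve on the Mac. IdentityFile needs a real path, not a bare file name, and ssh silently ignores keys whose permissions are too loose. A minimal sketch, host and key names illustrative:

        # ~/.ssh/config -- use a path, not a bare file name:
        Host example
            HostName remote-host.xxx
            User user
            IdentityFile ~/.ssh/mykey

        # Permissions ssh will accept:
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/mykey ~/.ssh/config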

  • Windows service fails to start with local user until password is entered again in logon tab

    - by Nick
    Basically we have a service that uses a local account as its logon. It has all the proper permissions, and everything works fine: the service starts and runs and all is good. Then one day, after rebooting, the service fails to start, and the logs show an incorrect password. Our technicians resolve the issue by simply retyping the password into the "Log On" tab in services.msc. Unfortunately we have not been able to root-cause this. I suspect that the password stored for the service is somehow being lost. Does anyone know where the password might be stored so we can check it? The only activity that seems possibly related is patching with Microsoft security patches, but we have multiple servers running the same service, we have never seen more than one fail at a time, and it's usually a different one each time this occurs. I believe this to be the same issue as this: Windows service fails to start with custom user until started once with local user. But I was unable to add comments there, and it's really old.
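
    For the record, the stored credential is not something you can casually inspect: the Service Control Manager keeps service account passwords as LSA secrets (under HKLM\SECURITY\Policy\Secrets\_SC_<ServiceName>), readable only by SYSTEM. A hedged stopgap while root-causing is to script the re-entry the technicians do by hand; the service and account names below are illustrative:

        rem Re-set the stored logon password for the service (same effect
        rem as retyping it on the Log On tab in services.msc):
        sc config MyService obj= ".\svcaccount" password= "thepassword"
        sc start MyService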

  • Postfix : outgoing mail in TLS for a specific domain

    - by vercetty92
    I am trying to configure Postfix to send mail with TLS (STARTTLS, in fact), but only for a specific destination. I tried "smtp_tls_policy_maps". This is the only line in my main.cf regarding TLS configuration, but it doesn't seem to work. Here is my main.cf file:

        queue_directory = /opt/csw/var/spool/postfix
        command_directory = /opt/csw/sbin
        daemon_directory = /opt/csw/libexec/postfix
        html_directory = /opt/csw/share/doc/postfix/html
        manpage_directory = /opt/csw/share/man
        sample_directory = /opt/csw/share/doc/postfix/samples
        readme_directory = /opt/csw/share/doc/postfix/README_FILES
        mail_spool_directory = /var/spool/mail
        sendmail_path = /opt/csw/sbin/sendmail
        newaliases_path = /opt/csw/bin/newaliases
        mailq_path = /opt/csw/bin/mailq
        mail_owner = postfix
        setgid_group = postdrop
        mydomain = ullink.net
        myorigin = $myhostname
        mydestination = $myhostname, localhost.$mydomain, localhost
        masquerade_domains = vercetty92.net
        alias_maps = dbm:/etc/opt/csw/postfix/aliases
        alias_database = dbm:/etc/opt/csw/postfix/aliases
        transport_maps = dbm:/etc/opt/csw/postfix/transport
        smtp_tls_policy_maps = dbm:/etc/opt/csw/postfix/tls_policy
        inet_interfaces = all
        unknown_local_recipient_reject_code = 550
        relayhost =
        smtpd_banner = $myhostname ESMTP $mail_name
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin xxgdb $daemon_directory/$process_name $process_id & sleep 5

    And here is my tls_policy file:

        gmail.com encrypt protocols=SSLv3:TLSv1 ciphers=high

    I also tried just:

        gmail.com encrypt

    My wish is to use TLS only for the gmail.com domain. With this configuration, I don't see any TLS line in the source of the mail. But if I tell Postfix to use TLS where possible for all destinations with this line, it works:

        smtp_tls_security_level = may

    because I can then see this line in the source of my mail:

        (version=TLSv1/SSLv3 cipher=OTHER);

    But I don't want to attempt TLS for the other domains, only for Gmail. Am I missing something in my conf? (I also tried "hash:/etc/opt/csw/postfix/tls_policy", and it's the same.) Thanks a lot in advance.
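
    A hedged checklist rather than a definitive fix: the policy table is only consulted for the next-hop domain, the compiled map must be rebuilt after every edit, and raising the SMTP client's TLS log level shows whether the policy lookup fires at all. Standard Postfix commands, with paths as in the question:

        # Confirm the lookup resolves (no output means no match):
        postmap -q gmail.com dbm:/etc/opt/csw/postfix/tls_policy
        # Rebuild the compiled map after editing the source file:
        postmap dbm:/etc/opt/csw/postfix/tls_policy
        # In main.cf, make the SMTP client log its TLS decisions:
        #   smtp_tls_loglevel = 1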

  • Finding the current user authenticated by basic auth (Apache)

    - by jtd
    When you log in through a basic auth page, is the username you authenticated as stored anywhere (on the server or the client machine), maybe in an environment variable?

    Background: I have a common web administration page for an e-mail server and I'd like to know who is doing what. When a user successfully logs in via basic auth, I want to be able to identify them and log their actions, so that each time a request is submitted, I can write to a log file. The basic format would be:

        $username ran a $function against $useraccount

    So if a user changed someone's permissions, e.g.:

        Admin-Bob ran a permission change against User-Scott

    That way, if errors occur, I can easily trace back in the log file which actions caused them. I tried checking the %ENV hash to no avail. Any ideas? I don't really want to get into PHP-like sessions, because that would mean scrapping my basic auth, which already gives me a fine degree of control. If I had to code something with sessions, I'd need to implement a system to block users after a maximum number of tries, and so on, which I don't really want to write. I think this is better geared towards Server Fault because it pertains to Apache more than to a programming language; sessions can be done in a myriad of languages.
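
    Apache does expose this: once basic auth succeeds, the user name is available as %u in log formats and as REMOTE_USER in the environment of CGI requests (it is only set for requests that actually passed authentication, which may be why %ENV looked empty when checked outside a protected request). A minimal sketch using stock httpd directives:

        # httpd.conf -- %u logs the authenticated user on every request:
        LogFormat "%h %u %t \"%r\" %>s %b" adminlog
        CustomLog logs/admin_access_log adminlog
        # Inside a CGI handler behind the auth, the same value arrives
        # in the environment as REMOTE_USER ($ENV{REMOTE_USER} in Perl).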

  • Sendmail Configuration for Exchange Server

    - by user119720
    I need help with the sendmail configuration on our Linux machine. Here is the situation: I want to send email to the outside using our Exchange server as the mail relay. But when sending email through the server, it responds with "user unknown" and, to make it worse, bounces all the sent messages back to my localhost. I have already tested our configuration against external mail servers such as Gmail and Yahoo; there the configuration works without any issue and the email reaches the recipient. Most of my sendmail configuration is based on here.

    authinfo file:

        AuthInfo:my_exchange_server "U:my_name" "I:my_email" "P:my_passwd" "M:PLAIN LOGIN"
        AuthInfo:my_exchange_server:587 "U:my_name" "I:my_email" "P:my_passwd" "M:PLAIN LOGIN"

    sendmail.mc:

        FEATURE(authinfo,hash /etc/mail/authinfo.db)
        define(`SMART_HOST', `my_exchange server')dnl
        define('RELAY_MAILER_ARGS', 'TCP $h 587')
        define('ESMTP_MAILER_ARGS', 'TCP $h 587')
        define('confCACERT_PATH', '/usr/share/ssl/certs')
        define('confCACET','/usr/share/ssl/certs/ca-bundle.crt')
        define('confSERVER_CERT','/usr/share/ssl/certs/sendmail.pem')
        define('confSERVER_KEY','/usr/share/ssl/certs/sendmail.pem')
        define('confAUTH_MECHANISMS', 'EXTERNAL GSSAPI DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')
        TRUST_AUTH_MECH('EXTERNAL DIGEST-MD5 CRAM-MD5 LOGIN PLAIN')
        define('confAUTH_OPTIONS, 'A')dnl

    My first assumption is that the problem is an authentication issue, as the Exchange server needs encrypted authentication (DIGEST-MD5). I have already changed this in the authinfo file (from plain login to digest-md5 login), but it still doesn't work. I can also telnet to our Exchange server, so the port is not being blocked by a firewall. Can someone help me out with this problem? I'm really at my wits' end. Thanks.
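
    One hedged observation about the .mc as pasted: m4 only treats `...' (leading backquote, trailing straight quote) as quoting, so define lines written with plain straight quotes on both sides, as most of the ones above are, never take effect; the SMART_HOST line is the only correctly quoted one. If the mailer args and auth settings are being silently dropped, that alone could produce relay failures. The usual form looks like this:

        dnl sendmail.mc -- note the `...' quoting on every define
        FEATURE(`authinfo', `hash -o /etc/mail/authinfo.db')dnl
        define(`SMART_HOST', `my_exchange_server')dnl
        define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
        define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl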

  • Why didn't database partitioning work? An extract from thedailywtf.com

    - by questzen
    Original link: http://thedailywtf.com/Articles/The-Certified-DBA.aspx. Article summary: the DBA suggests an approach involving rigorous partitioning, 10 partitions per disk (3 actual disks and 3 RAID). The stats show that performance is non-optimal. The DBA then suggests an alternative of 1 partition per disk (with more disks added). This also fails. The sysadmin then sets up a single disk with a single partition and saves the day. The size of the disks was not mentioned, but given typical disk sizes today (on the order of 100 GB) the partitions would be huge, and it surprises me that the single disk with a single partition outperformed the rest. Initially I suspected that the data was segregated and hence read faster, but then how come the performance didn't degrade over time with all the inserts and updates happening? I saw this on Reddit, but the explanation there was almost entirely centered on spindles and platters, which the article never mentions. Is there any other reason? I can only guess that the tables were using an incorrect hash distribution, causing non-uniform allocation across disks (wrong partitioning), which would increase fetch times. Any thoughts?

  • Finding Webserver Vulnerability

    - by Brent
    We operate a webserver farm hosting around 300 websites. Yesterday morning a script placed .htaccess files owned by www-data (the Apache user) in every directory under the document_root of most (but not all) sites. The content of the .htaccess file was this:

        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^http://
        RewriteCond %{HTTP_REFERER} !%{HTTP_HOST}
        RewriteRule . http://84f6a4eef61784b33e4acbd32c8fdd72.com/%{REMOTE_ADDR}

    Googling for that URL (which is the MD5 hash of "antivirus") I discovered that this same thing happened all over the internet, and I am looking for somebody who has already dealt with this and determined where the vulnerability is. I have searched most of our logs but haven't found anything conclusive yet. Have others who experienced the same thing gotten further than I have in pinpointing the hole? So far we have determined:

    - the changes were made as www-data, so Apache or its plugins are likely the culprit
    - all the changes were made within 15 minutes of each other, so it was probably automated
    - since our websites have widely varying domain names, I think a single vulnerability on one site was responsible (rather than a common vulnerability on every site)
    - if an .htaccess file already existed and was writeable by www-data, then the script was kind and simply appended the above lines to the end of the file (making it easy to reverse)

    Any more hints would be appreciated.
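
    One hedged way to narrow the entry point, assuming the attacker used one of the sites' own upload or code paths: the writes happened as www-data inside a known 15-minute window, so correlating filesystem mtimes with POST requests in that window usually shortlists the vulnerable vhost. Paths and timestamps below are illustrative (the find syntax needs GNU find):

        # Files www-data created or changed in the attack window:
        find /var/www -user www-data -newermt '2011-01-10 06:00' \
             ! -newermt '2011-01-10 06:15'
        # POST requests across all vhosts in the same window:
        grep -h '"POST ' /var/log/apache2/*access*.log | grep '10/Jan/2011:06:0'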

  • Expire Files In A Folder: Delete Files After x Days

    - by Brett G
    I'm looking to make a "drop folder" on a Windows shared drive that is accessible to everyone. I'd like files to be deleted automagically if they sit in the folder for more than X days. However, it seems that all the methods I've found to do this use the last modified date, last access time, or creation date of a file. I'm trying to make this a folder that a user can drop files in to share with somebody. If someone copies or moves files into it, I'd like the clock to start ticking at that point. However, the last modified date and creation date of a file will not be updated unless someone actually modifies the file, and the last access time is updated too frequently; it seems that just opening a directory in Windows Explorer will update the last access time. Does anyone know of a solution to this? I'm thinking that cataloging the hash of files on a daily basis and then expiring files based on hashes older than a certain date might be a solution, but taking hashes of files can be time-consuming. Any ideas would be greatly appreciated!

    Note: I've already looked at quite a lot of answers on here and looked into File Server Resource Monitor, PowerShell scripts, batch scripts, etc. They still use the last access time, last modified time, or creation time, which, as described, do not fit the needs above.
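
    A sketch of the cataloging idea that sidesteps the hashing cost: instead of hashing content, record the first time each (path, size, mtime) triple is seen in a small catalog, and delete files whose first-seen stamp is older than the cutoff. Copies and moves get a fresh entry (and a fresh clock) the day they appear, regardless of the timestamps they carry. Folder path, catalog location, and retention below are illustrative; run it daily, e.g. from Task Scheduler:

        import json
        import time
        from pathlib import Path

        DROP = Path(r"\\server\dropfolder")              # illustrative share path
        CATALOG = Path(r"C:\scripts\drop_catalog.json")  # illustrative
        MAX_AGE_DAYS = 14

        now = time.time()
        seen = json.loads(CATALOG.read_text()) if CATALOG.exists() else {}
        kept = {}

        for f in DROP.rglob("*"):
            if not f.is_file():
                continue
            st = f.stat()
            # Identity without hashing: path + size + mtime. Editing a file
            # changes its key, which restarts its clock.
            key = f"{f}|{st.st_size}|{int(st.st_mtime)}"
            first_seen = seen.get(key, now)   # new files start the clock today
            if now - first_seen > MAX_AGE_DAYS * 86400:
                f.unlink()                    # expired: sat unchanged too long
            else:
                kept[key] = first_seen

        CATALOG.write_text(json.dumps(kept))  # forget entries for vanished files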

  • Please explain these mongo statistics

    - by sivann
    My setup: I have 2 hosts, each with 2 shards. host1 has 2 shards and is the master of the replica sets; host2 has the secondaries of the 2 shards. There is also a 3rd host that acts as an arbiter.

        host1: shard1 (repset1), shard2 (repset2)
        host2: shard1 (repset1), shard2 (repset2)

    I have 50 threads writing randomly to both shards (using a hash) via mongos, with the REPLICA_SAFE WriteConcern set on each insert. The questions:

    1. mongostat displays about 90% locked for both shards on host1 and about 1% locked on host2. Since I use REPLICA_SAFE, which supposedly writes to both servers, shouldn't the locks be the same?
    2. mongostat reports qr=30 for both shards of host1, and qw=0 always. Since I perform only writes, how is this possible? Moreover, on host2 all queues are reported as 0.
    3. Faults are about the same on all shards/hosts (around 80).
    4. netIn/netOut on the secondaries (host2) are always about 200 bytes/sec. Too low.
    5. mongos has 53 connections, host1's shards have 71 and 71, and host2's shards have 9 and 8. How is this?

    Please answer whatever you can. Thanks!

  • How many guesses per second are possible against an encrypted disk? [closed]

    - by HappyDeveloper
    I understand that guesses per second depend on the hardware and the encryption algorithm, so I don't expect an absolute number as an answer. For example, with an average machine you can make a lot (thousands?) of guesses per second against a hash created with a single MD5 round, because MD5 is fast, making brute-force and dictionary attacks a real danger for most passwords. But if instead you use bcrypt with enough rounds, you can slow the attack down to 1 guess per second, for example.

    1. So how does disk encryption usually work? This is how I imagine it; tell me if it is close to reality: when I enter the passphrase, it is hashed with a slow algorithm to generate a key (always the same?). Because this is slow, brute force is not a good approach for breaking it. Then, with the generated key, the disk is decrypted on the fly very fast, so there is no significant performance loss.
    2. How can I test this on my own machine? I want to calculate the guesses per second my machine can make.
    3. How many guesses per second are possible against an encrypted disk with the fastest PC available so far?
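
    For question 2, a hedged pointer: LUKS's own tooling measures exactly this on the local machine. The key-derivation (PBKDF2) iterations-per-second figure is a rough proxy for the passphrase-guess rate an attacker gets per core on comparable hardware, and the cipher throughput lines show why on-the-fly decryption costs so little at runtime:

        # Measures PBKDF2 (key-derivation) speed and cipher throughput:
        cryptsetup benchmark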

  • Mysql InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products to which I need to apply a bunch of updates really quickly. The table has 30 columns with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated, as they take a couple of days to compute. That table is in the format [ product_id int(10), update_value int(10) ].

    The strategy I'm taking to issue these 17 million updates quickly is to load all of the updates into memory in a Ruby script and group them into a hash of arrays, so that each update_value is a key and each array is a list of sorted product_ids:

        { 150 => [1,2,3,4,5,6],
          160 => [7,8,9,10] }

    Updates are then issued in the form:

        UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
        UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);

    I'm pretty sure I'm doing this correctly in the sense that issuing the updates on sorted batches of product_ids should be optimal for MySQL/InnoDB. I'm hitting a weird issue, though: when I was testing with ~13 million records, the updates only took around 45 minutes; now I'm testing with ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree I'm seeing. Any advice on how I can speed this up, or on what could be slowing me down with this larger record set? As far as server specs go, they're pretty good: heaps of memory/CPU, and the whole DB should fit into memory with plenty of room to grow.
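
    A hedged alternative to the IN-list batches, assuming the pre-calculated updates can be loaded into an indexed InnoDB table on the same server: a single join-based UPDATE lets MySQL drive the whole pass itself, avoiding per-statement parse overhead and ever-growing IN lists. If the slowdown tracks data size, it is also worth checking whether the working set still fits in innodb_buffer_pool_size. Table and column names follow the question; product_updates is an assumed name for the pre-calculated table:

        -- product_updates(product_id PRIMARY KEY, update_value)
        -- already holds the 17M pre-calculated rows:
        UPDATE product p
        JOIN product_updates u ON u.product_id = p.product_id
        SET p.update_value = u.update_value;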

  • "Enumeration yielded no results" When using Query Syntax in C#

    - by Shantanu Gupta
    I have created this query to fetch some results from the database. Here is my table structure and what exactly is happening. Two DataTables are used: DtMapGuestDepartment as table 1, and DtDepartment as table 2.

        var dept_list =
            from map in DtMapGuestDepartment.AsEnumerable()
            where map.Field<Nullable<long>>("GUEST_ID") == DRowGuestPI.Field<Nullable<long>>("PK_GUEST_ID")
            join dept in DtDepartment.AsEnumerable()
                on map.Field<Nullable<long>>("DEPARTMENT_ID") equals dept.Field<Nullable<long>>("DEPARTMENT_ID")
            select dept.Field<string>("DEPARTMENT_ID");

    I am performing this query on DataTables and expect it to return a DataTable. I also want to select distinct departments from table 1, which will be my next quest; please answer that too if possible.

  • Is there any Google Maps API function to concatenate an address string from the AddressDetails structure?

    - by Vadim
    Hello! I'm using Google Maps' GClientGeocoder to reverse-geocode map coordinates into a string address, exactly as shown in Google's example here: http://code.google.com/apis/ajax/playground/?exp=maps#geocoding_reverse. However, I would like to extract LocalityName (place.AddressDetails.Country.AdministrativeArea.Locality.LocalityName) from place.address. The straightforward way would be to join all AddressDetails elements, excluding LocalityName. However, the order of the structure's elements in the final string representation depends on the geographical location. For example:

    - Order for an Australian city: ThoroughfareName + ", " + LocalityName + " " + AdministrativeAreaName + " " + PostalCodeNumber + ", " + CountryName
    - Order for a Russian city: CountryName + ", " + PostalCodeNumber + ", " + LocalityName + ", " + ThoroughfareName

    Moreover, PostalCodeNumber was not supplied in AddressDetails for the last example. Please help!

  • Fluent NHibernate: mapping complex many-to-many (with additional columns) and setting fetch

    - by HackedByChinese
    I need a Fluent NHibernate mapping that will fulfill the following (if nothing else, I'll also take the appropriate NHibernate XML mapping and reverse-engineer it).

    DETAILS: I have a many-to-many relationship between two entities, Parent and Child, implemented with an additional table that stores the identities of the Parent and Child. However, I also need two additional columns on that mapping table that provide more information about the relationship. This is roughly how I've defined my types, at least the relevant parts (where Entity is some base type that provides an Id property and checks for equivalence based on that Id):

        public class Parent : Entity
        {
            public Parent()
            {
                Name = String.Empty;
                Rights = new HashSet<RightHandSide>();
            }

            public virtual IList<ParentChildRelationship> Children { get; protected set; }

            public virtual void AddChildRelationship(Child child, int customerId)
            {
                var relationship = new ParentChildRelationship
                {
                    CustomerId = customerId,
                    Parent = this,
                    Child = child
                };
                if (Children == null) Children = new List<ParentChildRelationship>();
                if (Children.Contains(relationship)) return;
                relationship.Sequence = Children.Count;
                Children.Add(relationship);
            }
        }

        public class Child : Entity
        {
            // child doesn't care about its relationships
        }

        public class ParentChildRelationship
        {
            public int CustomerId { get; set; }
            public Parent Parent { get; set; }
            public Child Child { get; set; }
            public int Sequence { get; set; }

            public override bool Equals(object obj)
            {
                if (ReferenceEquals(null, obj)) return false;
                if (ReferenceEquals(this, obj)) return true;
                var other = obj as ParentChildRelationship;
                if (other == null) return false;
                return (CustomerId == other.CustomerId
                    && Parent == other.Parent
                    && Child == other.Child);
            }

            public override int GetHashCode()
            {
                unchecked
                {
                    int result = CustomerId;
                    result = Parent == null ? 0 : (result*397) ^ Parent.GetHashCode();
                    result = Child == null ? 0 : (result*397) ^ Child.GetHashCode();
                    return result;
                }
            }
        }

    The tables in the database look approximately like this (assume primary/foreign keys and forgive the syntax):

        create table Parent ( id int identity(1,1) not null )
        create table Child ( id int identity(1,1) not null )
        create table ParentChildRelationship (
            customerId int not null,
            parent_id  int not null,
            child_id   int not null,
            sequence   int not null
        )

    I'm OK with Parent.Children being a lazy-loaded property. However, ParentChildRelationship should eager-load ParentChildRelationship.Child, and I want it to use a join when it eager-loads. When accessing Parent.Children, NHibernate should generate a query equivalent to:

        SELECT * FROM ParentChildRelationship rel
        LEFT OUTER JOIN Child ch ON rel.child_id = ch.id
        WHERE rel.parent_id = ?

    OK, so to do that I have mappings that look like this:

        public class ParentMap : ClassMap<Parent>
        {
            public ParentMap()
            {
                Table("Parent");
                Id(c => c.Id).GeneratedBy.Identity();
                HasMany(c => c.Children).KeyColumn("parent_id");
            }
        }

        public class ChildMap : ClassMap<Child>
        {
            public ChildMap()
            {
                Table("Child");
                Id(c => c.Id).GeneratedBy.Identity();
            }
        }

        public class ParentChildRelationshipMap : ClassMap<ParentChildRelationship>
        {
            public ParentChildRelationshipMap()
            {
                Table("ParentChildRelationship");
                CompositeId()
                    .KeyProperty(c => c.CustomerId, "customerId")
                    .KeyReference(c => c.Parent, "parent_id")
                    .KeyReference(c => c.Child, "child_id");
                Map(c => c.Sequence).Not.Nullable();
            }
        }

    So, in my test, if I try to get myParentRepo.Get(1).Children, it does in fact get me all the relationships and, as I access them from the relationship, the Child objects (for example, I can grab them all with parent.Children.Select(r => r.Child).ToList()). However, the SQL that NHibernate generates is inefficient: when I access parent.Children, NHibernate does a SELECT * FROM ParentChildRelationship WHERE parent_id = 1 and then a SELECT * FROM Child WHERE id = ? for each child in each relationship. I understand why NHibernate is doing this, but I can't figure out how to set up the mapping to make NHibernate query the way I mentioned above.

  • Django/Mod_WSGI error: UnboundLocalError: local variable 'resolver' referenced before assignment

    - by ycseattle
    Hello, I've set up Django with mod_wsgi and run into this error. I thought maybe sys.path was not set up correctly, but I tried everything I could think of with no luck. Any suggestions? The following is the apache2 log for the error:

        mod_wsgi (pid=2579): Exception occurred processing WSGI script '/home/myapp/myapp.wsgi'.
        Traceback (most recent call last):
          File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/wsgi.py", line 241, in __call__
            response = self.get_response(request)
          File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 142, in get_response
            return self.handle_uncaught_exception(request, resolver, exc_info)
        UnboundLocalError: local variable 'resolver' referenced before assignment

    The following is the content of myapp.wsgi:

        import os
        import sys

        # put the Django project on sys.path
        sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../")))
        os.environ["DJANGO_SETTINGS_MODULE"] = "photopier.settings"
        #os.environ["PYTHONPATH"] = "/home"

        from django.core.handlers.wsgi import WSGIHandler
        application = WSGIHandler()

  • How can I eager-load a child collection mapped to a non-primary key in NHibernate 2.1.2?

    - by David Rubin
    Hi, I have two objects with a many-to-many relationship between them, as follows:

        public class LeftHandSide
        {
            public LeftHandSide()
            {
                Name = String.Empty;
                Rights = new HashSet<RightHandSide>();
            }

            public int Id { get; set; }
            public string Name { get; set; }
            public ICollection<RightHandSide> Rights { get; set; }
        }

        public class RightHandSide
        {
            public RightHandSide()
            {
                OtherProp = String.Empty;
                Lefts = new HashSet<LeftHandSide>();
            }

            public int Id { get; set; }
            public string OtherProp { get; set; }
            public ICollection<LeftHandSide> Lefts { get; set; }
        }

    I'm using a legacy database, so my mappings look like the following. Notice that LeftHandSide and RightHandSide are associated by a different column than RightHandSide's primary key.

        <class name="LeftHandSide" table="[dbo].[lefts]" lazy="false">
          <id name="Id" column="ID" unsaved-value="0">
            <generator class="identity" />
          </id>
          <property name="Name" not-null="true" />
          <set name="Rights" table="[dbo].[lefts2rights]">
            <key column="leftId" />
            <!-- THIS IS THE IMPORTANT BIT: I MUST USE PROPERTY-REF -->
            <many-to-many class="RightHandSide" column="rightProp" property-ref="OtherProp" />
          </set>
        </class>

        <class name="RightHandSide" table="[dbo].[rights]" lazy="false">
          <id name="Id" column="id" unsaved-value="0">
            <generator class="identity" />
          </id>
          <property name="OtherProp" column="otherProp" />
          <set name="Lefts" table="[dbo].[lefts2rights]">
            <!-- THIS IS THE IMPORTANT BIT: I MUST USE PROPERTY-REF -->
            <key column="rightProp" property-ref="OtherProp" />
            <many-to-many class="LeftHandSide" column="leftId" />
          </set>
        </class>

    The problem comes when I go to do a query. This works just fine:

        LeftHandSide lhs = _session.CreateCriteria<LeftHandSide>()
            .Add(Expression.IdEq(13))
            .UniqueResult<LeftHandSide>();

    But this throws an exception (see below):

        LeftHandSide lhs = _session.CreateCriteria<LeftHandSide>()
            .Add(Expression.IdEq(13))
            .SetFetchMode("Rights", FetchMode.Join)
            .UniqueResult<LeftHandSide>();

    Interestingly, this seems to be perfectly fine as well:

        RightHandSide rhs = _session.CreateCriteria<RightHandSide>()
            .Add(Expression.IdEq(127))
            .SetFetchMode("Lefts", FetchMode.Join)
            .UniqueResult<RightHandSide>();

    The exception:

        NHibernate.Exceptions.GenericADOException
        Message: Error performing LoadByUniqueKey[SQL: SQL not available]
        Source: NHibernate
        StackTrace:
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Type\EntityType.cs(563,0): at NHibernate.Type.EntityType.LoadByUniqueKey(String entityName, String uniqueKeyPropertyName, Object key, ISessionImplementor session)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Type\EntityType.cs(428,0): at NHibernate.Type.EntityType.ResolveIdentifier(Object value, ISessionImplementor session, Object owner)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Type\EntityType.cs(300,0): at NHibernate.Type.EntityType.NullSafeGet(IDataReader rs, String[] names, ISessionImplementor session, Object owner)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Persister\Collection\AbstractCollectionPersister.cs(695,0): at NHibernate.Persister.Collection.AbstractCollectionPersister.ReadElement(IDataReader rs, Object owner, String[] aliases, ISessionImplementor session)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Collection\Generic\PersistentGenericSet.cs(54,0): at NHibernate.Collection.Generic.PersistentGenericSet`1.ReadFrom(IDataReader rs, ICollectionPersister role, ICollectionAliases descriptor, Object owner)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(706,0): at NHibernate.Loader.Loader.ReadCollectionElement(Object optionalOwner, Object optionalKey, ICollectionPersister persister, ICollectionAliases descriptor, IDataReader rs, ISessionImplementor session)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(385,0): at NHibernate.Loader.Loader.ReadCollectionElements(Object[] row, IDataReader resultSet, ISessionImplementor session)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(326,0): at NHibernate.Loader.Loader.GetRowFromResultSet(IDataReader resultSet, ISessionImplementor session, QueryParameters queryParameters, LockMode[] lockModeArray, EntityKey optionalObjectKey, IList hydratedObjects, EntityKey[] keys, Boolean returnProxies)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(453,0): at NHibernate.Loader.Loader.DoQuery(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(236,0): at NHibernate.Loader.Loader.DoQueryAndInitializeNonLazyCollections(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(1649,0): at NHibernate.Loader.Loader.DoList(ISessionImplementor session, QueryParameters queryParameters)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(1568,0): at NHibernate.Loader.Loader.ListIgnoreQueryCache(ISessionImplementor session, QueryParameters queryParameters)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Loader.cs(1562,0): at NHibernate.Loader.Loader.List(ISessionImplementor session, QueryParameters queryParameters, ISet`1 querySpaces, IType[] resultTypes)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Loader\Criteria\CriteriaLoader.cs(73,0): at NHibernate.Loader.Criteria.CriteriaLoader.List(ISessionImplementor session)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\SessionImpl.cs(1936,0): at NHibernate.Impl.SessionImpl.List(CriteriaImpl criteria, IList results)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\CriteriaImpl.cs(246,0): at NHibernate.Impl.CriteriaImpl.List(IList results)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\CriteriaImpl.cs(237,0): at NHibernate.Impl.CriteriaImpl.List()
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\CriteriaImpl.cs(398,0): at NHibernate.Impl.CriteriaImpl.UniqueResult()
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Impl\CriteriaImpl.cs(263,0): at NHibernate.Impl.CriteriaImpl.UniqueResult[T]()
          D:\proj\CMS3\branches\nh_auth\DomainModel2Tests\Authorization\TempTests.cs(46,0): at CMS.DomainModel.Authorization.TempTests.Test1()

        Inner Exception:
        System.Collections.Generic.KeyNotFoundException
        Message: The given key was not present in the dictionary.
        Source: mscorlib
        StackTrace:
          at System.ThrowHelper.ThrowKeyNotFoundException()
          at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Persister\Entity\AbstractEntityPersister.cs(2047,0): at NHibernate.Persister.Entity.AbstractEntityPersister.GetAppropriateUniqueKeyLoader(String propertyName, IDictionary`2 enabledFilters)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Persister\Entity\AbstractEntityPersister.cs(2037,0): at NHibernate.Persister.Entity.AbstractEntityPersister.LoadByUniqueKey(String propertyName, Object uniqueKey, ISessionImplementor session)
          c:\opt\nhibernate\2.1.2\source\src\NHibernate\Type\EntityType.cs(552,0): at NHibernate.Type.EntityType.LoadByUniqueKey(String entityName, String uniqueKeyPropertyName, Object key, ISessionImplementor session)

    I'm using NHibernate 2.1.2 and I've been debugging into the NHibernate source, but I'm coming up empty. Any suggestions? Thanks so much!

  • pre-commit hook in svn: could not be translated from the native locale to UTF-8

    - by Alexandre Moraes
    Hi everybody, I have a problem with my pre-commit hook. This hook tests whether a file is locked when the user commits. When a bad condition happens, it should output that another user is locking the file or, if nobody is locking it, the message "you are not locking this file (file's name)". The error happens when the file's name has some Latin character like "ç", and TortoiseSVN shows me this in the output:

        Commit failed (details follow):
        Commit blocked by pre-commit hook (exit code 1) with output:
        [Error output could not be translated from the native locale to UTF-8.]

    Do you know how I can solve this? Thanks, Alexandre

    My shell script is here:

        #!/bin/sh
        REPOS="$1"
        TXN="$2"
        export LANG="en_US.UTF-8"
        /app/svn/hooks/ensure-has-need-lock.pl "$REPOS" "$TXN"
        if [ $? -ne 0 ]; then exit 1; fi
        exit 0

    And my Perl script is here:

        #!/usr/bin/env perl

        # Turn on warnings the best way depending on the Perl version.
        BEGIN {
            if ( $] >= 5.006_000) {
                require warnings;
                import warnings;
            } else {
                $^W = 1;
            }
        }

        use strict;
        use Carp;

        &usage unless @ARGV == 2;
        my $repos = shift;
        my $txn = shift;
        my $svnlook = "/usr/local/bin/svnlook";
        my $user;
        my $ok = 1;

        foreach my $program ($svnlook) {
            if (-e $program) {
                unless (-x $program) {
                    warn "$0: required program '$program' is not executable, edit $0.\n";
                    $ok = 0;
                }
            } else {
                warn "$0: required program '$program' does not exist, edit $0.\n";
                $ok = 0;
            }
        }
        exit 1 unless $ok;

        unless (-e $repos) {
            &usage("$0: repository directory '$repos' does not exist.");
        }
        unless (-d $repos) {
            &usage("$0: repository directory '$repos' is not a directory.");
        }

        foreach my $user_tmp (&read_from_process($svnlook, 'author', $repos, '-t', $txn)) {
            $user = $user_tmp;
        }

        my @errors;
        foreach my $transaction (&read_from_process($svnlook, 'changed', $repos, '-t', $txn)) {
            if ($transaction =~ /^U. (.*[^\/])$/) {
                my $file = $1;
                my $err = 0;
                foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)) {
                    $err = 1;
                    if ($locks =~ /Owner: (.*)/) {
                        # string comparison; the original used the numeric != here
                        if ($1 ne $user) {
                            push @errors, "$file : You are not locking this file!";
                        }
                    }
                }
                if ($err == 0) {
                    push @errors, "$file : You are not locking this file!";
                }
            }
            elsif ($transaction =~ /^D. (.*[^\/])$/) {
                my $file = $1;
                foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)) {
                    push @errors, "$file : cannot delete locked files";
                }
            }
            elsif ($transaction =~ /^A. (.*[^\/])$/) {
                my $needs_lock;
                my $path = $1;
                foreach my $prop (&read_from_process($svnlook, 'proplist', $repos, '-t', $txn, '--verbose', $path)) {
                    if ($prop =~ /^\s*svn:needs-lock : (\S+)/) {
                        $needs_lock = $1;
                    }
                }
                unless ($needs_lock) {
                    push @errors, "$path : svn:needs-lock is not set. Please ask TCC for support.";
                }
            }
        }

        if (@errors) {
            warn "$0:\n\n", join("\n", @errors), "\n\n";
            exit 1;
        } else {
            exit 0;
        }

        sub usage {
            warn "@_\n" if @_;
            die "usage: $0 REPOS TXN-NAME\n";
        }

        sub safe_read_from_pipe {
            unless (@_) {
                croak "$0: safe_read_from_pipe passed no arguments.\n";
            }
            print "Running @_\n";
            my $pid = open(SAFE_READ, '-|');
            unless (defined $pid) {
                die "$0: cannot fork: $!\n";
            }
            unless ($pid) {
                open(STDERR, ">&STDOUT") or die "$0: cannot dup STDOUT: $!\n";
                exec(@_) or die "$0: cannot exec '@_': $!\n";
            }
            my @output;
            while (<SAFE_READ>) {
                chomp;
                push(@output, $_);
            }
            close(SAFE_READ);
            my $result = $?;
            my $exit = $result >> 8;
            my $signal = $result & 127;
            my $cd = $result & 128 ? "with core dump" : "";
            if ($signal or $cd) {
                warn "$0: pipe from '@_' failed $cd: exit=$exit signal=$signal\n";
            }
            if (wantarray) {
                return ($result, @output);
            } else {
                return $result;
            }
        }

        sub read_from_process {
            unless (@_) {
                croak "$0: read_from_process passed no arguments.\n";
            }
            my ($status, @output) = &safe_read_from_pipe(@_);
            if ($status) {
                if (@output) {
                    die "$0: '@_' failed with this output:\n", join("\n", @output), "\n";
                } else {
                    die "$0: '@_' failed with no output.\n";
                }
            } else {
                return @output;
            }
        }
