Search Results

Search found 22192 results on 888 pages for 'unique key'.


  • INSERT INTO othertbl SELECT * FROM tbl

    - by Harry
    Current situation: INSERT INTO othertbl SELECT * FROM tbl WHERE id = '1' So I want to copy a record from tbl to othertbl. Both tables have an auto-incremented unique index. The new record should get a new index value rather than the index of the originating record; otherwise the copy fails with an "index not unique" error. A solution would be to not use the *, but since these tables have quite a few columns that gets ugly. So, is there a better way to copy a record into othertbl so that it gets a new auto-incremented index, without having to write out all the columns in the query and use a NULL value for the index? Hope it makes sense.
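    A minimal sketch of one way around this: read the column list from the table's schema, drop the auto-increment key, and build the INSERT ... SELECT from that list, so the destination table assigns a fresh id. The example below uses Python with SQLite purely so it is self-contained; the question is about MySQL, where a MySQL driver and SHOW COLUMNS would replace PRAGMA table_info. Table and column names (tbl, othertbl, id) are taken from the question.

        # Hypothetical sketch: copy one row between two tables that share a schema,
        # letting the destination's auto-increment key assign a new id.
        import sqlite3

        def copy_row(conn, src, dst, pk, pk_value):
            # Column names from the source table, minus the primary key.
            cols = [row[1] for row in conn.execute(f"PRAGMA table_info({src})")]
            cols = [c for c in cols if c != pk]
            col_list = ", ".join(cols)
            sql = (f"INSERT INTO {dst} ({col_list}) "
                   f"SELECT {col_list} FROM {src} WHERE {pk} = ?")
            conn.execute(sql, (pk_value,))
            conn.commit()

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE tbl (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, qty INTEGER)")
        conn.execute("CREATE TABLE othertbl (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, qty INTEGER)")
        conn.execute("INSERT INTO tbl (name, qty) VALUES ('widget', 3)")
        copy_row(conn, "tbl", "othertbl", "id", 1)
        print(conn.execute("SELECT * FROM othertbl").fetchall())  # othertbl assigns its own id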

    Read the article

  • is it possible to create a multi-project template that references n number of existing projects and

    - by jcollum
    The situation: I need to create 40+ solutions that all reference 3 projects and each have one project that is unique to them. I'd like to create a multi-project template that does this, but from what I've read it looks like it's very difficult or impossible (there is a related SO question, but it doesn't answer this). I want my solution to look like this (names changed, of course). These three are used by all solutions created under this "family": MyCompany.Extensions MyCompany.MyProject.Tests.Shared MyCompany.MyProject.Scripts This one is the one that makes each solution unique (123, 124, 125, etc.): MyCompany.MyProject.Tests.Unit123 Is it possible to set up a multi-project template that will generate this structure? References: MSDN Create Multi Project Templates

    Read the article

  • Should I share UI for objects that use common fields?

    - by wb
    I have a parent class that holds all of the fields that are common to all device types. From that, I have a few derived classes that each hold their unique fields. Say I have the device types "Switch" and "Transformer"; both derived classes have only 2-3 unique fields of their own. When doing the UI design (Windows Forms) in this case, should I create a separate form for each device type, or create a user control with all the fields that are shared among all devices? Thank you.

    Read the article

  • Is it poor practice to identify objects via an enumeration property, instead of using GetType()?

    - by James
    I have a collection of objects that all implement one (custom) interface: IAuditEvent. Each object can be stored in a database and a unique numeric id is used for each object type. The method that stores the objects loops around a List<IAuditEvent>, so it needs to know the specific type of each object in order to store the correct numeric id. Is it poor practice to have an enumeration property on IAuditEvent so that each object can identify itself with a unique enumeration value? I can see that the simplest solution would be to write a method that translates a Type into an integer, but what if I need an enumeration of audit events for another purpose? Would it still be wrong to have my enumeration property on IAuditEvent?
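    For comparison only, here is a rough sketch of the alternative the poster mentions: keeping the type-to-number translation in the storage code as a plain lookup table instead of a property on each event. The question itself is about C#; Python is used here only to keep the sketch short, and the class names and id values are invented for the example.

        # Illustration only: the storage layer owns the type -> numeric id mapping,
        # so the event classes themselves stay free of storage concerns.
        class LoginAuditEvent:
            pass

        class LogoutAuditEvent:
            pass

        AUDIT_EVENT_IDS = {
            LoginAuditEvent: 1,
            LogoutAuditEvent: 2,
        }

        def store(events):
            for event in events:
                type_id = AUDIT_EVENT_IDS[type(event)]  # lookup instead of a property on the event
                print(f"storing {type(event).__name__} as type id {type_id}")

        store([LoginAuditEvent(), LogoutAuditEvent()])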

    Read the article

  • SVN moving duplicate directory structures onto one another

    - by Nash0
    I have duplicate directory structures in two locations that I need to merge together in an svn repository. By "merge" I mean I want all files and folders that are unique to structure b to be moved into structure a. When I try to do this using svn move I get the error: svn: Path 'com' already exists. The folders look like this: src\com (many more files and directories) and src\java\com (some files and folders; some folders overlap, but all files are unique), where src\com is structure a and src\java\com is structure b.

    Read the article

  • Implementing "select distinct ... from ..." over a list of Python dictionaries

    - by daveslab
    Hi folks, here is my problem: I have a list of Python dictionaries of identical form, meant to represent the rows of a table in a database, something like this: [ {'ID': 1, 'NAME': 'Joe', 'CLASS': '8th', ... }, {'ID': 1, 'NAME': 'Joe', 'CLASS': '11th', ... }, ...] I have already written a function to get the unique values for a particular field in this list of dictionaries, which was trivial. That function implements something like: select distinct NAME from ... However, I want to be able to get the unique combinations of multiple fields, similar to: select distinct NAME, CLASS from ..., which I am finding to be non-trivial. Is there an algorithm or built-in Python function to help me with this quandary? Before you suggest loading the CSV files into a SQLite table or something similar: that is not an option in the environment I'm in, and trust me, that was my first thought.
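    A minimal sketch of the multi-field distinct, assuming the field values are hashable: build a tuple key from the requested fields and keep the first occurrence of each combination. The sample data mirrors the question; field names are taken from it.

        def distinct(rows, fields):
            # Emulates "select distinct <fields>" over a list of dicts.
            seen = set()
            result = []
            for row in rows:
                key = tuple(row[f] for f in fields)
                if key not in seen:
                    seen.add(key)
                    result.append(key)
            return result

        rows = [
            {'ID': 1, 'NAME': 'Joe', 'CLASS': '8th'},
            {'ID': 1, 'NAME': 'Joe', 'CLASS': '11th'},
            {'ID': 2, 'NAME': 'Joe', 'CLASS': '8th'},
        ]
        print(distinct(rows, ['NAME', 'CLASS']))  # [('Joe', '8th'), ('Joe', '11th')]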

    Read the article

  • What is the best practice for relational database tables in MySQL?

    - by George
    Hi, I know there is a lot of info on MySQL out there, but I was not really able to find an answer to this specific and actually simple question. Let's say I have two tables: USERS (with many fields, e.g. name, street, email, etc.) and GROUPS (also with many fields). The relation is (I guess?) 1:n, that is, ONE user can be a member of MANY groups. What I did is create another table, named USERS_GROUPS_REL. This table has only two fields: us_id (unique key of table USERS) and gr_id (unique key of table GROUPS). In PHP I do a query with a join. Is this "best practice" or is there a better way? Thankful for any hint!
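    That third table is the standard junction (many-to-many) table, and a join across it is the usual way to query memberships. A minimal sketch follows, using SQLite via Python only so the example runs stand-alone; the field names us_id and gr_id come from the question, and the same schema and JOIN apply to MySQL.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users  (us_id INTEGER PRIMARY KEY, name TEXT);
            CREATE TABLE "groups" (gr_id INTEGER PRIMARY KEY, title TEXT);
            CREATE TABLE users_groups_rel (
                us_id INTEGER REFERENCES users(us_id),
                gr_id INTEGER REFERENCES "groups"(gr_id),
                PRIMARY KEY (us_id, gr_id)  -- one row per membership
            );
            INSERT INTO users  VALUES (1, 'George');
            INSERT INTO "groups" VALUES (1, 'admins'), (2, 'editors');
            INSERT INTO users_groups_rel VALUES (1, 1), (1, 2);
        """)

        # All groups a given user belongs to:
        query = """
            SELECT g.title
            FROM "groups" g
            JOIN users_groups_rel r ON r.gr_id = g.gr_id
            WHERE r.us_id = ?"""
        for (title,) in conn.execute(query, (1,)):
            print(title)  # admins, editors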

    Read the article

  • Extract rows for the first occurrence of a variable in a data frame

    - by user2614883
    I have a data frame with two variables, Date and Taxa and want to get the date for the first time each taxa occurs. There are 9 different dates and 40 different taxa in the data frame consisting of 172 rows, but my answer should only have 40 rows. Taxa is a factor and Date is a date. For example, my data frame (called 'species') is set up like this: Date Taxa 2013-07-12 A 2011-08-31 B 2012-09-06 C 2012-05-17 A 2013-07-12 C 2012-09-07 B and I would be looking for an answer like this: Date Taxa 2012-05-17 A 2011-08-31 B 2012-09-06 C I tried using: t.first <- species[unique(species$Taxa),] and it gave me the correct number of rows but there were Taxa repeated. If I just use unique(species$Taxa) it appears to give me the right answer, but then I don't know the date when it first occurred. Thanks for any help.
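    The question is about R, so the snippet below is only a language-neutral illustration (written in Python) of the logic being asked for: keep the earliest Date seen for each Taxa. In R itself, aggregating Date by Taxa with min, or ordering the data frame by Date and dropping duplicated Taxa values, expresses the same idea.

        # Illustration only (the original data frame is in R): earliest Date per Taxa.
        from datetime import date

        species = [
            (date(2013, 7, 12), "A"),
            (date(2011, 8, 31), "B"),
            (date(2012, 9, 6),  "C"),
            (date(2012, 5, 17), "A"),
            (date(2013, 7, 12), "C"),
            (date(2012, 9, 7),  "B"),
        ]

        first_seen = {}
        for d, taxon in species:
            if taxon not in first_seen or d < first_seen[taxon]:
                first_seen[taxon] = d

        for taxon in sorted(first_seen):
            print(first_seen[taxon], taxon)
        # 2012-05-17 A
        # 2011-08-31 B
        # 2012-09-06 C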

    Read the article

  • Goldtouch USB Keyboard reverses keystrokes in fast typing -- expected?

    - by Justin Grant
    I am running into an odd keyboard problem: some key combinations end up reversed (e.g. "pl" ends up being emitted as "lp") when I'm typing quickly. The problematic ones are the key combos I hit with two adjacent fingers on my right hand -- in other words, the combos I can hit the fastest. No idea how fast "fastest" is, but I'd guess a gap of around 50-150 ms between the keystrokes. I'm trying to track down whether this represents a failed keyboard, an inherent limitation of my Goldtouch USB keyboards, or a software problem on my Windows 7 Lenovo T500. I use a PS/2 version of the same Goldtouch keyboard at home with no problems. I've tried another USB keyboard with my laptop and can't repro the problem, and I've also used this keyboard on other laptops without a problem. According to this SU thread, USB keyboards have higher latency than PS/2 keyboards -- up to 30 ms. I find it hard to imagine that I can type key combos faster than 50 ms apart; probably more like 100-150. Has anyone encountered this problem with this or another keyboard? If so, how did you fix it? Any idea if there's a "keyboard log" or some way to diagnose the problem inside Windows?

    Read the article

  • curl FTPS with client certificate to a vsftpd

    - by weeheavy
    I'd like to authenticate FTP clients either via username+password or a client certificate. Only FTPS is allowed. User/password works, but while testing with curl (I don't have another option) and a client certificate, I need to pass a user. Isn't it technically possible to authenticate only by providing a certificate? vsftpd.conf passwd_chroot_enable=YES chroot_local_user=YES ssl_enable=YES rsa_cert_file=usrlocal/ssl/certs/vsftpd.pem force_local_data_ssl=YES force_local_logins_ssl=YES Tested with curl -v -k -E client-crt.pem --ftp-ssl-reqd ftp://server:21/testfile the output is: * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Request CERT (13): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS handshake, CERT verify (15): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using DES-CBC3-SHA * Server certificate: * SSL certificate verify result: self signed certificate (18), continuing anyway. > USER anonymous < 530 Anonymous sessions may not use encryption. * Access denied: 530 * Closing connection #0 * SSLv3, TLS alert, Client hello (1): curl: (67) Access denied: 530 This is theoretically ok, as i forbid anonymous access. If I specify a user with -u username:pass it works, but it would without a certificate too. The client certificate seems to be ok, it looks like this: client-crt.pem -----BEGIN RSA PRIVATE KEY----- content -----END RSA PRIVATE KEY----- -----BEGIN CERTIFICATE----- content -----END CERTIFICATE----- What am I missing? Thanks in advance. (The OS is Solaris 10 SPARC).

    Read the article

  • How can I trigger the creation of a new CLB file?

    - by Xperimental
    I'm currently having a problem with an application using COM running on Windows Vista. The application runs ok on one machine, but doesn't work on a similar configured machine. Both machines are virtual images originating from the same source image. While searching the registry for causes of this error, I came across the CLBVersion key in HKCR\CLSID which seems to have something to do with COM. The value of the key differs between the two machines (0x6 on the erroneous one, 0xc on the working one). Also there are files containing the same number in their filenames in the %SystemRoot\Registration directories of the machines. They are called R000000000006.clb and R00000000000c.clb respectively. I have already searched the windows event log for anything leading to the creation of those files (I have searched by the creation date of the files). Now a few questions regarding the registry keys and the files: Is it correct, that this is connected to COM? What is the function of the files? What causes the creation of a new "CLBVersion"? Is there a way for me to trigger the creation of a new CLB file? edit: I have now found out, that this has nothing to do with my application error. But I would still be interested in details about the registry key and the files. An installation of Visual Studio 2005 has brought the second machine to the same configuration (0xc in registry and file) as the other one.

    Read the article

  • Windows 7 Backup not backing up custom library?

    - by James McMahon
    I have created a custom Library under Windows 7 64bit professional to handle my source code. When I tried Windows Backup and Restore for the first time I get the following error Backup encountered a problem while backing up file C:\Windows\System32\config\systemprofile\Source. Error:(The system cannot find the file specified. (0x80070002)) I've found a thread on the error on the Microsoft answers site. But it appears to be 404 (there is a version in Google's Cache) and the thread starter never gets an answer to his issue that works. The official Microsoft answer on this is This problem is due to one or more profiles under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\WindowsNT\CurrentVersion\ProfileList with missing ProfileImagePath. To check whether you have missing profiles: Open regedit, navigate to the above registry key. (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList). Expand the list Click on each of the profiles listed. The first 3 profiles should have ProfileImagePath value of %SystemRoot%\System32\Config\SystemProfile, %SystemRoot%\ServiceProfiles\LocalService, and %SystemRoot%\ServiceProfiles\NetworkService respectively. Starting from the 4th profile, the ProfileImagePath should contain path to the user profiles on your machine, such as C:\users\Christine If one or more of the profile has no profile image, then you have missing profiles. To work around this, delete the profile in question (Caution: The registry contains critical settings that are necessary for your system to function properly. Take extra caution while making changes) First, export the ProfileList key for safekeeping. (Right click on the key, choose “Export”, and save it to the desktop.) Right click on the profile in question, choose delete. Try backup again. This does not work for me. Anyone have any idea what is going on here?

    Read the article

  • How do I renew an expired Ubuntu OpenLDAP SSL Certificate

    - by Doug Symes
    We went through the steps of revoking an SSL Certificate used by our OpenLDAP server and renewing it but we are unable to start slapd. Here are the commands we used: openssl verify hostname_domain_com_cert.pem We got back that the certificate was expired but "OK" We revoked the certificate we'd been using: openssl ca -revoke /etc/ssl/certs/hostname_domain_com_cert.pem Revoking worked fine. We created the new Cert Request by passing it the key file as input: openssl req -new -key hostname_domain_com_key.pem -out newreq.pem We generated a new certificate using the newly created request file "newreq.pem" openssl ca -policy policy_anything -out newcert.pem -infiles newreq.pem We looked at our cn=config.ldif file and found the locations for the key and cert and placed the newly dated certificate in the needed path. Still we are unable to start slapd with: service slapd start We get this message: Starting OpenLDAP: slapd - failed. The operation failed but no output was produced. For hints on what went wrong please refer to the system's logfiles (e.g. /var/log/syslog) or try running the daemon in Debug mode like via "slapd -d 16383" (warning: this will create copious output). Below, you can find the command line options used by this script to run slapd. Do not forget to specify those options if you want to look to debugging output: slapd -h 'ldap:/// ldapi:/// ldaps:///' -g openldap -u openldap -F /etc/ldap/slapd.d/ Here is what we found in /var/log/syslog Oct 23 20:18:25 ldap1 slapd[2710]: @(#) $OpenLDAP: slapd 2.4.21 (Dec 19 2011 15:40:04) $#012#011buildd@allspice:/build/buildd/openldap-2.4.21/debian/build/servers/slapd Oct 23 20:18:25 ldap1 slapd[2710]: main: TLS init def ctx failed: -1 Oct 23 20:18:25 ldap1 slapd[2710]: slapd stopped. Oct 23 20:18:25 ldap1 slapd[2710]: connections_destroy: nothing to destroy. We are not sure what else to try. Any ideas?

    Read the article

  • How to use cURL to FTPS upload to SecureTransport (hint: SITE AUTH and client certificates)

    - by Seamus Abshere
    I'm trying to connect to SecureTransport 4.5.1 via FTPS using curl compiled with gnutls. You need to use --ftp-alternative-to-user "SITE AUTH" per http://curl.haxx.se/mail/lib-2006-07/0068.html Do you see anything wrong with my client certificates? I try with # mycert.crt -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- # mykey.pem -----BEGIN RSA PRIVATE KEY----- ... -----END RSA PRIVATE KEY----- And it says "530 No client certificate presented": myuser@myserver ~ $ curl -v --ftp-ssl --cert mycert.crt --key mykey.pem --ftp-alternative-to-user "SITE AUTH" -T helloworld.txt ftp://ftp.example.com:9876/upload/ * About to connect() to ftp.example.com port 9876 (#0) * Trying 1.2.3.4... connected * Connected to ftp.example.com (1.2.3.4) port 9876 (#0) < 220 msn1 FTP server (SecureTransport 4.5.1) ready. > AUTH SSL < 334 SSLv23/TLSv1 * found 142 certificates in /etc/ssl/certs/ca-certificates.crt > USER anonymous < 331 Password required for anonymous. > PASS [email protected] < 530 Login incorrect. > SITE AUTH < 530 No client certificate presented. * Access denied: 530 * Closing connection #0 curl: (67) Access denied: 530 I also tried with a pk8 version... # openssl pkcs8 -in mykey.pem -topk8 -nocrypt > mykey.pk8 -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- ...but got exactly the same result. What's the trick to sending a client certificate to SecureTransport?

    Read the article

  • ssh_exchange_identification: Connection closed by remote host

    - by Charlie Epps
    First: $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys Connecting to SSH servers gives this message: $ ssh -vvv localhost OpenSSH_5.3p1, OpenSSL 0.9.8m 25 Feb 2010 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to localhost [127.0.0.1] port 22. debug1: Connection established. debug1: identity file /home/charlie/.ssh/identity type -1 debug1: identity file /home/charlie/.ssh/id_rsa type -1 debug3: Not a RSA1 key file /home/charlie/.ssh/id_dsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/charlie/.ssh/id_dsa type 2 ssh_exchange_identification: Connection closed by remote host My /etc/hosts.allow is as following: sshd: ALLOW /etc/hosts.deny is as following: ALL: ALL: DENY I have changed my /etc/ssh/sshd_conf as following: ListenAddress 0.0.0.0 Protocol 2 # HostKey for protocol version 1 #HostKey /etc/ssh/ssh_host_key # HostKeys for protocol version 2 HostKey /etc/ssh/ssh_host_rsa_key HostKey /etc/ssh/ssh_host_dsa_key RSAAuthentication yes PubkeyAuthentication yes #AuthorizedKeysFile .ssh/authorized_keys PasswordAuthentication no

    Read the article

  • Is there a tool that can test what SSL/TLS cipher suites a particular website offers?

    - by Jeremy Powell
    Is there a tool that can test what SSL/TLS cipher suites a particular website offers? I've tried openssl, but if you examine the output: $ echo -n | openssl s_client -connect www.google.com:443 CONNECTED(00000003) depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA verify error:num=20:unable to get local issuer certificate verify return:0 --- Certificate chain 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com i:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA 1 s:/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA i:/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority --- Server certificate -----BEGIN CERTIFICATE----- MIIDITCCAoqgAwIBAgIQL9+89q6RUm0PmqPfQDQ+mjANBgkqhkiG9w0BAQUFADBM MQswCQYDVQQGEwJaQTElMCMGA1UEChMcVGhhd3RlIENvbnN1bHRpbmcgKFB0eSkg THRkLjEWMBQGA1UEAxMNVGhhd3RlIFNHQyBDQTAeFw0wOTEyMTgwMDAwMDBaFw0x MTEyMTgyMzU5NTlaMGgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlh MRYwFAYDVQQHFA1Nb3VudGFpbiBWaWV3MRMwEQYDVQQKFApHb29nbGUgSW5jMRcw FQYDVQQDFA53d3cuZ29vZ2xlLmNvbTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkC gYEA6PmGD5D6htffvXImttdEAoN4c9kCKO+IRTn7EOh8rqk41XXGOOsKFQebg+jN gtXj9xVoRaELGYW84u+E593y17iYwqG7tcFR39SDAqc9BkJb4SLD3muFXxzW2k6L 05vuuWciKh0R73mkszeK9P4Y/bz5RiNQl/Os/CRGK1w7t0UCAwEAAaOB5zCB5DAM BgNVHRMBAf8EAjAAMDYGA1UdHwQvMC0wK6ApoCeGJWh0dHA6Ly9jcmwudGhhd3Rl LmNvbS9UaGF3dGVTR0NDQS5jcmwwKAYDVR0lBCEwHwYIKwYBBQUHAwEGCCsGAQUF BwMCBglghkgBhvhCBAEwcgYIKwYBBQUHAQEEZjBkMCIGCCsGAQUFBzABhhZodHRw Oi8vb2NzcC50aGF3dGUuY29tMD4GCCsGAQUFBzAChjJodHRwOi8vd3d3LnRoYXd0 ZS5jb20vcmVwb3NpdG9yeS9UaGF3dGVfU0dDX0NBLmNydDANBgkqhkiG9w0BAQUF AAOBgQCfQ89bxFApsb/isJr/aiEdLRLDLE5a+RLizrmCUi3nHX4adpaQedEkUjh5 u2ONgJd8IyAPkU0Wueru9G2Jysa9zCRo1kNbzipYvzwY4OA8Ys+WAi0oR1A04Se6 z5nRUP8pJcA2NhUzUnC+MY+f6H/nEQyNv4SgQhqAibAxWEEHXw== -----END CERTIFICATE----- subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com issuer=/C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA --- No client certificate CA names sent --- SSL handshake has read 1777 bytes and written 316 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 748E2B5FEFF9EA065DA2F04A06FBF456502F3E64DF1B4FF054F54817C473270C Session-ID-ctx: Master-Key: C4284AE7D76421F782A822B3780FA9677A726A25E1258160CA30D346D65C5F4049DA3D10A41F3FA4816DD9606197FAE5 Key-Arg : None Start Time: 1266259321 Timeout : 300 (sec) Verify return code: 20 (unable to get local issuer certificate) --- it just shows that the cipher suite is something with AES256-SHA. I know I could grep through the hex dump of the conversation, but I was hoping for something a little more elegant. I would prefer Linux tools, but Windows (or other) would be fine. This question is motivated by the security testing I do for PCI and general penetration testing. Update: GregS points out below that the SSL server picks from the cipher suites of the client. So it seems I would need to test all cipher suites one at a time. I think I can hack something together, but is there a tool that does particularly this?
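    Dedicated scanners exist for this (sslscan, and nmap's ssl-enum-ciphers script, for example), but the brute-force idea the poster describes is straightforward to sketch: offer the server exactly one cipher at a time and see whether the handshake succeeds. Below is a rough Python version, capped at TLS 1.2 because set_ciphers() does not govern TLS 1.3 suites, and limited to whatever ciphers the local OpenSSL build enables by default; the host and port are just examples.

        # Rough sketch: test one cipher suite at a time against the server.
        import socket
        import ssl

        def server_accepts(host, port, cipher_name):
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE               # only the handshake matters here
            ctx.maximum_version = ssl.TLSVersion.TLSv1_2  # set_ciphers() does not cover TLS 1.3
            try:
                ctx.set_ciphers(cipher_name)              # offer exactly one cipher
            except ssl.SSLError:
                return False                              # cipher unknown to the local OpenSSL
            try:
                with socket.create_connection((host, port), timeout=5) as sock:
                    with ctx.wrap_socket(sock, server_hostname=host):
                        return True                       # server accepted this cipher
            except (ssl.SSLError, OSError):
                return False

        host, port = "www.google.com", 443
        candidates = [c["name"] for c in ssl.create_default_context().get_ciphers()]
        for name in candidates:
            if server_accepts(host, port, name):
                print("accepted:", name)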

    Read the article

  • How to use most of memory available on MySQL

    - by Zilvinas
    I've got a MySQL server which has both InnoDB and MyISAM tables. InnoDB tablespace is quite small under 4 GB. MyISAM is big ~250 GB in total of which 50 GB is for indexes. Our server has 32 GB of RAM but it usually uses only ~8GB. Our key_buffer_size is only 2GB. But our key cache hit ratio is ~95%. I find it hard to believe.. Here's our key statistics: | Key_blocks_not_flushed | 1868 | | Key_blocks_unused | 109806 | | Key_blocks_used | 1714736 | | Key_read_requests | 19224818713 | | Key_reads | 60742294 | | Key_write_requests | 1607946768 | | Key_writes | 64788819 | key_cache_block_size is default at 1024. We have 52 GB's of index data and 2GB key cache is enough to get a 95% hit ratio. Is that possible? On the other side data set is 200GB and since MyISAM uses OS (Centos) caching I would expect it to use a lot more memory to cache accessed myisam data. But at this stage I see that key_buffer is completely used, our buffer pool size for innodb is 4gb and is also completely used that adds up to 6GB. Which means data is cached using just 1 GB? My question is how could I check where all the free memory could be used? How could I check if MyISAM hits OS cache for data reads instead of disk?
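    A 95%+ hit ratio with a 2 GB key_buffer is plausible given those counters: the MyISAM key-cache hit ratio is commonly computed as 1 - Key_reads / Key_read_requests, and plugging in the numbers quoted above gives roughly 99.7%. A quick check:

        # Key-cache hit ratio from the SHOW STATUS counters quoted above.
        key_read_requests = 19_224_818_713
        key_reads = 60_742_294

        hit_ratio = 1 - key_reads / key_read_requests
        print(f"{hit_ratio:.4%}")  # ~99.68%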

    Read the article

  • Bind9 zone files

    - by user42780
    Well, for the better part of the last two hours I've tried to figure out what is actually wrong, but I can't seem to find anything obvious. What I'm trying to do is set up my DNS for, say (as an example), domain.com. This should include two NS records, namely ns1.domain.com and ns2.domain.com. Along with that there should be a mail record, as well as a CNAME record for www. I've been through roughly 20 how-tos in the last two hours and rewrote everything from scratch four times, and I still can't seem to find what's wrong. My only suspicions are two things: the error I get from the bind9 daemon when I stop the service, and the named.conf file. The error I get from the bind9 daemon when stopping the service is: * Stopping domain name service... bind9 rndc: connection to remote host closed This may indicate that * the remote server is using an older version of the command protocol, * this host is not authorized to connect, * the clocks are not synchronized, or * the key is invalid. I honestly don't know what this means, apart from the key defined in /etc/bind/rndc.key not being in the named.conf file (yes, I did try to add it, to no avail). Here are all the zone files and configuration files: http://208.77.101.5/bind9/ If anyone could help, it would be greatly appreciated.

    Read the article

  • RSA keys - virtual hosts

    - by Bosworth99
    Pardon my noobness, but I just got started with VPS (Linux) hosting; setting up passwordless SSH for multiple users has proved to be kind of a pain. Currently I'm the single user of this Ubuntu 10.04 LTS VPS (linode.com). I was able to establish a single RSA key under my home/user/.ssh/authorized_keys location. Fine. PuTTY works as expected, and FileZilla (SFTP) links up as required. I've been working on a single site that this user owns, and that's not been a problem. Now I want to set up some other sites, and I've chosen Webmin with the Virtualmin plugin to make this work. I made another user (or, rather, Virtualmin did), but I've been unable to get FileZilla to link up to this new user. Could anyone with experience here explain what the setup is supposed to look like? I.e., can I use a single RSA key pair for all accounts (if, for example, I give ownership of files to the original user)? Or is it standard practice to create a separate key pair for each user and establish a separate PuTTY/FileZilla login for each? I've spent enough time dinking around with this to be frustrated. A "Server rejected the provided key" error sucks after the fifth hour. I'm about to set up an FTP server and call it a day. Any thoughts would be most welcome.

    Read the article

  • Need help tuning Mysql and linux server

    - by Newtonx
    We have multi-user application (like MailChimp,Constant Contact) . Each of our customers has it's own contact's list (from 5 to 100.000 contacts). Everything is stored in one BIG database (currently 25G). Since we released our product we have the following data history. 5 years of data history : - users/customers (200+) - contacts (40 million records) - campaigns - campaign_deliveries (73.843.764 records) - campaign_queue ( 8 millions currently ) As we get more users and table records increase our system/web app is getting slower and slower . Some queries takes too long to execute . SCHEMA Table contacts --------------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------------+------------------+------+-----+---------+----------------+ | contact_id | int(10) unsigned | NO | PRI | NULL | auto_increment | | client_id | int(10) unsigned | YES | | NULL | | | name | varchar(60) | YES | | NULL | | | mail | varchar(60) | YES | MUL | NULL | | | verified | int(1) | YES | | 0 | | | owner | int(10) unsigned | NO | MUL | 0 | | | date_created | date | YES | MUL | NULL | | | geolocation | varchar(100) | YES | | NULL | | | ip | varchar(20) | YES | MUL | NULL | | +---------------------+------------------+------+-----+---------+----------------+ Table campaign_deliveries +---------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+------------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | newsletter_id | int(10) unsigned | NO | MUL | 0 | | | contact_id | int(10) unsigned | NO | MUL | 0 | | | sent_date | date | YES | MUL | NULL | | | sent_time | time | YES | MUL | NULL | | | smtp_server | varchar(20) | YES | | NULL | | | owner | int(5) | YES | MUL | NULL | | | ip | varchar(20) | YES | MUL | NULL | | +---------------+------------------+------+-----+---------+----------------+ Table campaign_queue +---------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+------------------+------+-----+---------+----------------+ | queue_id | int(10) unsigned | NO | PRI | NULL | auto_increment | | newsletter_id | int(10) unsigned | NO | MUL | 0 | | | owner | int(10) unsigned | NO | MUL | 0 | | | date_to_send | date | YES | | NULL | | | contact_id | int(11) | NO | MUL | NULL | | | date_created | date | YES | | NULL | | +---------------+------------------+------+-----+---------+----------------+ Slow queries LOG -------------------------------------------- Query_time: 350 Lock_time: 1 Rows_sent: 1 Rows_examined: 971004 SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 70 AND contacts.verified = 1); Query_time: 235 Lock_time: 1 Rows_sent: 1 Rows_examined: 4455209 SELECT COUNT(*) as total FROM contacts WHERE (contacts.owner = 2); How can we optimize it ? Queries should take no more than 30 secs to execute? Can we optimize it and keep all data in one BIG database or should we change app's structure and set one single database to each user ? Thanks

    Read the article

  • Reinserted a RAID disk. Defined as foreign. Is import or clear the correct choice?

    - by Petrus
    I have re-inserted a RAID disk, on a DELL server with Windows Server 2008. The drive-status indicator was changing between a green and amber light, and the monitor gave the following message: There are offline or missing virtual drives with preserved cache. Please check the cables and ensure that all drives are present. Press any key to enter the configuration utility. I pressed a key and the PERC 6/I Integrated BIOS Configuration Utility showed that the RAID Status for that disk was Offline. After reinsertion of the disk the monitor is giving the following message: Foreign configuration(s) found on adapter. Press any key to continue or ‘C’ load the configuration utility, or press ‘F’ to import foreign configuration(s) and continue. After checking around on the net I am uncertain if I should choose import or clear. I cannot find out if an import means importing information from the array/system to the now foreign disk or the other way, i.e. importing information from the foreign disk to the array/system that was actually working fine. Also; if clear is a necessary thing to do ahead of a rebuild of that disk, or if clear means to clear the system to somehow make it ready to import the information from the foreign disk to the array/system, which is not what I want. I imagine that making the wrong choice here might be fatal. Please help clearing this out by telling what to choose and why.

    Read the article

  • GNOME 2 + Compiz equivalent?

    - by virtualeyes
    Running Fedora 14 and realize I need to either change distros or find an alternative to GNOME 3 in Fedora 17. Based on what I have read to-date, XFCE and KDE are the go-to WMs if I want to avoid GNOME 3. I tried KDE 4 and I wasn't impressed; I like the simplicity of GNOME 2 with Compiz and Emerald. Can't stay on Fedora 14 forever, however, so...where to turn? Basically looking for these features in my desktop environment: GNOME Do or equivalent Snap to grid/Window tiling A must-have, the ability to hot key focused window to a monitor grid region is a huge productivity win. Zoom window to cursor In a multi-monitor setup sometimes it's nice to, say, GNOME Do terminal in one monitor and then hot key the opened window to the other monitor just by zipping the mouse cursor anywhere on target monitor (followed by, of course, snap-to-grid hotkey, all without a single mouse click) Polarization At night white background hurts the eyes, so I prefer to hot key polarize to black. Multi-monitor support I'm partial to Fedora given that I've worked with CentOS for years and have little experience with any other Linux distro; however, if the difference between Fedora and Arch, Mint, etc. is fairly subtle, I'll make the leap, just need a distro & desktop environment that allows me to be productive with keyboard hot keys and provides the above basic features. Any suggestions?

    Read the article

  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate a user/pass authentication method. Even though I haven't added auth-nocache to the server config, whenever I try to connect it returns the following message on the client side: ERROR: could not read Auth username from stdin My server.conf file contains basic stuff; everything works up until I try to implement this form of authentication. mode server dev tun proto tcp port 1194 keepalive 10 120 plugin /usr/lib/openvpn/openvpn-auth-pam.so login client-cert-not-required username-as-common-name auth-user-pass-verify /etc/openvpn/auth.pl via-env ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt cert /etc/openvpn/easy-rsa/2.0/keys/server.crt key /etc/openvpn/easy-rsa/2.0/keys/server.key dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem user nobody group nogroup server 10.8.0.0 255.255.255.0 persist-key persist-tun #persist-local-ip status openvpn-status.log verb 3 client-to-client push "redirect-gateway def1" push "dhcp-option DNS 10.8.0.1" log-append /var/log/openvpn comp-lzo I searched all over the net for a solution and all the answers seem to be related to the auth-nocache param, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication. A failed authentication should result in exit 1, while a successful one should result in exit 0. For testing, that script auth.pl returns exit 0 no matter what the input is, but it seems that the file is not executed before the error is raised. auth.pl file contents: #!/usr/bin/perl my $user = $ENV{username}; my $passwd = $ENV{password}; printf("$user : $passwd\n"); exit 0; Any ideas?

    Read the article

  • Configuring WPA WiFi in Ubuntu 10.10

    - by sma
    I am trying to configure my wireless network on my laptop running Ubuntu 10.10 and am having a bit of difficulty. I am a complete Linux newb, but want to learn it, hence the reason I'm trying to set this up. Here's the vitals: It is a Gateway 600 YG2 laptop. It was previously running Windows XP, but I installed Ubuntu 10.10 in place of it (not a dual boot, I removed XP altogether). I have an old wireless card that I'm trying to resurrect. I haven't really used the card in a couple years, but it seems to still work, I just can't connect to my home's wireless network. The card is a Linksys WPC11 v2.5. When I plug it in, Ubuntu recognizes the network, but won't connect to it. My home network uses WPA encryption and the only connection type that Ubuntu's network manager is giving me is WEP and then it asks for a key -- I have no idea what that key should be. So, basically, I'm asking, is there a way I can instead connect through WPA? I've tried creating a new connection in network manager, but that won't work, it keeps falling back to the WEP connection and asking me for a key. I have tried to install the XP driver using ndiswrapper but I don't know if that's working or not. Is there a way to tell if: A) the card is working as it should B) the correct drivers are installed (again, I installed the XP one using ndiswrapper NET8180.INF, but I'm not sure what to do next) Any help would be appreciated. Thank you.

    Read the article
