Search Results

Search found 33242 results on 1330 pages for 'database optimization'.

  • Heimdal Kerberos in OpenLDAP issue

    - by Brian
    I think I posted this on the wrong 'sister site', so here it is. I'm having a bit of trouble getting Kerberos (Heimdal version) to work nicely with OpenLDAP. The kerberos database is being stored in LDAP itself. The KDC uses SASL EXTERNAL authentication as root to access the container ou. I created the database in LDAP fine using kadmin -l, but it won't let me use kadmin without the -l flag: root@rds0:~# kadmin -l kadmin> list * krbtgt/REALM kadmin/changepw kadmin/admin changepw/kerberos kadmin/hprop WELLKNOWN/ANONYMOUS WELLKNOWN/org.h5l.fast-cookie@WELLKNOWN:ORG.H5L default brian.empson brian.empson/admin host/rds0.example.net ldap/rds0.example.net host/localhost kadmin> exit root@rds0:~# kadmin kadmin> list * brian.empson/admin@REALM's Password: <----- With right password kadmin: kadm5_get_principals: Key table entry not found kadmin> list * brian.empson/admin@REALM's Password: <------ With wrong password kadmin: kadm5_get_principals: Already tried ENC-TS-info, looping kadmin> I can get tickets without a problem: root@rds0:~# klist Credentials cache: FILE:/tmp/krb5cc_0 Principal: brian.empson@REALM Issued Expires Principal Nov 11 14:14:40 2012 Nov 12 00:14:37 2012 krbtgt/REALM@REALM Nov 11 14:40:35 2012 Nov 12 00:14:37 2012 ldap/rds0.example.net@REALM But I can't seem to change my own password without kadmin -l: root@rds0:~# kpasswd brian.empson@REALM's Password: <---- Right password New password: Verify password - New password: Auth error : Authentication failed root@rds0:~# kpasswd brian.empson@REALM's Password: <---- Wrong password kpasswd: krb5_get_init_creds: Already tried ENC-TS-info, looping kadmin's logs are not helpful at all: 2012-11-11T13:48:33 krb5_recvauth: Key table entry not found 2012-11-11T13:51:18 krb5_recvauth: Key table entry not found 2012-11-11T13:53:02 krb5_recvauth: Key table entry not found 2012-11-11T14:16:34 krb5_recvauth: Key table entry not found 2012-11-11T14:20:24 krb5_recvauth: Key table entry not found 2012-11-11T14:20:44 krb5_recvauth: Key table entry not found 2012-11-11T14:21:29 krb5_recvauth: Key table entry not found 2012-11-11T14:21:46 krb5_recvauth: Key table entry not found 2012-11-11T14:23:09 krb5_recvauth: Key table entry not found 2012-11-11T14:45:39 krb5_recvauth: Key table entry not found The KDC reports that both accounts succeed in authenticating: 2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM 2012-11-11T14:48:03 Client sent patypes: REQ-ENC-PA-REP 2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ 2012-11-11T14:48:03 sending 294 bytes to IPv4:192.168.72.10 2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM 2012-11-11T14:48:03 Client sent patypes: ENC-TS, REQ-ENC-PA-REP 2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM 2012-11-11T14:48:03 ENC-TS Pre-authentication succeeded -- brian.empson@REALM using aes256-cts-hmac-sha1-96 2012-11-11T14:48:03 ENC-TS pre-authentication succeeded -- brian.empson@REALM 2012-11-11T14:48:03 AS-REQ authtime: 2012-11-11T14:48:03 starttime: unset endtime: 2012-11-11T14:53:00 renew till: unset 2012-11-11T14:48:03 Client 
supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96 2012-11-11T14:48:03 sending 704 bytes to IPv4:192.168.72.10 2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM 2012-11-11T14:45:39 Client sent patypes: REQ-ENC-PA-REP 2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ 2012-11-11T14:45:39 sending 303 bytes to IPv4:192.168.72.10 2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM 2012-11-11T14:45:39 Client sent patypes: ENC-TS, REQ-ENC-PA-REP 2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 ENC-TS Pre-authentication succeeded -- brian.empson/admin@REALM using aes256-cts-hmac-sha1-96 2012-11-11T14:45:39 ENC-TS pre-authentication succeeded -- brian.empson/admin@REALM 2012-11-11T14:45:39 AS-REQ authtime: 2012-11-11T14:45:39 starttime: unset endtime: 2012-11-11T15:45:39 renew till: unset 2012-11-11T14:45:39 Client supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96 2012-11-11T14:45:39 sending 717 bytes to IPv4:192.168.72.10 I wish I had more detailed logging messages, running kadmind in debug mode seems to almost work but it just kicks me back to the shell when I type in the correct password. GSSAPI via LDAP doesn't work either, but I suspect it's because some parts of kerberos aren't working either: root@rds0:~# ldapsearch -Y GSSAPI -H ldaps:/// -b "o=mybase" o=mybase SASL/GSSAPI authentication started ldap_sasl_interactive_bind_s: Other (e.g., implementation specific) error (80) additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information () root@rds0:~# ldapsearch -Y EXTERNAL -H ldapi:/// -b "o=mybase" o=mybase SASL/EXTERNAL authentication started SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth SASL SSF: 0 # extended LDIF <snip> Would anyone be able to point me in the right direction?
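
    One direction worth exploring, sketched below with heavy caveats: krb5_recvauth's "Key table entry not found" generally means kadmind cannot find the service keys (kadmin/admin, kadmin/changepw) in whatever keytab it has been told to use. With Heimdal the default is the HDB itself, but if kadmind was started with an explicit --keytab, the keys may need to be exported into it. The paths and the keytab choice below are assumptions, not a description of this setup:

      # Check how kadmind is started (is there a --keytab=... argument?)
      ps axww | grep kadmind

      # If it reads a file keytab, export the admin-service keys into it and
      # confirm they are present (Heimdal tools; keytab path is a guess):
      kadmin -l ext_keytab --keytab=/etc/heimdal-kdc/kadmind.keytab \
          kadmin/admin kadmin/changepw changepw/kerberos
      ktutil --keytab=/etc/heimdal-kdc/kadmind.keytab list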

    Read the article

  • MySQL equivalent to .pgpass, or automatic authentication in a cron job for mySQL

    - by Ibrahim
    I'm writing a bash script to back up my databases. Most are postgresql, and in postgres there's a way to avoid having to authenticate by creating a ~/.pgpass file which contains the postgres password. I put this in root's home directory and made it chmod 0600, so that root could dump the postgres databases without having to authenticate. Now I want to do something similar for mysql, although I only have one mysql database. How can I do this? I don't want to specify the password on the command line for mysqldump because this is part of a script that might be somewhat visible to other users. Is there a better way (i.e. built in to mysql) to do this than make a file that only root can read and then read that to get the mysql password, and then use that in the bash script as a variable?
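
    The usual MySQL counterpart to ~/.pgpass is a ~/.my.cnf in root's home directory, restricted to mode 0600; mysql and mysqldump read it automatically. A minimal sketch (the password value is obviously a placeholder):

      # /root/.my.cnf  -- chown root:root, chmod 0600
      [client]
      user=root
      password=YourSecretPasswordHere

      [mysqldump]
      user=root
      password=YourSecretPasswordHere

    With that in place the cron job needs no credentials on the command line, e.g. mysqldump --all-databases > /var/backups/mysql/all.sql, and the password never appears in the process list.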

    Read the article

  • Update RDS db via mysqlbinlog: "you need (at least one of) the SUPER privilege(s)"

    - by timoxley
    We are moving a production site to EC2/RDS. Followed these instructions: http://geehwan.posterous.com/moving-a-production-mysql-database-to-amazon I have set up row-based binary logging on the production server and did a: mysqldump --single-transaction --master-data=2 -C -q -u root -p backup.sql then imported to the RDS instance. No dramas. Due to the size of the db, and minimal downtime requirements, I've got to update the EC2 db to the latest data via the binlogs, and it won't let me. mysqlbinlog mysql-bin.000004 --start-position=360812488 | mysql -uroot -p -h and it says: ERROR 1227 (42000) at line 6: Access denied; you need (at least one of) the SUPER privilege(s) for this operation My guess, based on what is on line 6 of the binlog, is that it's the 'write to the BINLOG' statements in the SQL backup, and because RDS doesn't support this, it can't run these statements, or something, I don't really know. Please help.
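
    A sketch of one possible workaround, assuming the blocker really is the BINLOG '...' (row-event) statements, which need SUPER to apply and which RDS does not grant. The RDS hostname below is a placeholder:

      # 1. Confirm what is actually on the failing line:
      mysqlbinlog mysql-bin.000004 --start-position=360812488 | sed -n '1,10p'

      # 2. If it is a BINLOG statement, one option is to switch the source to
      #    statement-based logging for the catch-up window, so mysqlbinlog emits
      #    plain SQL that a non-SUPER RDS user can replay:
      mysql -uroot -p -e "SET GLOBAL binlog_format = 'STATEMENT';"   # on the production source
      mysqlbinlog mysql-bin.000005 | mysql -u masteruser -p -h myinstance.rds.amazonaws.com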

    Read the article

  • Unable to set nginx to serve my staging website

    - by user100778
    I'm having some troubles setting up nginx to serve my staging website. What I did is change the server_name but for some reasons it just doesn't work. The url scheme is "domain.foo" is production, "staging.domain.foo" is staging, "foobar.domain.foo" is a web service, "foobar.staging.domain.foo" is the staging version of the same webserver, ".domain.foo" is routed to serve some s3 static HTML, ".staging.domain.foo" is routed to serve some s3 static HTML in another bucket. All production urls work and are correctly configured, all staging urls doesn't work. Here is my conf file. You will see some duplication, I will gladly accept any correction/optimization, I'm a coder and configuring servers is definitely not my thing (but I'm eager to learn and improve...). server { listen 80; ## listen for ipv4 server_name "domain.foo" "www.domain.foo" default_server; access_log /var/log/nginx/access.log; client_max_body_size 5M; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; location ~* \.(jpg|jpeg|gif|png|ico|css|bmp|js|html)$ { access_log off; expires max; root /home/foo/Foo/current/public; break; } if ($host ~ 'www.domain.foo') { rewrite ^/(.*)$ http://domain/foo/$1 permanent; } proxy_pass http://production; break; } } server { listen 80; server_name "staging.domain.foo"; access_log /var/log/nginx/access.staging.log; error_log /var/log/nginx/error.staging.log; client_max_body_size 5M; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://staging; break; } } server { listen 80; ## listen for ipv4 server_name "foobar.domain.foo"; access_log /var/log/nginx/access.log; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; if ($host = 'foobar.domain.foo') { proxy_pass http://foobar; break; } } } server { listen 80; ## listen for ipv4 server_name foobar.staging.domain.foo; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_pass http://foobar_staging; break; } } server { listen 80; server_name "~^(.+)\.domain\.foo$"; location / { proxy_intercept_errors on; error_page 404 = http://domain.foo/404; set $subdomain $1; rewrite /$ "/$subdomain/index.html" break; rewrite ^ /$subdomain$request_uri? break; proxy_pass http://bucket.domain.foo.s3.amazonaws.com; } } server { listen 80; server_name "~^(.+)\.staging\.domain\.foo$"; location / { proxy_intercept_errors on; set $subdomain $1; rewrite /$ "/$subdomain/index.html" break; rewrite ^ /$subdomain$request_uri? break; proxy_pass http://bucket.staging.domain.foo.s3.amazonaws.com; } } upstream production { server 111.255.111.110:8000; server 111.255.111.110:8001; server 111.255.111.110:8002; server 111.255.111.110:8003; } upstream staging { server 222.255.222.222:8000; server 222.255.222.222:8001; } upstream foobar { server 111.255.222.165:9000; server 111.255.222.165:9001; server 111.255.222.165:9002; } upstream foobar_staging { server 222.255.222.222:9000; } What happens now when I point my browser to staging.domain.foo is that it hangs. Can't find anything in the logs, but for example the access.staging.log and errors.staging.log are created. Anybody has an idea? :)
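
    Since staging.domain.foo is an exact server_name match (exact names take precedence over the catch-all regex blocks), a hang usually points past nginx to the staging upstream. A quick isolation sketch, assuming the addresses in the config above:

      nginx -t                                                  # confirm the file parses and is the one loaded
      curl -v -H "Host: staging.domain.foo" http://127.0.0.1/   # bypass DNS, hit nginx directly
      curl -v http://222.255.222.222:8000/                      # talk to the staging upstream itself
      tail -f /var/log/nginx/error.staging.log

    If the direct request to the upstream also hangs, the problem is the staging application (or a firewall between the proxy and 222.255.222.222) rather than the nginx vhost.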

    Read the article

  • Scalability: when to use a CDN?

    - by ajsie
    I've read about CDNs but don't know exactly what they are for. Let's say I've got an international social network (text and image content) and it's growing in traffic from different countries; would I have a use for a CDN? The picture I got from the sources I've read is that it copies your content and puts it on many servers spread out over the world, so that users fetch it from the nearest point. Does this mean that every server has a copy of my MySQL database and the image files? Is this the proper way to make your web service available to the world? Because how else could you set up servers throughout the world, contacting hosting companies for each country?
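
    A CDN normally mirrors only static assets (images, CSS, JS), not the MySQL database; the application and database stay on the origin servers, and pages simply reference assets through a CDN hostname. A hedged sketch of the pull-CDN pattern (all names are illustrative):

      ; DNS: point an asset hostname at the CDN, which fetches from your origin on demand
      cdn.example.com.   IN  CNAME  d1234abcd.some-cdn-provider.net.

      <!-- in the pages, reference static files through that hostname -->
      <img src="http://cdn.example.com/images/avatar/1234.jpg">

    Dynamic pages and database writes still go to the origin; the CDN only spares you from shipping the same static bytes across the world on every request.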

    Read the article

  • SQL Server 2008 to Sybase Linked Server (x64) -- Provider and permissions issues

    - by Cory Larson
    Good morning, We're testing a new SQL Server 2008 setup (64-bit) and one of our requirements was to get a linked server up and talking to a Sybase database. We've successfully done so using Sybase's 64-bit 15.5 drivers, however I can't expand the catalog list from a remote machine (connecting to the '08 box with SSMS) without having my network account being added as an Administrator on the actual box and then using Windows Authentication to connect to the server instance. This is going to be problematic when we go live. Has anybody experienced this, or have any input on the permissions in SQL Server 2008 with regards to linked servers? If I remove my network account from the Administrators group, the big error I'm getting is a 'Msg 7302, Level 16, State 1, Line 41' with a description something like "Cannot create an instance of OLE DB provider "ASEOLEDB" for linked server "", and all research points to permissions issues. Thoughts? This document talks about DCOM configuration and permissions, but we've tried all of it with no luck. Thanks
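
    One commonly suggested change for error 7302 with third-party providers, sketched below as a possibility rather than a confirmed fix: let SQL Server load the provider in-process so it does not have to launch it via DCOM under the calling login's identity, and map non-admin logins to a fixed Sybase account. The linked server and Sybase credentials are placeholders; test on a non-production instance first.

      -- run the ASE OLE DB provider in-process
      EXEC master.dbo.sp_MSset_oledb_prop N'ASEOLEDB', N'AllowInProcess', 1;

      -- map all local logins to a specific Sybase account on the linked server
      EXEC master.dbo.sp_addlinkedsrvlogin
           @rmtsrvname  = N'SYBASE_LINK',      -- your linked server name
           @useself     = N'False',
           @locallogin  = NULL,                -- applies to all local logins
           @rmtuser     = N'sybase_user',
           @rmtpassword = N'sybase_password';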

    Read the article

  • Install "Massive Coupon"

    - by ffffff
    I want to install "Massive Coupon" http://github.com/robstyles/Massive-Coupon---Open-source-groupon-clone I've set up apache2 + mod_wsgi + mysql on Ubuntu 9 and written the following settings.py: # Django settings for massivecoupon project. import socket, os . . DATABASE_ENGINE = 'mysql' # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'. DATABASE_NAME = 'grouponpy' # Or path to database file if using sqlite3. DATABASE_USER = 'grouponpy' # Not used with sqlite3. DATABASE_PASSWORD = 'password' # Not used with sqlite3. DATABASE_HOST = 'localhost' # Set to empty string for localhost. Not used with sqlite3. DATABASE_PORT = '' # Set to empty string for default. Not used with sqlite3. What additional steps do I have to take?
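
    Assuming the project targets the Django 1.1-era settings shown above, the usual remaining steps are: create the MySQL database and user those settings point at, install the Python MySQL bindings, build the schema, and sanity-check before fronting it with mod_wsgi. A sketch (names follow the settings above):

      mysql -u root -p -e "CREATE DATABASE grouponpy CHARACTER SET utf8;
        GRANT ALL ON grouponpy.* TO 'grouponpy'@'localhost' IDENTIFIED BY 'password';"
      apt-get install python-mysqldb              # MySQLdb driver used by Django's mysql backend
      python manage.py syncdb                     # run from the project directory; creates the tables
      python manage.py runserver 0.0.0.0:8000     # quick check before switching to apache2/mod_wsgi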

    Read the article

  • Automating SSRS 2008 R2 report snapshots while still serving reports with the most recent data

    - by Mr Shoubs
    I would like to automate a report snapshot, but there is only an option to take a snapshot in the Report History tab. All the resources I've found suggest I need to go to processing options and select "Render this report from a snapshot". But I don't want to do that - when I go to a report, I want to get the most recent data. However, daily at midnight I'd like to take a snapshot and store it in the history, in case I want to compare the reports as of midnight for the last few weeks. Or am I doing this wrong and have to create a subscription instead? Note: this is for an auditing database and has way too much data to query a range with more than 1 day in it - reports are restricted as such. (1 day has over 1 million rows on its own.)

    Read the article

  • Resolving “ssl handshake failure” error in PostgreSQL

    - by Mitch
    I would like to connect to my Postgres 8.3 database using SSL from my XP client using OpenSSL. This works fine without SSL. When I try it with SSL (no client certificate), I get the error: error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure I have followed the instructions in the Postgres manual for SSL including creating a self-signed certificate. In my pg_hba.conf there is a line: host dbname loginname 123.45.67.89/32 md5 The version of OpenSSL on the server is 0.9.8g and on the client is 0.9.8j. I'd appreciate any suggestions for tracking down the problem. Edit: The uncommented lines from postgresql.conf are: data_directory = '/var/ebs0/postgres/main' hba_file = '/etc/postgresql/8.3/main/pg_hba.conf' ident_file = '/etc/postgresql/8.3/main/pg_ident.conf' external_pid_file = '/var/run/postgresql/8.3-main.pid' listen_addresses = '*' port = 5432 max_connections = 100 unix_socket_directory = '/var/run/postgresql' ssl = true shared_buffers = 24MB
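
    A few server-side checks worth making, sketched under the assumption of a Debian/Ubuntu-style layout (the data directory comes from the postgresql.conf above; the log path is a guess): 8.3 looks for server.crt and server.key in the data directory and refuses SSL if the key is group- or world-readable, and the server log usually spells out its side of a failed handshake.

      ls -l /var/ebs0/postgres/main/server.crt /var/ebs0/postgres/main/server.key
      chmod 600 /var/ebs0/postgres/main/server.key     # must be readable only by the postgres user

      # Optionally make the rule SSL-only so a non-SSL fallback cannot confuse the test:
      #   hostssl  dbname  loginname  123.45.67.89/32  md5

      # Watch the server log while the XP client connects:
      tail -f /var/log/postgresql/postgresql-8.3-main.log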

    Read the article

  • Install RT Failed: DateTime >= 0.44 ...MISSING

    - by javano
    I am trying to install RT-4.0.5 (Request Tracker) but I keep getting the following output; $ make fixdeps <output cut> SOME DEPENDENCIES WERE MISSING. CORE missing dependencies: DateTime >= 0.44 ...MISSING make: *** [fixdeps] Error 1 The full output is here (it's quite long); http://pastebin.com/raw.php?i=Tn7GrkYw $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 8.04.4 LTS Release: 8.04 Codename: hardy $ perl --version This is perl 5, version 14, subversion 2 (v5.14.2) built for i686-linux $ cpan --version /usr/local/bin/cpan version 1.57 calling Getopt::Std::getopts (version 1.06 [paranoid]), running under Perl version 5.14.2. [Now continuing due to backward compatibility and excessive paranoia. See ``perldoc Getopt::Std'' about $Getopt::Std::STANDARD_HELP_VERSION.] Nothing to install! I can't see why this is a problem; $ cpan DateTime Going to read '/root/.cpan/Metadata' Database was generated on Thu, 08 Mar 2012 16:11:26 GMT DateTime is up to date (0.72).
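
    Given that cpan reports DateTime 0.72 installed while RT's fixdeps still calls it missing, a likely culprit is two perls on the box (the hand-built 5.14.2 under /usr/local versus the Hardy system perl). A quick way to confirm, and one possible way forward, with the paths and the PERL override as assumptions to verify against ./configure --help:

      head -1 /usr/local/bin/cpan                                   # which perl does cpan itself run?
      /usr/local/bin/perl -MDateTime -e 'print $DateTime::VERSION, "\n"'
      /usr/bin/perl       -MDateTime -e 'print $DateTime::VERSION, "\n"'

      # If they differ, point RT at the perl that actually has the modules,
      # then rerun its dependency check:
      PERL=/usr/local/bin/perl ./configure
      make testdeps
      make fixdeps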

    Read the article

  • Benchmarking MySQL on win7

    - by Patrick
    I've set up an nginx server running PHP 5.3.6 and MySQL 5.5.1.3. My computer is an AMD quad-core 9650, 4gb ram, 500gb 7200rpm HD. I ran the PHP MySQL Benchmark Tool v. 0.1 and got the following results: Testing a(n) MYISAM table using 100000 rows. Successfully created database speedtestdb Sucessfully created table speedtesttable Table Type Verified: MYISAM .. Done. 100000 inserts in 19.73628 seconds or 5067 inserts per second. Done. 100000 row reads in 0.2801 seconds or 357015 row reads per second. Done. 100000 updates in 4.03876 seconds or 24760 updates per second. I'm wondering where this stands as far as performance goes, and what steps I can take, if any, to improve on this. I'm not trying to make anything fantastic, just getting a feel for how to best optimize a web server in this configuration.
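
    As a starting point, the settings that usually move these particular numbers are the MyISAM/InnoDB buffer sizes and the flush behaviour; on a single 7200rpm disk, insert throughput is also simply disk-bound. The values below are rough guesses for a 4 GB desktop-class box, not recommendations - benchmark before and after each change:

      # my.ini / my.cnf -- illustrative values only
      [mysqld]
      key_buffer_size                = 256M   # MyISAM index cache (the test table is MyISAM)
      innodb_buffer_pool_size        = 1G     # matters once InnoDB tables are tested
      innodb_flush_log_at_trx_commit = 2      # trades ~1s of durability for write speed
      query_cache_type               = 0      # avoid cache overhead while benchmarking
      tmp_table_size                 = 64M
      max_heap_table_size            = 64M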

    Read the article

  • Corporate Wiki Organization - Technical Documentation

    - by Dave Jarvis
    Corporations have documents describing various aspects of their technical systems, including: Custom Applications Custom Development Frameworks Third Party Applications Accounting Bug Tracking Network Management How To Guides User Manuals Web Browsers Software Tools Development IDEs Graphics GIMP xv Text Editing File Transfer ncFTP WinSCP Hardware Servers Web Database Exchange File Network Devices Printers Drawings If you had to use a Wiki to manage the documentation, what other items would you add to the list, and how would you organize it? (For example, would Software Tools make more sense under Third Party Applications?) A few constraints: The structure should not go beyond three levels deep. Avoid the word "and" in favour of two different categories. Keep the structure general: it should appy as broadly as possible. Target audience is primarily technical, but could be visible by anyone.

    Read the article

  • Directory name for non-generic Proprietary stuff

    - by George Bailey
    Is there a common or standard directory name for the company-specific stuff that exists on a server? This would include any crons, scripts, webserver docroots, programs, non-database storage areas, service codebases, etc. We could of course put crons in /etc/cron.d, put docroots in /home/webservd, and scripts in one of the bin directories, but that would be messy. If XYZ Technology Corp wanted to have all the non-generic stuff in one place, would they make a directory /xyz or /home/xyz, or is there an alternative directory name that is not company-specific, but intended for company-specific stuff? What is most common?
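
    The Filesystem Hierarchy Standard already reserves a place for this: add-on, provider-specific software goes under /opt/<provider>, with /etc/opt/<provider> for its configuration and /var/opt/<provider> for its variable data. A sketch of how XYZ Technology Corp might lay it out (the subdirectory names are just examples, not part of the standard):

      /opt/xyz/bin/          # company scripts and binaries
      /opt/xyz/cron/         # cron job scripts, referenced from /etc/cron.d
      /opt/xyz/webapps/      # docroots for in-house web apps
      /opt/xyz/services/     # codebases for in-house services
      /etc/opt/xyz/          # their configuration
      /var/opt/xyz/          # non-database storage, spools, state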

    Read the article

  • AWS RDS connection count

    - by wmarbut
    I am using AWS RDS with MySQL for a project and have a "large" instance. The documentation is clear on what this means as far as compute resources and RAM goes, but I can't find anything that documents how many open database connections that I can have. The app that I am using is PHP and it utilizes PDO with persistent connections. This means that the number of open connections could reach the maximum number of PHP child processes running at any given point. How do I ensure that my RDS instance has a max connections setting high enough to be comfortable with this?
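
    On RDS the connection ceiling comes from the DB parameter group rather than a my.cnf, and by default it is derived from the instance's memory. The live value and the high-water mark can be read from any client, for example:

      -- from any client connected to the RDS endpoint
      SHOW VARIABLES LIKE 'max_connections';
      SHOW STATUS    LIKE 'Max_used_connections';   -- peak concurrent connections since restart

    If that ceiling is too close to PHP's worst case (max child processes × persistent connections per child), raise max_connections in a custom DB parameter group attached to the instance, or cap the PHP side so the two numbers can never collide.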

    Read the article

  • Can't resolve localhost on Mac OS X Server

    - by iainbeeston
    I have a server running OS X Server 10.5 and it can't resolve localhost to 127.0.0.1. When I try ping this is what happens: ping localhost ping: cannot resolve localhost: Unknown host SSH and web browsers get similar results (unknown host). If I try using 127.0.0.1 or the IP address assigned on the LAN, all of the above work. Here's the contents of my /etc/hosts file: cat /etc/hosts ## # Host Database # # localhost is used to configure the loopback interface # when the system is booting. Do not change this entry. ## 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost I have no local DNS service running. Does anyone have any idea why this might be happening or how I can fix it?
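
    On 10.5 name lookups go through Directory Services rather than reading /etc/hosts directly, so a correct hosts file can still fail if the cache or the lookup configuration is off. A few checks, sketched:

      dscacheutil -q host -a name localhost    # ask Directory Services what it actually resolves
      dscacheutil -flushcache                  # clear a possibly stale cache
      scutil --dns | head -40                  # look for a stray resolver configuration
      grep localhost /etc/hosts                # confirm the entry really is intact

    If dscacheutil returns nothing for localhost while the hosts file looks right, the problem is in the Directory Services search policy rather than the file itself.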

    Read the article

  • Re-enabling a table for MySQL replication

    - by jessieE
    We were able to setup mysql master-slave replication with the following version on both master/slave: mysqld Ver 5.5.28-29.1-log for Linux on x86_64 (Percona Server (GPL), Release 29.1) One day, we noticed that replication has stopped, we tried skipping over the entries that caused the replication errors. The errors persisted so we decided to skip replication for the 4 problematic tables. The slave has now caught up with the master except for the 4 tables. What is the best way to enable replication again for the 4 tables? This is what I have in mind but I don't know if it will work: 1) Modify slave config to enable replication again for the 4 tables 2) stop slave replication 3) for each of the 4 tables, use pt-table-sync --execute --verbose --print --sync-to-master h=localhost,D=mydb,t=mytable 4) restart slave database to reload replication configuration 5) start slave replication
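
    The outlined plan is broadly what pt-table-sync is for. The main additions, sketched below on the assumption that the four tables were excluded with replicate-ignore-table rules in the slave's my.cnf: remove those rules before the restart, resync while the slave SQL thread is stopped, and remember that replication filters in 5.5 only change on a mysqld restart.

      # 1. On the slave, delete the filter lines for the 4 tables from my.cnf, e.g.
      #      replicate-ignore-table = mydb.mytable
      # 2. Stop replication and resync each table from the master:
      mysql -e "STOP SLAVE;"
      pt-table-sync --execute --verbose --sync-to-master h=localhost,D=mydb,t=mytable
      #    (repeat for the other three tables)
      # 3. Restart mysqld so the new filter set takes effect, then:
      mysql -e "START SLAVE;"
      mysql -e "SHOW SLAVE STATUS\G"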

    Read the article

  • nTop RRD file architecture

    - by Seanny123
    I have a gig of nTop RRD files and I would like to start graphing them with rrdtool (but not with nTop, since I'm hoping to do this with a separate backup of the database as workaround to the impossibility of limiting the RRD files by size), but I don't know how the files are structured. I've tried reading the RRD documentation from SourceForge and the nTop FAQ, but I'm not finding the information I need. Does anyone know of any documentation I should be looking at or how the files are structured? Here https://dl.dropbox.com/u/669437/file%20structure.png is a screenshot of the file structure. At first I thought it was organized by IP address (so the rrd files for address 1.1.2.3 would be stored in folder 1-1-2-3 or even the reverse order), but that doesn't seem to be the case. It isn't organized by MAC address either, although some hosts are saved that way. Any help would be appreciated.
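
    Regardless of how ntop names and nests the files, each .rrd describes its own structure, and rrdtool can print the data sources and archives or dump the raw values - usually enough to work out the layout and start graphing. The file path and DS name below are illustrative; rrdtool info reports the real DS name to use.

      rrdtool info  interfaces/eth0/pktSent.rrd            # DS names, step, and the RRAs kept
      rrdtool dump  interfaces/eth0/pktSent.rrd | less     # full contents as XML
      rrdtool fetch interfaces/eth0/pktSent.rrd AVERAGE -s -1d
      rrdtool graph sent.png -s -1d \
          DEF:v=interfaces/eth0/pktSent.rrd:counter:AVERAGE \
          LINE1:v#0000ff:"pkts sent"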

    Read the article

  • Setting ulimit on Ubuntu 8.04

    - by Wizzard
    Hi there, We have two Ubuntu 8.04 servers. On the database server I set the table_cache to 1000; however, when I restart mysql the status only shows 257 and the open files limit says 1024. I adjusted ulimit by doing ulimit -n 8192 and then restarting mysql; this seemed to do the trick, however after a few hours I did ulimit -n and saw it had returned back to 1024. Bit of a worry. I edited /etc/security/limits.conf and added mysql soft nofile 8192 mysql hard nofile 8192 then rebooted, no change. I then edited and changed mysql to * rebooted, no change. I then edited and changed it to one line * - nofile 8192 and rebooted, no change. cat /proc/sys/fs/file-max gives me 768730 sysctl fs.file-max gives me fs.file-max = 768730 I am at a bit of a loss as to how I can set and keep the ulimit value set so I can increase the table cache properly on mysql.
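
    One explanation consistent with the symptoms: the mysql init script is not a PAM login session, so limits.conf never applies to mysqld, and the shell ulimit only helped while mysql happened to be restarted from that shell. The usual route is to let MySQL raise its own limit via open_files_limit, which mysqld_safe applies before starting the server. A sketch (depending on version the option may also need to appear as open-files-limit under [mysqld_safe]):

      # /etc/mysql/my.cnf
      [mysqld]
      table_cache      = 1000
      open_files_limit = 8192

      # then
      /etc/init.d/mysql restart
      mysql -e "SHOW VARIABLES LIKE 'open_files_limit'; SHOW VARIABLES LIKE 'table_cache';"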

    Read the article

  • Issue with resetting auto increment from default to big number

    - by Sai Srikanth
    I have a MySQL table named INVOICES for an inventory monitoring site; invoice_number is a bigint(19) AUTO_INCREMENT field. Currently the AUTO_INCREMENT value is 1. The client wants the invoice_number to start from 50000, so I reset it with the following script: ALTER TABLE INVOICES AUTO_INCREMENT = 50000; When I write an insert script to insert data in SQLDBX, it assigns invoice_number values starting from 50000. But when I try to insert a record through the application (a web application), the invoice_number value starts from 1. We are making use of the Spring-JDBC template to insert data into the MySQL database.
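
    Worth verifying first, since SQLDBX and the web application behave differently, is that both are really hitting the same schema and that the ALTER stuck there: a counter "starting from 1" through the app often means the Spring datasource points at another copy of the table (a dev database, for instance) or the insert supplies an explicit invoice_number. A quick check, run through the application's own datasource rather than SQLDBX:

      SELECT DATABASE();
      SELECT TABLE_SCHEMA, AUTO_INCREMENT
      FROM   information_schema.TABLES
      WHERE  TABLE_NAME = 'INVOICES';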

    Read the article

  • Transition domain to new web host without waiting for DNS propagation

    - by jcmoney
    I was considering switching to Amazon EC2 to host my website to handle more traffic. It seems like I would have to update DNS records to point to the new server but I was wondering if there was a way to avoid having to wait for the new DNS record to propagate. Putting the code on both hosts would not work for me since the app writes to a database pretty frequently. I thought about just using a meta redirect or php redirect on the old host to redirect to the new host ip but was wondering if there's a better more accepted way of doing this.
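
    A common pattern is to lower the DNS record's TTL well ahead of the move, cut over, and then have the old host forward any stragglers to the new IP at the network layer, so late resolvers still reach the single live copy of the database-backed app. A sketch of the forwarding piece on the old Linux host (the new IP is a placeholder):

      # on the old host, after the app there has been switched off
      echo 1 > /proc/sys/net/ipv4/ip_forward
      iptables -t nat -A PREROUTING  -p tcp --dport 80 -j DNAT --to-destination 203.0.113.10:80
      iptables -t nat -A POSTROUTING -p tcp -d 203.0.113.10 --dport 80 -j MASQUERADE

    This avoids the split-brain problem of running the app on both hosts while DNS propagates; a meta/PHP redirect also works but exposes the new IP-based URL to users and breaks POSTs in flight.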

    Read the article

  • Windows server 2008 issue

    - by Matt Fitz
    We have 2 domains, “pdc1” and “devkc”; both are Windows 2000 Active Directory domains with a 2-way trust relationship in place, and it has been this way for years. All of our developer machines are joined to the “devkc” domain but the users log into their accounts on the “pdc1” domain. This all works fine with Windows XP, 2000 and 2003 Server. However, with Windows Server 2008 the users can only log into the “devkc” domain that the machine is joined to; they can not log into the “pdc1” domain. The following error results: "The security database on this server does not have a computer account for this workstation trust relationship” Any ideas would be greatly appreciated. Thanks Matt Fitz

    Read the article

  • Intermittent Connection Issues to SQL Server 2008 R2 RTM

    - by Chandan Jha
    The problem I am facing is a very complex one and inspite of trying to gather a root cause of the problem, I am standing at the same place after 2 months with just bits and pieces of information.Here is a scenario: There is a windows 2003 server which uses an system DSN ODBC connection. I looked into the driver properties and it is as follows: Name Version File SQL Server 2000.86.3959.00 SQLSRV32.DLL Now, this system DSN has been given configured with TCP\IP in Network Libraries and 'determine port dynamically' is checked. Now, lets come to the database destination. It is hosted on Windows 2008 having SQL 2008 R2 RTM version 64-bit. Now, I will give you a an overview about the events that happen and whatever troubleshooting I could perform: I get an email saying 'blah blah' failed and the only message their application gets is 'cannot connect to database' I go the SQL Server logs and find the following information: Login failed for user ''. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: 10.0.0.xx Error: 18456, Severity: 14, State: 58.] A quick search shows that this error may come when an SQL Server is configured with windows authentication but its not true. We have mixed mode and connection issue is intermittent. This SQL Server is configured to run on a local system account but since we use only SQL Server accounts to connect to this, there should not be any Kerberos errors. When I run a profiler trace and see only 'existing connections', i see a lot of them coming from my client server displaying the sql user but NO hostname is shown. Textdata field shows TCP\IP information along with some arithabort and ansi-null settings. Now, I tried looking into ring connectivity buffer by using following: SELECTCAST(record AS XML) FROM sys.dm_os_ring_buffers WHERE ring_buffer_type = 'RING_BUFFER_CONNECTIVITY' One sample output is: <ConnectivityTraceRecord> <RecordType>Error</RecordType> <RecordSource>Tds</RecordSource> <Spid>118</Spid> <SniConnectionId>5124905D-D1EC-460E-AD78-201050B78C67</SniConnectionId> <OSError>0</OSError> <SniConsumerError>18452</SniConsumerError> <SniProvider>7</SniProvider> <State>1</State> <RemoteHost>10.0.0.21</RemoteHost> <RemotePort>5008</RemotePort> <LocalHost>10.1.0.38</LocalHost> <LocalPort>1433</LocalPort> <RecordTime>6/6/2012 21:14:57.527</RecordTime> <TdsBuffersInformation> <TdsInputBufferError>0</TdsInputBufferError> <TdsOutputBufferError>0</TdsOutputBufferError> <TdsInputBufferBytes>120</TdsInputBufferBytes> </TdsBuffersInformation> <TdsDisconnectFlags> <PhysicalConnectionIsKilled>0</PhysicalConnectionIsKilled> <DisconnectDueToReadError>0</DisconnectDueToReadError> <NetworkErrorFoundInInputStream>0</NetworkErrorFoundInInputStream> <ErrorFoundBeforeLogin>0</ErrorFoundBeforeLogin> <SessionIsKilled>0</SessionIsKilled> <NormalDisconnect>0</NormalDisconnect> </TdsDisconnectFlags> </ConnectivityTraceRecord> <Stack> <frame id="0">0X000000000174C34B</frame> <frame id="1">0X0000000001748FDD</frame> <frame id="2">0X0000000002461001</frame> <frame id="3">0X0000000000C47E98</frame> <frame id="4">0X00000000008015AD</frame> <frame id="5">0X0000000000801492</frame> <frame id="6">0X00000000003CBBD8</frame> <frame id="7">0X00000000003CB8BA</frame> <frame id="8">0X00000000003CB6FF</frame> <frame id="9">0X00000000008E8FB6</frame> <frame id="10">0X00000000008E9175</frame> <frame id="11">0X00000000008E9839</frame> <frame id="12">0X00000000008E9502</frame> <frame id="13">0X0000000074E437D7</frame> <frame 
id="14">0X0000000074E43894</frame> <frame id="15">0X00000000775A652D</frame> Somehow all the errors show error number 18452 whereas I never found this error in my SQL logs where I see only 18456. I am just stuck on a dead end because this connection issue appears intermittently. Sorry for a long question but I hope if you read this, you can make out that I tried a lot at my end before giving up.

    Read the article

  • MapPoint 2013 randomly crashes on import

    - by ErocM
    We are sending routes to MapPoint 2013 from our application using an Access database. It also seems to happen with MapPoint 2010 and 2011. It doesn't happen on all of our clients, and it happens randomly on those where it does. This is the message: Problem signature: Problem Event Name: BEX Application Name: MapPoint.exe Application Version: 19.0.18.1100 Application Timestamp: 4fd664bb Fault Module Name: StackHash_94b0 Fault Module Version: 0.0.0.0 Fault Module Timestamp: 00000000 Exception Offset: 7f82c94f Exception Code: c0000005 Exception Data: 00000008 OS Version: 6.0.6002.2.2.0.18.10 Locale ID: 1033 Additional Information 1: 94b0 Additional Information 2: 30950b6006304277980cdff17dfbd104 Additional Information 3: 098a Additional Information 4: 31c80150ac0b74b2dcb7884aa8fa1dac Does anyone know where I'd find out more information on this or how to resolve it? If this is not the correct exchange, please point me to the right one and I'll delete and repost it. Thanks!

    Read the article

  • General purpose ticketing/tech support system [closed]

    - by crazybyte
    Possible Duplicate: What’s your favorite ticketing system? I was wondering if somebody could recommend a very user-friendly or simple general-purpose ticketing/tech support system. I need something that is web based, preferably open-source/free software, implemented using PHP, Ruby, Ruby on Rails or Java (as back end) with MySQL or PostgreSQL as the database engine. I need something that is not development-management or project-management oriented like Eventum or similar (random example) - something to which the user can connect, open a tech support request, and follow it until it is solved or dropped. I need it to be open source so I can modify or extend it if there is a need. I tried a number of such systems and found that osTicket or eTicket is close to what I need, but the code is somewhat flaky and some of the features work badly or behave strangely. Any thoughts/advice on where to find something similar? Thanks!

    Read the article
