Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • How does SELinux affect the /home directory?

    - by Matt Solnit
    Hi everyone. I'm migrating a CentOS 5.3 system from MySQL to PostgreSQL. The way our machine is set up, the biggest disk partition is mounted on /home. This is out of my control and is managed by the hosting provider. We obviously want the database files to be on /home for this reason. With MySQL, we did the following:

        1. Edited my.cnf and changed the datadir setting to /home/mysql
        2. Added a new "File type" policy record (I hope I'm using the right terminology) to set /home/mysql(/.*)? to mysqld_db_t
        3. Ran restorecon -R /home/mysql to assign the labels

    and everything was good. With PostgreSQL, however, I did the following:

        1. Edited /etc/init.d/postgresql and changed the PGDATA and PGLOG variables to /home/pgsql/data and /home/pgsql/pgstartup.log, respectively
        2. Added a new policy record to set /home/pgsql/pgstartup.log to postgresql_log_t
        3. Added a new policy record to set /home/pgsql/data(/.*)? to postgresql_db_t
        4. Ran restorecon -R /home/pgsql to assign the labels

    At this point, I still cannot start PostgreSQL. pgstartup.log says:

        # cat pgstartup.log
        postmaster cannot access the server configuration file "/home/pgsql/data/postgresql.conf": Permission denied

    The weird thing is that I don't see any messages related to this in /var/log/messages or /var/log/secure, but if I turn off SELinux, then everything works. I made sure all the permissions are correct (600 for files and 700 for directories), as well as the ownership (postgres:postgres). Can anyone tell me what I am doing wrong? I'm using the Yum repository from commandprompt.com, version 8.3.7. EDIT: The reason my question specifically mentions the /home directory is that if I go through all these steps for any other directory, e.g. /var/lib/pgsql2 or /usr/local/pgsql, then it works as expected.
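
    For reference, a minimal command-line sketch of the labelling steps described above, using semanage with the paths and types from the question (whether this alone resolves the /home-specific failure is not confirmed here). It also checks the labels on the parent directories, since /home itself carries a different default context than /var/lib or /usr/local:

        # assumes the policycoreutils tools are installed
        semanage fcontext -a -t postgresql_db_t "/home/pgsql/data(/.*)?"
        semanage fcontext -a -t postgresql_log_t "/home/pgsql/pgstartup.log"
        restorecon -Rv /home/pgsql

        # compare labels on the data directory and on every parent the postmaster must traverse
        ls -ldZ /home /home/pgsql /home/pgsql/data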


  • SQL 2005 AD Group permission levels

    - by jj.
    I'm trying to give permissions to a (SQL 2005) database app based on AD groups. The general idea is to require a user to have membership in "app_users" to view anything, while membership in other groups gives them write access to the corresponding module: "app_customers" gives write access to the customers module, "app_sales" to sales, etc. I've listed an example below:

        user1: AD member of app_users
        user2: AD member of app_users, app_customers

    For the dbo.customers table:

        app_users
          - Granted: Select
          - Denied: Insert, Update, Delete
        app_customers
          - Granted: Select
          - Granted: Insert, Update, Delete

    I would expect user1 to be able to view the dbo.customers table but not modify anything (insert/update/delete) - which works. In the same vein, I would expect user2 to be able to view AND modify the dbo.customers table, since they are a member of app_customers. However, this is not the case. Instead, user2 is denied any modifications, just like user1. I seem to remember something about deny permissions winning if there is a conflict, but it's honestly been too long since I've dealt with them. Am I going about this the right way? Thanks for your time!
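
    To illustrate the "deny wins" behaviour described here: in SQL Server an explicit DENY on any group a login belongs to overrides a GRANT from another group, so the usual fix is to simply not grant INSERT/UPDATE/DELETE to app_users rather than deny it. A sketch of the equivalent T-SQL run through sqlcmd (server, database, and domain names are placeholders):

        sqlcmd -S SERVER\INSTANCE -E -d AppDb -Q "GRANT SELECT ON dbo.customers TO [MYDOMAIN\app_users];"
        sqlcmd -S SERVER\INSTANCE -E -d AppDb -Q "GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.customers TO [MYDOMAIN\app_customers];"

        REM remove the explicit deny that blocks members of app_customers
        sqlcmd -S SERVER\INSTANCE -E -d AppDb -Q "REVOKE INSERT, UPDATE, DELETE ON dbo.customers FROM [MYDOMAIN\app_users];"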


  • Exchange 2010: Replication Service Still Trying to Replicate Deleted Mailbox Store

    - by ThaKidd
    In advance, thank you for your opinions! I just migrated from Server/Exchange 2003 to Server 2008 R2 running Exchange 2010. I had an extra mailbox store that appeared with some system mailboxes in it. I used the EMS to move those mailboxes over and then deleted the store from the EMC. Since then, every so often I get an error in Event Viewer:

        Source: MSExchangeRepl
        ID: 4098
        Error: The Microsoft Exchange Replication service couldn't find a valid configuration for
               database '5f012f40-3bad-4003-a373-dbc0ffb6736f' on server 'EXCHSERVER'.
        Error: (nothing after this)

    I can confirm that the above GUID is the mailbox store that I deleted. No other Exchange errors occur. How can I tell Exchange Replication to ignore this store? Setup: one Exchange server, 2003 transitioned over to 2010, no other Exchange servers. Is there a way to fix this? Do I need to change a setting to stop replication? I plan to add a second Exchange server in the next few days, so stopping replication would be a bad thing. Thanks again in advance. Jason


  • Mediawiki extension error

    - by vinylguitar
    I'm running the latest version of MediaWiki using MoWeS Portable II from my desktop. I just installed this extension on the wiki: http://www.mediawiki.org/wiki/Extension:MsUpload It adds an option to upload files (to be embedded in an article) to the edit screen of an article. After installing it, when I try to edit an article I get the following error:

        Fatal error: Call to undefined method OutputPage::addModules() in C:\Users\User\Desktop\knowledge mapedia 10 25 13 copy\mowes_portable\www\mediawiki\extensions\MsUpload\msupload.php on line 65

    Also, here is what I put in the LocalSettings.php file (at the end of LocalSettings.php, if it makes a difference):

        Start --------------------------------------- MsUpload
        $wgMSU_ShowAutoKat = false;        #autocategorisation
        $wgMSU_CheckedAutoKat = false;     #checkbox for autocategorisation checked
        $wgMSU_debug = false;              #debug mode
        $wgMSU_ImgParams = '400px';        #default max-size for inserted image
        $wgMSU_UseDragDrop = true;         #show drag&drop area
        require_once "$IP/extensions/MsUpload/msupload.php";
        End --------------------------------------- MsUpload

        require_once "$IP/extensions/msupload/msupload.php";

    At line 65 in the LocalSettings.php file there is the following:

        line 64    ## Database settings
        line 65    $wgDBtype = "mysql";
        line 66    $wgDBserver = "localhost";
        line 67    $wgDBname = "mediawiki";
        line 68    $wgDBuser = "root";
        line 69    $wgDBpassword = "";

    Any idea what I'm doing wrong?


  • SAN performance issues storing SQL Server tempdb on a SAN that's being backed up

    - by user42724
    I'm afraid I don't know much about SANs, so please forgive my lack of detail or technical terms. As a developer I've just completed and deployed a new application onto an existing production system, but it appears to have tipped the scales regarding the performance of the backups being taken from the SAN. As I understand it, a mirror of the SAN is being taken more or less constantly, at the block level. However, there now seem to be so many new writes to disk that the SAN mirroring/backup process can no longer keep up. I believe I've narrowed this down to SQL Server's tempdb, which lives on a drive that contributes the largest portion of the problem! In fact, I think tempdb has been contributing the largest portion of the issue all along, regardless of my application.

    My question, therefore, is whether tempdb should ever be mirrored or backed up on the SAN, and whether anyone else has gone through this sort of pain already. I'm wondering whether it's best practice to make sure that tempdb is never mirrored on a SAN, simply because any writes to it don't need to be saved. This also raises a slightly connected question: is it better to rely on SQL Server's built-in database backup tools (database in full recovery mode with full/differential and transaction log backups), or, as is the case with our application, to leave SQL Server in simple recovery mode and never back it up, since the SAN is mirrored and backed up? Many thanks
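
    If the answer turns out to be "keep tempdb off the mirrored volume", the standard way to relocate it is sketched below (the logical file names tempdev and templog are the SQL Server defaults; the T:\ target drive is an assumption). The new location only takes effect after the SQL Server service restarts, and tempdb is recreated empty on every restart, so nothing needs to be copied:

        sqlcmd -S PRODSERVER -E -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');"
        sqlcmd -S PRODSERVER -E -Q "ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');"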


  • MAMP - Host name changes to first vhost SSL entry for project with two localhosts

    - by user1322092
    I have two projects that are copies of each other on my Mac with MAMP. They both have SSL pages. However, whenever I hit a secured SSL page of project 2, the base_url or host changes to project1 instead of remaining project2. I know this is an issue with the vhosts, because if I switch the order of the entries, the reverse happens. Here are my config files:

    /Applications/MAMP/conf/extra/httpd-ssl.conf:

        <VirtualHost _default_:443>
            DocumentRoot "/Applications/MAMP/htdocs/proj1"
            ServerName proj1.localhost:443
            ErrorLog "/Applications/MAMP/Library/logs/error_log"
            TransferLog "/Applications/MAMP/Library/logs/access_log"
            SSLEngine on
            SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
            SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
        </VirtualHost>

        <VirtualHost _default_:443>
            DocumentRoot "/Applications/MAMP/htdocs/proj2"
            ServerName proj2.localhost:443
            ErrorLog "/Applications/MAMP/Library/logs/error_log"
            TransferLog "/Applications/MAMP/Library/logs/access_log"
            SSLEngine on
            SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
            SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
        </VirtualHost>

    cat /etc/hosts:

        ##
        # Host Database
        #
        # localhost is used to configure the loopback interface
        # when the system is booting. Do not change this entry.
        ##
        127.0.0.1        localhost
        255.255.255.255  broadcasthost
        ::1              localhost
        fe80::1%lo0      localhost
        127.0.0.1        proj1.localhost
        127.0.0.1        proj2.localhost
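
    A sketch of the change usually suggested for this symptom, assuming the Apache/OpenSSL bundled with this MAMP build supports SNI (Apache 2.2.12 or later built against an OpenSSL with TLS extensions): declare the port 443 vhosts as name-based rather than _default_, so requests are routed by ServerName instead of always landing in the first vhost. Certificate paths are the ones from the question:

        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName proj1.localhost
            DocumentRoot "/Applications/MAMP/htdocs/proj1"
            SSLEngine on
            SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
            SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
        </VirtualHost>

        <VirtualHost *:443>
            ServerName proj2.localhost
            DocumentRoot "/Applications/MAMP/htdocs/proj2"
            SSLEngine on
            SSLCertificateFile "/Applications/MAMP/conf/apache/ssl/server.crt"
            SSLCertificateKeyFile "/Applications/MAMP/conf/apache/ssl/server.key"
        </VirtualHost>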


  • Performance of file operations on thousands of files on NTFS vs HFS, ext3, others

    - by peterjmag
    [Crossposted from my Ask HN post. Feel free to close it if the question's too broad for superuser.] This is something I've been curious about for years, but I've never found any good discussions on the topic. Of course, my Google-fu might just be failing me...

    I often deal with projects involving thousands of relatively small files. This means that I'm frequently performing operations on all of those files or a large subset of them—copying the project folder elsewhere, deleting a bunch of temporary files, etc. Of all the machines I've worked on over the years, I've noticed that NTFS handles these tasks consistently slower than HFS on a Mac or ext3/ext4 on a Linux box. However, as far as I can tell, the raw throughput isn't actually slower on NTFS (at least not significantly), but the delay between each individual file is just a tiny bit longer. That little delay really adds up for thousands of files. (Side note: From what I've read, this is one of the reasons git is such a pain on Windows, since it relies so heavily on the file system for its object database.)

    Granted, my evidence is merely anecdotal—I don't currently have any real performance numbers, but it's something that I'd love to test further (perhaps with a Mac dual-booting into Windows). Still, my geekiness insists that someone out there already has. Can anyone explain this, or perhaps point me in the right direction to research it further myself?
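
    A rough way to turn the anecdote into numbers: a sketch that creates 10,000 small files and then times a copy and a delete. Run it from Git Bash or Cygwin on the NTFS side and from a normal shell on the HFS/ext3/ext4 side, on otherwise idle disks, so the comparison is apples to apples:

        mkdir srcdir
        for i in $(seq 1 10000); do
            head -c 1024 /dev/urandom > "srcdir/file_$i.dat"
        done

        time cp -r srcdir copydir      # many-small-file copy
        time rm -rf copydir srcdir     # many-small-file delete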


  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1GB. After putting data in the database, I find that the JVM's process is showing 50GB in SIZE in top output. It appears that this is actually causing performance problems on the system; other processes are having trouble allocating memory. In asking the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that didn't seem to be happening. So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, to not use so much RAM. System is Solaris 10 x86 with 72GB RAM. JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".
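
    One way to see what that 50GB address space is actually made of (a sketch for Solaris 10; the pid is a placeholder): pmap breaks the process down by mapping, so anonymous memory (the Java heap) is listed separately from memory-mapped files, which is where an "it's really the filesystem cache / mapped index files" explanation would show up:

        # replace 1234 with the ElasticSearch JVM's pid
        pmap -x 1234 | head -20      # per-mapping breakdown: anon vs. mapped files
        pmap -x 1234 | tail -1       # totals: Kbytes / RSS / Anon / Locked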


  • How do I host multiple independent, secured SharePoint sites (WSS 3.0) without using Active Directory?

    - by Kyle Noland
    I have a SharePoint site set up on one of my networks to service Active Directory users. To be clear, this is a Windows SharePoint Services 3.0 installation running on Windows Server 2003 Standard. It is not an option to upgrade the server or the SharePoint version. Management would like to create several new sites, one for each of a handful of clients. These sites will be used like "dropboxes" or FTP sites so that my company can make large files available to outside contacts, and vice versa. Here are my requirements:

        - I do not want to have to create Active Directory accounts for each external contact.
        - If possible, I would like to store the external usernames and passwords in a database that I can write a small GUI for, so that management can handle adding their own external contacts.
        - Each client site must be sandboxed from the others and from my main company SharePoint site.
        - I would like to keep everything running on port 80 and be able to access the sites as either clientname.mycompany.com or www.mycompany.com/clientname.

    If anybody has ever done this, I would really appreciate hearing about any lessons you learned and suggestions for how to set this up. Kyle
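
    One common approach on WSS 3.0 for requirements like these is forms-based authentication backed by the ASP.NET SQL membership provider, so the external accounts live in a SQL database rather than in AD. As a sketch of the first step only (creating the membership database with the aspnet_regsql tool that ships with .NET 2.0; the server name and exact framework path are assumptions, and the web.config provider entries plus the SharePoint authentication-provider change still have to be configured separately):

        %WINDIR%\Microsoft.NET\Framework\v2.0.50727\aspnet_regsql.exe -S SQLSERVER\INSTANCE -E -A mr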


  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with:

        oradim -edit -sid ORCL -startmode auto

    However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says:

        Thu Jun 10 14:14:48 2010
        C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0
        Thu Jun 10 14:14:48 2010
        ORA-12560: TNS:protocol adapter error

    sqlnet.log in the same folder has:

        Fatal NI connect error 12560, connecting to:
        (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM))))

        VERSION INFORMATION:
        TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
        Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production
        Time: 10-JUN-2010 14:14:48
        Tracing not turned on.
        Tns error struct:
          ns main err code: 12560
          TNS-12560: TNS:protocol adapter error
          ns secondary err code: 0
          nt main err code: 530
          TNS-00530: Protocol adapter error
          nt secondary err code: 2
          nt OS err code: 0

    The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting - and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing, which shows:

        [10-JUN-2010 15:09:33.919] snlpcss: entry
        [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2.
        [10-JUN-2010 15:09:34.419] snlpcall: exit

    On a hunch that error 2 is Windows error 2 (file not found), I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run oradim -startup -sid ORCL when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.


  • mysql_tzinfo_to_sql missing on my system

    - by Sk1ppeR
    I ran into a problem with time zones within MySQL. Long story short, my application is worldwide, and each database has its own time zone set within the application (not the server) in the form "Europe/Berlin", "Europe/Vienna", "America/Sao Paulo". Obviously this is unacceptable for MySQL at first per connection. I read that it handles data better if you use UTC offsets. Basically my goal is to log a field's alteration in another table using a trigger. For that I use UNIX_TIMESTAMP within the trigger. However, UNIX_TIMESTAMP() follows the global timezone for the server, which obviously bothers me a lot :|

    So I went looking for a "per connection" solution to use inside the trigger, and I found that mysql_tzinfo_to_sql can actually import zone info (UTC offsets) from my Linux zoneinfo files. To my amusement, when I ran the command I got the following:

        bash: mysql_tzinfo_to_sql: command not found

    So I'm looking for a solution to fix that. I don't want to "map" the timezone names to UTC offsets just so I could use them in the trigger. Is there an alternative tool, or at least sources for this one in particular? What kind of queries does this tool generate, so that I could do it manually if there is no alternative tool? Thanks in advance for any help on the issue!

    P.S.: The OS is Debian GNU/Linux 6.0 and the MySQL server is the one from aptitude, with performance tweaks in my.cnf.
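
    For reference, a sketch of how the tool is normally used once it is located (it ships with the MySQL packages, so checking which installed package owns it is worth a try first). It generates INSERT statements for the mysql.time_zone* tables from the system zoneinfo files, after which named time zones such as 'Europe/Berlin' can be used per connection or in CONVERT_TZ():

        # find out which installed package provides the binary, and where it lives
        dpkg -S mysql_tzinfo_to_sql

        # load the system zoneinfo database into the mysql schema
        mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql

        # afterwards, a session (or the logic feeding a trigger) can work with named zones
        mysql -u root -p -e "SET time_zone = 'Europe/Berlin'; SELECT NOW();"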


  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with:

        javax.naming.NameNotFoundException: QueueConnectionFactory not bound

    I can look at JNDIView from the jmx-console and see that indeed the QueueConnectionFactory class only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's?

    The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysql-jdbc2-service.xml file into its place (editing that file to use DefaultDS instead of MySqlDS). Then I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database. Finally, I created a -services.xml file in the deploy directory with the queue definition, like the one below:

        <server>
          <mbean code="org.jboss.mq.server.jmx.Queue"
                 name="jboss.mq.destination:service=Queue,name=myfirstqueue">
            <depends optional-attribute-name="DestinationManager">
              jboss.mq:service=DestinationManager
            </depends>
          </mbean>
        </server>

    All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area. Is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment?) Thanks
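
    Regarding "is there different code to do a JNDI lookup in a clustered environment": JBoss 4.x exposes a federated view of all nodes' bindings through HA-JNDI, which in the "all" configuration listens on port 1100 by default. A sketch of the client-side jndi.properties that looks up through HA-JNDI instead of the local JNDI of whichever node the code happens to run on (hostnames are placeholders; whether the ha-singleton JBossMQ bindings show up there for this particular setup isn't something confirmed here):

        java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
        java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
        java.naming.provider.url=node1:1100,node2:1100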


  • How to disable all bounce back email in exim 4.69

    - by liame
    I have set up an email server to send out solicited newsletters. There should be no "regular" users of this server, so it is not desirable to send bounce notifications back to the sender, especially since I am tracking bounces myself by parsing the log files periodically. What I want is to unconditionally prevent exim from ever sending a bounce notification email back to a sender. How can I do this? Thank you! (I accidentally posted this to Super User before posting it here; disregard that if you come across it.)

    In other words, I want an email server that will accept all incoming email, deliver it accordingly (that is, remotely or locally), and not send a bounce notification to the sender when a message bounces. I log bounces myself, in a database. The only function bounce messages have in my setup is to waste resources and bandwidth. I need to send emails fast; using exiwhat during a run, I see a significant number of deliveries to [email protected]. I could potentially increase my email throughput by 10~20% if all bounce emails were eliminated.


  • What can be the causes of an HTTP server crash?

    - by mithunmo
    Hello, I am using WAMP server on Windows XP:

        Apache 2.2.11
        MySQL 5.1.36 (InnoDB engine)
        PHP 5.3.0

    I observe that my WAMP server crashes in the following scenarios:

        1. If I use a low-end PC (low processor speed and low RAM).
        2. After making some changes to the httpd.conf file, e.g. changing the "Allow from" IP address. But here it crashes only once and then it starts to work fine.
        3. Random crashes.

    Crash log:

        szAppName : httpd.exe    szAppVer : 2.2.11.0    szModName : php5ts.dll
        szModVer : 5.3.0.0       offset : 0000c309
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\httpd.exe.mdmp
        C:\DOCUME~1\blrcom\LOCALS~1\Temp\WERc677.dir00\appcompat.txt

    My questions:

        - Can high CPU utilization or low RAM also cause the HTTP server to crash?
        - Excessive file reading, as in every 10 seconds?
        - Unlimited script execution time? I have set the maximum execution time in the PHP script to 0, as my script sometimes has to execute for 2-3 days. Is there any way to avoid this?
        - Access to the database? Should we take a lock before reading and writing?

    Can these be the reasons for random WAMP server crashes, or is it some other programming error? Please guide me. Regards, Mithun


  • Tools to (privately) annotate/markup a website for maintenance

    - by rob
    I've been tasked with updating a website. Rather than proofreading and updating each page (one at a time), I want to make a single pass over the entire website, marking graphics/images/videos that need to be rewritten, removed, or updated. I thought about taking screenshots, marking those up, and putting them in our bug-tracking database, but that seems like an extremely tedious solution. Some of the content is similar on various pages across the website, and the entire site itself is localized into several languages (so any changes made to the English version will have corresponding changes for other languages). I also want all of my markup to remain private (that is, if it's stored online somewhere, I should be the only person who can see my comments). I found an article that lists several website annotation services, but it's not clear whether they allow private annotations, or whether these tools are even appropriate for website maintenance (many of them look more geared toward social networking). I've started making a list of some necessary and desired features below, and may add more as necessary:

        - Annotations/markup/comments remain private (only visible to me)
        - Comment history/tagging (so I can reuse the same comment for shared footers, items requiring similar updates, etc.)
        - Ability to print/export a list or report of all comments for the entire website
        - Ability to produce a categorized list of changes (e.g., to produce a list of images that need updating, which I can send to the graphic designer)

    What processes and tools do you use to keep track of all the changes that need to be made to a website? What features are painfully absent from the tools you use?


  • How to verify PostgreSQL 9 has been installed correctly on a CentOS server?

    - by A4J
    I'm trying to install the pg (postgres) gem on a CentOS server, but it keeps saying PostgreSQL is too old, even though I have upgraded it to 9.1.3 (as per the instructions here: http://www.davidghedini.com/pg/entry/install_postgresql_9_on_centos). I am using CentOS 5.8 (and Ruby 1.9.3). Here is the error message:

        Building native extensions.  This could take a while...
        ERROR:  Error installing pg:
                ERROR: Failed to build gem native extension.

        /usr/local/bin/ruby extconf.rb
        checking for pg_config... yes
        Using config values from /usr/bin/pg_config
        checking for libpq-fe.h... yes
        checking for libpq/libpq-fs.h... yes
        checking for pg_config_manual.h... yes
        checking for PQconnectdb() in -lpq... yes
        checking for PQconnectionUsedPassword()... no
        Your PostgreSQL is too old. Either install an older version of this gem or upgrade your database.
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers.
        Check the mkmf.log file for more details.  You may need configuration options.

    psql --version confirms my version:

        psql (PostgreSQL) 9.1.3

    I can confirm the packages installed:

        Setting up Install Process
        Package postgresql91-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-devel-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-server-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-libs-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Package postgresql91-contrib-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
        Nothing to do

    Any ideas on how to troubleshoot this? Thanks in advance.
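
    The build log above shows the gem picking up /usr/bin/pg_config, which on CentOS 5 most likely belongs to the stock 8.x client rather than the PGDG 9.1 packages. A sketch of pointing the build at the 9.1 toolchain instead (the /usr/pgsql-9.1 path is where the PGDG postgresql91 packages normally install; verify it on the box first):

        /usr/pgsql-9.1/bin/pg_config --version     # should report 9.1.3
        gem install pg -- --with-pg-config=/usr/pgsql-9.1/bin/pg_config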


  • Can I change the user id of a user on one Linux server to match another server in /etc/passwd?

    - by user76177
    I have a Rails application that is on a virtual machine (RHEL 6), and its database is on dedicated hardware (also RHEL 6). The app server has an NFS directory from the db server mounted and accessible. It needs to write images to that server that are uploaded via the app. Background processes on the db server need to read and write to the same directory, as they perform resizing operations on the uploaded files. Right now none of this is working, because the user ids are different between the two systems. I only need this to work for this one application, so it is way too much overhead to put an LDAP system in place. Can I simply change the user id of this one user on one of the systems, or will that cause mass chaos?

    UPDATE: The fix worked, at least on local devices. Unfortunately, the device I have mounted to the main db server still thinks my user id is 502 instead of 506. Do I need to remount that device, or is there an NFS daemon I can stop and restart to refresh it?
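
    For reference, a sketch of the uid change being discussed (the user name and mount point are placeholders; the 502/506 values are the ones from the update above). usermod only renumbers the account, so files still owned by the old uid have to be re-chowned, and an already-mounted NFS export may keep showing cached attributes until it is remounted:

        # on the machine whose uid needs to change, as root
        usermod -u 506 railsapp
        find / -xdev -user 502 -exec chown -h railsapp {} \;

        # then refresh the NFS mount so the client sees the new ownership
        umount /path/to/nfs/mount && mount /path/to/nfs/mount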


  • Postgresql server will not start

    - by Claudiu
    I'm on Windows 7. I restarted my computer, then tried to connect to the database and got an error. I don't remember which one in particular, but it was some connection issue. I decided to try to restart the server, so I clicked on "Restart server" from the Start menu. This blocked. After a few minutes I killed the process and tried again, only to get a "The service is starting or stopping. Please try again later." message. I rebooted the computer again, tried to start again, and got the same error. I killed the pg_ctl process and tried starting it manually, but that didn't work either:

        C:\Users\DrClaud>cscript "C:\Program Files\PostgreSQL\8.3\scripts\serverctl.vbs" start wait
        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        The PostgreSQL Server 8.3 service is starting..........................................................................
        The PostgreSQL Server 8.3 service could not be started.

        The service did not report an error.

        More help is available by typing NET HELPMSG 3534.

        The start command returned an error (2)

    Any ideas?
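
    When the service wrapper only says "did not report an error", the postmaster's own output usually says more. A sketch of starting the cluster by hand to surface it (the paths are the 8.3 installer defaults and may differ on this machine; run from an account that can read the data directory):

        "C:\Program Files\PostgreSQL\8.3\bin\pg_ctl.exe" start -w -D "C:\Program Files\PostgreSQL\8.3\data"

        REM recent server log files, if the logging collector is enabled
        dir "C:\Program Files\PostgreSQL\8.3\data\pg_log"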


  • How to enable telnet to port 3306 during master-to-master replication on MySQL Server

    - by Mainio
    I am trying to do master-to-master replication on Windows Server 2008. I am successfully able to replicate all the databases from Master 1 to Master 2, but I am unable to replicate the changes made on Master 2 to Master 1. Later on I found that I can telnet from Master 2 to Master 1 on port 3306, but I am not able to telnet from Master 1 to Master 2. When I check netstat on both masters, I get the following results (I can't publish the public IPs, so I've written "Master 1" and "Master 2" for their respective addresses):

    Master 1:

        C:\Users\XXXXX>netstat

        Active Connections

          Proto  Local Address          Foreign Address        State
          TCP    Master 1:3306          Master 2:61566         ESTABLISHED
          TCP    Master 1:3389          My remote:56053        ESTABLISHED
          TCP    127.0.0.1:3306         Master 1:60675         ESTABLISHED
          TCP    127.0.0.1:3306         Master 1:60712         ESTABLISHED
          TCP    127.0.0.1:60675        Master 1:3306          ESTABLISHED
          TCP    127.0.0.1:60712        Master 1:3306          ESTABLISHED

    Master 2:

        C:\Users\XXXX>netstat

        Active Connections

          Proto  Local Address          Foreign Address        State
          TCP    Master 2:3389          My remote:56124        ESTABLISHED
          TCP    Master 2:61566         Master 1:3306          ESTABLISHED
          TCP    Master 2:61574         bil-sc-cm02:http       ESTABLISHED
          TCP    127.0.0.1:3306         Master 2:61562         ESTABLISHED
          TCP    127.0.0.1:3306         Master 2:61563         ESTABLISHED
          TCP    127.0.0.1:61562        Master 2:3306          ESTABLISHED
          TCP    127.0.0.1:61563        Master 2:3306          ESTABLISHED
          TCP    127.0.0.1:61573        Master 2:3306          TIME_WAIT

    All of this suggests that on Master 2, port 3306 is not reachable from outside. Now I need a solution: how can I fix it? Even a small suggestion would mean a million to me. Thank you. Regards, Udhyan
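
    A sketch of the two things usually worth checking on Master 2 for this symptom: whether mysqld is listening on a LAN-reachable address at all, and whether the Windows firewall lets 3306 in (the rule name is arbitrary; run in an elevated prompt):

        REM is mysqld bound to 0.0.0.0:3306, or only to 127.0.0.1:3306?
        netstat -ano | findstr :3306

        REM open the port in Windows Firewall on Server 2008
        netsh advfirewall firewall add rule name="MySQL 3306" dir=in action=allow protocol=TCP localport=3306

    If netstat shows mysqld bound only to 127.0.0.1, look for bind-address or skip-networking entries in my.ini on Master 2.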


  • SQL Server slow in production environment

    - by Lieven Cardoen
    I have a weird problem in a customer's production environment. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server. The data, log, and filestream files are on another storage server (data and filestream together, and the log on a separate server). In our local test environment, there's one particular query that executes with these durations (first we clear the cache):

        300 ms (the first time it takes longer, but from then on it's cached)
        20 ms
        15 ms
        17 ms

    In the customer's production environment, the SQL Server is more powerful; these are the durations (I didn't have the rights to clear the cache - will try this tomorrow):

        2500 ms
        2600 ms
        2400 ms

    The servers in the customer's production environment are more powerful, but they do use virtual servers (we don't). What could be the cause? Not enough memory? Fragmentation? Physical storage? How would you tackle this performance problem?

    EDIT: Some people have asked me if the data set is equal, and it is. I restored their database in our environment. It's true that this was the first thing I looked at. (@Everyone: I added the edit because it will be the first thing that many will think of.)
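
    For the cache-clearing comparison mentioned above, a sketch of the usual commands (run via sqlcmd with sufficient rights; the server name is a placeholder). Note that dropping the buffer and plan caches briefly slows everything on a production box, so it needs the customer's agreement:

        sqlcmd -S PRODSERVER -E -Q "CHECKPOINT; DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE;"

    Comparing the actual execution plans and SET STATISTICS IO output of the query on both systems, against the same restored database, is another low-risk way to see whether the extra two seconds is spent on reads or on waiting for the virtualized storage.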


  • High Sqlservr.exe Memory Usage

    - by user18576
    I have a problem with sqlservr.exe (SQL Server 2008). It uses a lot of memory: in Windows Task Manager, sqlservr.exe shows a Mem Usage of 8 GB of RAM, and I don't know how to fix it. I got the following metrics for the server using Perfmon:

        SQLServer:Buffer Manager       Buffer cache hit ratio         13
        SQLServer:Buffer Manager       Page lookups/sec               46026128096
        SQLServer:Buffer Manager       Free pages                     129295
        SQLServer:Buffer Manager       Total pages                    997309
        SQLServer:Buffer Manager       Target pages                   1053560
        SQLServer:Buffer Manager       Database pages                 484117
        SQLServer:Buffer Manager       Reserved pages                 0
        SQLServer:Buffer Manager       Stolen pages                   383897
        SQLServer:Buffer Manager       Lazy writes/sec                384369
        SQLServer:Buffer Manager       Readahead pages/sec            69315446
        SQLServer:Buffer Manager       Page reads/sec                 71280353
        SQLServer:Buffer Manager       Page writes/sec                12408371
        SQLServer:Buffer Manager       Checkpoint pages/sec           7053801
        SQLServer:Buffer Manager       Page life expectancy           735262
        SQLServer:General Statistics   Active Temp Tables             161
        SQLServer:General Statistics   Temp Tables Creation Rate      3131845
        SQLServer:General Statistics   Logins/sec                     2336011
        SQLServer:General Statistics   Logouts/sec                    2335984
        SQLServer:General Statistics   User Connections               27
        SQLServer:General Statistics   Transactions                   0
        SQLServer:Access Methods       Full Scans/sec                 34422821
        SQLServer:Access Methods       Range Scans/sec                2027247756
        SQLServer:Access Methods       Workfiles Created/sec          49771600
        SQLServer:Access Methods       Worktables Created/sec         28205828
        SQLServer:Access Methods       Index Searches/sec             4890715219
        SQLServer:Access Methods       FreeSpace Scans/sec            21178928
        SQLServer:Access Methods       FreeSpace Page Fetches/sec     21226653
        SQLServer:Access Methods       Pages Allocated/sec            41483279
        SQLServer:Access Methods       Extents Allocated/sec          4743504
        SQLServer:Access Methods       Extent Deallocations/sec       4806606
        SQLServer:Access Methods       Page Deallocations/sec         41419137
        SQLServer:Access Methods       Page Splits/sec                23834799
        SQLServer:Memory Manager       SQL Cache Memory (KB)          29160
        SQLServer:Memory Manager       Target Server Memory (KB)      8428480
        SQLServer:Memory Manager       Total Server Memory (KB)       7978472

    Could somebody help me, please? I really want to know the cause of the above.
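
    A Target Server Memory of roughly 8 GB suggests 'max server memory' is still at its default (effectively unlimited), so the buffer pool will grow toward all of the installed RAM, which is normal SQL Server behaviour rather than a leak. A sketch of capping it (the 6144 MB value is only an example; pick a limit that leaves enough for the OS and other processes, and run with sysadmin rights):

        sqlcmd -S SERVERNAME -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;"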


  • Cassandra Remote Connection

    - by Lyuben Todorov
    I'm not managing to connect to Cassandra from outside machines. The database is hosted on a Windows machine, and I'm trying to connect from a Mac (but this shouldn't cause problems). A local connection works:

        C:\cassandra\bin>cassandra-cli
        Starting Cassandra Client
        Connected to: "Test Cluster" on 127.0.0.1/9160
        Welcome to Cassandra CLI version 1.1.6

    But it fails from other machines on the same network:

        bin/cassandra-cli --host 192.168.0.10 --port 9160
        org.apache.thrift.transport.TTransportException: java.net.ConnectException: Operation timed out
                at org.apache.thrift.transport.TSocket.open(TSocket.java:183)
                at org.apache.thrift.transport.TFramedTransport.open(TFramedTransport.java:81)
                at org.apache.cassandra.cli.CliMain.connect(CliMain.java:70)
                at org.apache.cassandra.cli.CliMain.main(CliMain.java:246)
        Exception connecting to 192.168.0.10/9160. Reason: Operation timed out.
        Welcome to Cassandra CLI version 1.2.0-beta3
        Type 'help;' or '?' for help.
        Type 'quit;' or 'exit;' to quit.

    There is a router on the network, but these ports have been triggered: 1024, 7000, 7001, 7199, 9160. The same ports were forwarded to 192.168.0.10 (where Cassandra is hosted). The Cassandra version is 1.0.7, and these are the settings I think I need to change in cassandra.yaml:

        listen_address: 192.168.0.10
        rpc_address:

    I'm not really sure if I've missed any steps. Any help would be appreciated.
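
    A sketch of the check worth doing first (values taken from the question): when rpc_address is left blank, Cassandra falls back to whatever the machine's hostname resolves to, and if that ends up on the localhost/loopback entry the Thrift port is only reachable locally, which matches the timeout seen from other machines. After setting rpc_address to 192.168.0.10 (or 0.0.0.0) and restarting the service, confirm the listener on the Windows host:

        netstat -an | findstr 9160

    If the listener shows 127.0.0.1:9160 rather than 192.168.0.10:9160 or 0.0.0.0:9160, the yaml change hasn't taken effect; only then is it worth looking at the router/port-forwarding side.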


  • MySQL tmpdir on /dev/shm with SELinux

    - by smorfnip
    On RHEL5, I have a small MySQL database that has to write temp files. To speed up this process, I would like to move the temporary directory to /dev/shm by putting the following line into my.cnf:

        tmpdir=/dev/shm/mysqltmp

    I can create /dev/shm/mysqltmp just fine and do:

        chown mysql:mysql /dev/shm/mysqltmp
        chcon --reference /tmp/ /dev/shm/mysqltmp

    I've tried to make SELinux happy by applying the same settings that are in effect for /tmp/ (and /var/tmp/), which is presumably where MySQL is writing its tmp files if tmpdir is undefined. The problem is that SELinux complains about MySQL having access to that directory. I get the following in /var/log/messages:

        SELinux is preventing mysqld (mysqld_t) "getattr" to /dev/shm (tmpfs_t).
        SELinux is a hard mistress.

        Details:
        Source Context          root:system_r:mysqld_t
        Target Context          system_u:object_r:tmpfs_t
        Target Objects          /dev/shm [ dir ]
        Source                  mysqld
        Source Path             /usr/libexec/mysqld
        Port                    <Unknown>
        Host                    db.example.com
        Source RPM Packages     mysql-server-5.0.77-3.el5
        Target RPM Packages
        Policy RPM              selinux-policy-2.4.6-255.el5_4.1
        Selinux Enabled         True
        Policy Type             targeted
        MLS Enabled             True
        Enforcing Mode          Enforcing
        Plugin Name             catchall_file
        Host Name               db.example.com
        Platform                Linux db.example.com 2.6.18-164.2.1.el5 #1 SMP Mon Sep 21 04:37:42 EDT 2009 x86_64 x86_64
        Alert Count             46
        First Seen              Wed Nov 4 14:23:48 2009
        Last Seen               Thu Nov 5 09:46:00 2009
        Local ID                e746d880-18f6-43c1-b522-a8c0508a1775

    ls -lZ /dev/shm shows:

        drwxrwxr-x  mysql mysql system_u:object_r:tmp_t    mysqltmp

    and the permissions for /dev/shm itself are:

        drwxrwxrwt  root root system_u:object_r:tmpfs_t   shm

    I've also tried chcon -R -t mysqld_t /dev/shm/mysqltmp and setting the group on /dev/shm to mysql, with no better results. Shouldn't it be enough to tell SELinux, hey, this is a temp directory just like MySQL was using before? Short of turning off SELinux, how do I make this work? Do I need to edit SELinux policy files?
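
    For what it's worth, a sketch of the approach worth trying before editing policy sources by hand. The mysqld_tmp_t type is assumed to be the one the targeted policy uses for MySQL's temp files (check with seinfo or by looking at labels under /tmp after a normal run), and because /dev/shm is a tmpfs recreated at boot, the directory creation and restorecon step need to be repeated from an init script:

        # label the directory with the MySQL temp-file type and apply it
        semanage fcontext -a -t mysqld_tmp_t "/dev/shm/mysqltmp(/.*)?"
        restorecon -Rv /dev/shm/mysqltmp

        # if AVC denials remain (e.g. getattr on /dev/shm itself), build a small local module from them
        grep mysqld /var/log/audit/audit.log | audit2allow -M mysqltmplocal
        semodule -i mysqltmplocal.pp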


  • Can't remotely connect through SQL Server Management Studio

    - by FAtBalloon
    I have set up a SQL Server 2008 Express instance on a dedicated Windows 2008 server hosted by 1and1.com. I cannot connect remotely to the server through Management Studio. I have taken the steps below and am beyond any further ideas. I have researched the site and cannot figure anything else out, so please forgive me if I missed something obvious, but I'm going crazy. Here's the lowdown:

        - The SQL Server instance is running and works perfectly when working locally.
        - In SQL Server Management Studio, I have checked the box "Allow Remote Connections to this Server".
        - I have removed any external hardware firewall settings from the 1and1 admin panel.
        - Windows Firewall on the server has been disabled, but just for kicks I added an inbound rule that allows all connections on port 1433.
        - In the SQL Native Client configuration, TCP/IP is enabled. I also made sure the "IP1" entry with the server's IP address had a 0 for dynamic port, but I deleted it and added 1433 in the regular TCP Port field. I also set the "IPALL" TCP Port to 1433.
        - SQL Server Browser is also running, and I also tried adding an ALIAS in the configuration. I restarted SQL Server after I set this value.
        - Doing a "netstat -ano" on the server machine returns:

              TCP    0.0.0.0:1433    LISTENING
              UDP    0.0.0.0:1434    LISTENING

        - I do a port scan from my local computer and it says that the port is FILTERED instead of LISTENING.
        - I also tried to connect from Management Studio on my local machine and it throws a connection error. I tried the following server names, with both SQL Server and Windows Authentication marked in the database security:

              ipaddress\SQLEXPRESS,1433
              ipaddress\SQLEXPRESS
              ipaddress
              ipaddress,1433
              tcp:ipaddress\SQLEXPRESS
              tcp:ipaddress\SQLEXPRESS,1433
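
    Since netstat already shows a listener on 0.0.0.0:1433, the "FILTERED" port-scan result points at something dropping packets before they reach SQL Server. A quick sketch for narrowing down which side is blocking (the IP and PID are placeholders; the tasklist check just confirms the 1433 listener really is sqlservr.exe):

        REM from the remote machine: a blank screen means the TCP connection opened, a timeout means it was filtered upstream
        telnet <server-ip> 1433

        REM on the server: note the PID owning 0.0.0.0:1433, then confirm the process name
        netstat -ano | findstr :1433
        tasklist /fi "PID eq <pid-from-netstat>"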


  • How could Google Latitude find my exact PC location with no GPS or public wifi?

    - by Mike
    I found a similar question here, but I still don't get it. You see, I live in a small town, and every time I check my IP location via online services or speed-test websites, my location appears to be my ISP's server location (which in my case is 250 miles away). But when I tried Google Latitude, it pinpointed my exact location to within less than 100 meters! I use Windows Vista and Google Chrome, and when I got the message that "Google is trying to locate you", I agreed just to check what the result would be. It was scary, very scary!

    What I've come up with after reading the above link is that Google has a kind of extensive WiFi location database. That would be understandable in the case of public and open WiFi networks that are used by a lot of people. Some of them might be running applications that gather location data, and somehow this information ends up in giant Google databases. From those, Google could pinpoint a WiFi location based on its MAC address, along with the bits of info gathered from various sources. The issue here is that my WiFi is private; I don't even broadcast my WiFi name. So how on earth did Google find my exact PC location? Please break down the answer in layman's terms as much as possible.

