Search Results

Search found 30300 results on 1212 pages for 'temporal database'.

Page 525/1212

  • Powershell Exchange script returning inconsistent results - PS weirdness

    - by Ian
    Maybe someone can shed some light on a bit of PowerShell weirdness I've come across and can't explain. This PS script returns a list of Exchange databases, their sizes, and the number of mailboxes in each one:

        Get-MailboxDatabase | Select Server, StorageGroupName, Name, @{Name="Size (GB)";Expression={$objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" + $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1048576KB; [math]::round($size, 2)}}, @{Name="Size (MB)";Expression={$objitem = (Get-MailboxDatabase $_.Identity); $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" + $objItem.EdbFilePath.PathName.Remove(0,2); $size = ((Get-ChildItem $path).length)/1024KB; [math]::round($size, 2)}}, @{Name="No. Of Mbx";expression={(Get-Mailbox -Database $_.Identity | Measure-Object).Count}} | Format-Table -AutoSize

    If I add a simple 'sort name' before the 'Format-Table', the resulting table contains blanks where the database sizes and number of mailboxes should appear (not zeros, blank empty space), but only in some rows, not all rows. Some rows contain numbers! If I put the '| sort name |' after the initial 'Get-MailboxDatabase' it works fine.

    What's even weirder is if I do the following:

    1. Execute the above command
    2. Add the sort before Format-Table and execute the new command
    3. Execute the initial command again

    What I see is different amounts in each of the three cases, all of which are incorrect. Yet 1 and 3 are the same command and the only difference with 2 is a sort; 1 and 3 should, at a minimum, return the same results. I get blanks where I should have MBs. If I add the sort after the Get-MailboxDatabase, it always returns identical results (as it should). Can anyone suggest an explanation as to what may be going on?

    If it's of any help in reading the expression, I've reformatted it here to make it a bit more readable:

        Get-MailboxDatabase | Select Server, StorageGroupName, Name,
            @{Name="Size (GB)";Expression={
                $objitem = (Get-MailboxDatabase $_.Identity);
                $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" + $objItem.EdbFilePath.PathName.Remove(0,2);
                $size = ((Get-ChildItem $path).length)/1048576KB;
                [math]::round($size, 2)
            }},
            @{Name="Size (MB)";Expression={
                $objitem = (Get-MailboxDatabase $_.Identity);
                $path = "`\`\" + $objitem.server + "`\" + $objItem.EdbFilePath.DriveName.Remove(1).ToString() + "$" + $objItem.EdbFilePath.PathName.Remove(0,2);
                $size = ((Get-ChildItem $path).length)/1024KB;
                [math]::round($size, 2)
            }},
            @{Name="No. Of Mbx";expression={
                (Get-Mailbox -Database $_.Identity | Measure-Object).Count
            }} |
            Format-Table -AutoSize
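
    A minimal sketch of the ordering that is reported above as reliable, with Sort-Object applied to the raw database objects before any of the calculated properties are evaluated (the size columns are omitted here purely for brevity):

        Get-MailboxDatabase | Sort-Object Name |
            Select-Object Server, StorageGroupName, Name,
                @{Name="No. Of Mbx"; Expression={ (Get-Mailbox -Database $_.Identity | Measure-Object).Count }} |
            Format-Table -AutoSize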

    Read the article

  • SQL 2000 and group names

    - by Nasa
    I have a SQL 2000 server which has databases; under the Users section of each database object I have some NT 4.0 groups. These groups were migrated over to Active Directory some time ago using ADMT with SID history, and the original source domain groups have since been deleted. The access shown is olddomain\groupname. I don't know why; if these were NTFS permissions they would update automatically to target\groupname. The users in the AD domain still have access to the database as they are members of the migrated group (target\groupname). I was wondering: 1) Why does the old group (source\groupname) show up when it doesn't exist anymore, yet access is still granted to the target group? 2) Is there an easy way to update the group name from source\groupname to target\groupname? Thanks for any help.
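
    A sketch of one manual way to tidy this up on SQL Server 2000, assuming the goal is simply to grant the migrated AD group directly and then drop the orphaned NT 4.0 entry (the database name, role, and group names below are placeholders):

        -- Add the migrated AD group as a login and as a user in the database
        EXEC sp_grantlogin 'TARGET\groupname'
        USE MyDatabase
        EXEC sp_grantdbaccess 'TARGET\groupname'

        -- Re-grant whatever rights the old group had, e.g. a fixed database role
        EXEC sp_addrolemember 'db_datareader', 'TARGET\groupname'

        -- Remove the orphaned source-domain entries
        EXEC sp_revokedbaccess 'OLDDOMAIN\groupname'
        EXEC sp_revokelogin 'OLDDOMAIN\groupname'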

    Read the article

  • ORA-1034 Oracle Not Found

    - by Venkata
    I have recently installed Oracle 11g on the Windows 7 Home Premium operating system. I logged in to Windows as an administrator and am trying to create a database using the DBCA wizard, which I also started as administrator. Everything goes fine until the last step, but when the wizard actually creates the database, the creation fails with an ORA-01034 "ORACLE not available" error. All my Oracle services look up and running. I have consulted the documentation and forums, but I have no clue why I am running into the error.
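
    ORA-01034 generally means the client reached Oracle software but the target instance itself is not started. A quick sanity check from a command prompt after the wizard fails, assuming the SID being created is ORCL (substitute your own):

        C:\> set ORACLE_SID=ORCL
        C:\> sqlplus / as sysdba
        SQL> STARTUP
        SQL> SELECT status FROM v$instance;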

    Read the article

  • Building a Data Warehouse

    - by Paul
    I've seen tutorials, articles, and posts on how to build data warehouses with star and snowflake schemas, denormalization of OLTP databases, fact and dimension tables, and so on. I've also seen comments like: "Star schemas are for data marts, at best. There is absolutely no way a true enterprise data warehouse could be represented in a star schema, or snowflake either." I want to create a database that will serve Reporting Services and maybe (if that isn't enough) install Analysis Services and extract reports and data from cubes. My question is: is it really necessary to redesign my current database and follow the star/snowflake schemas with fact and dimension tables? Thank you
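
    For reference, a minimal sketch of what "fact and dimension tables" means in a star schema, with purely hypothetical table names (a real design would have more dimensions, surrogate keys, and an ETL process feeding them):

        CREATE TABLE DimDate (
            DateKey   INT PRIMARY KEY,   -- e.g. 20100514
            FullDate  DATETIME,
            CalMonth  INT,
            CalYear   INT
        );

        CREATE TABLE DimProduct (
            ProductKey  INT PRIMARY KEY,
            ProductName VARCHAR(100),
            Category    VARCHAR(50)
        );

        CREATE TABLE FactSales (
            DateKey     INT REFERENCES DimDate(DateKey),
            ProductKey  INT REFERENCES DimProduct(ProductKey),
            Quantity    INT,
            SalesAmount DECIMAL(18,2)
        );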

    Read the article

  • How to generate the right password format for Apache2 authentication in use with DBD and MySQL 5.1?

    - by Walkman
    I want to authenticate users for a folder from a MySQL 5.1 database with AuthType Basic. The passwords are stored in plain text (they are not really passwords, so it doesn't matter). The password format for Apache, however, only allows SHA1 and MD5 on Linux systems, as described here. How could I generate the right format with an SQL query? It seems the Apache format is a binary format with a length of 20, but the MySQL SHA1 function returns a 40-character string. My SQL query is something like this:

        SELECT CONCAT('{SHA}', BASE64_ENCODE(SHA1(access_key)))
        FROM user_access_keys
        INNER JOIN users ON user_access_keys.user_id = users.id
        WHERE name = %s

    where BASE64_ENCODE is a stored function (MySQL 5.1 doesn't have TO_BASE64 yet). This query returns a 61-byte BLOB, which is not the same format that Apache uses. How could I generate the same format? You can suggest another method for this too. The point is that I want to authenticate users from a MySQL 5.1 database using plain text as the password.
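
    The mismatch is most likely that Apache's {SHA} scheme expects Base64 of the raw 20-byte digest, while MySQL's SHA1() returns 40 hex characters, so the hex string needs to be converted back to binary with UNHEX() before encoding. A sketch under that assumption, reusing the question's BASE64_ENCODE stored function:

        SELECT CONCAT('{SHA}', BASE64_ENCODE(UNHEX(SHA1(access_key))))
        FROM user_access_keys
        INNER JOIN users ON user_access_keys.user_id = users.id
        WHERE name = %s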

    Read the article

  • MySQL on Windows - how do I set the wait_timeout for connections using named pipes?

    - by gustafc
    I use a MySQL database running on a Windows box, and for performance reasons I'm connecting to it using named pipes. The (Java) application using the database (through Hibernate) can let the connection lie idle for quite a long time, which causes the connection to fail with the following message:

        com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 33 558 297 milliseconds ago. The last packet sent successfully to the server was 33 558 297 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.

    autoReconnect unfortunately has no effect (and neither does autoReconnectForPools), but the wait_timeout docs state that wait_timeout only applies "to TCP/IP and Unix socket file connections, not to connections made via named pipes, or shared memory". How can I change the wait_timeout for named pipes?
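
    If the named-pipe timeout itself can't be raised, the error message's other suggestion (testing connection validity before use) can be handled on the connection-pool side instead. A sketch assuming Hibernate 3 with the c3p0 pool on the classpath; the values are only illustrative:

        # hibernate.properties
        hibernate.connection.provider_class=org.hibernate.connection.C3P0ConnectionProvider
        hibernate.c3p0.min_size=5
        hibernate.c3p0.max_size=20
        # test idle connections every 5 minutes so stale ones are replaced
        hibernate.c3p0.idle_test_period=300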

    Read the article

  • Monit can't detect MySQL, but I can

    - by Matchu
    Monit is configured to watch MySQL on localhost at port 3306:

        check process mysqld with pidfile /var/lib/mysql/li175-241.pid
           start program = "/etc/init.d/mysql start"
           stop program = "/etc/init.d/mysql stop"
           if failed port 3306 protocol mysql then restart
           if 5 restarts within 5 cycles then timeout

    My application, which is configured to connect to MySQL via localhost:3306, is running just fine and can access the database. I can even use MySQL Query Browser to connect to the database remotely via port 3306. The port is totally open and possible to connect to. Therefore, I'm pretty darn certain that it's running. However, running monit -v reveals that Monit cannot detect MySQL on that port:

        'mysqld' failed, cannot open a connection to INET[localhost:3306] via TCP

    This happens consistently, until Monit decides not to track MySQL anymore, as configured. How can I begin to troubleshoot this issue?
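
    Two variations worth trying in the monit control file: testing an explicit IPv4 address instead of the name "localhost" (in case name resolution differs for the monit daemon), or testing the UNIX socket instead of TCP. A sketch; the socket path is an assumption and should be checked against my.cnf:

        check process mysqld with pidfile /var/lib/mysql/li175-241.pid
           start program = "/etc/init.d/mysql start"
           stop program = "/etc/init.d/mysql stop"
           # test over an explicit IPv4 address rather than "localhost"
           if failed host 127.0.0.1 port 3306 protocol mysql then restart
           # or test the UNIX socket instead of TCP
           if failed unixsocket /var/run/mysqld/mysqld.sock protocol mysql then restart
           if 5 restarts within 5 cycles then timeout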

    Read the article

  • Rolling Back Microsoft CRM during testing

    - by npeterson
    Process-related question: currently we have a multi-tenant installation of MS CRM 4.0 on three servers: Dev, Test, and Live. We are actively working on customizing one of the tenants, but the others are static. During user testing, we often find it necessary to 'start fresh' in one of the tenants. Is it better to try to delete the changes from the tenant (created accounts, leads, etc.), or just revert the database to a backup from before the testing started? Are there compelling reasons why bulk delete is not advisable for MS CRM, or why reverting the database frequently could cause issues?
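
    If the revert route is chosen, it amounts to a standard SQL Server restore of that tenant's organization database from the pre-test backup. A sketch with placeholder database and file names (CRM 4.0 also keeps configuration in the MSCRM_CONFIG database, which is untouched here):

        USE master;
        ALTER DATABASE Tenant_MSCRM SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        RESTORE DATABASE Tenant_MSCRM
            FROM DISK = N'D:\Backups\Tenant_MSCRM_pretest.bak'
            WITH REPLACE, RECOVERY;
        ALTER DATABASE Tenant_MSCRM SET MULTI_USER;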

    Read the article

  • multiple file systems for mysql

    - by RainDoctor
    Does MySQL support multiple file systems for a single database, with most of the tables being MyISAM? Context: we have a 1.5 TB MySQL database, which is growing at a rate of 200 GB per month. The storage is direct-attached, and its drive slots are almost full. I can add another DAS and increase the file system, but resizing volumes, resizing file systems, etc. is getting messy. Is there a concept of "tablespace, datafile" (like in Oracle) in the MySQL world? Or how do you manage a MySQL db with these kinds of constraints?
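
    MyISAM has no Oracle-style tablespaces, but individual tables can be placed on another file system with the per-table DATA DIRECTORY / INDEX DIRECTORY options (MySQL creates symlinks in the data directory). A sketch with placeholder names; this only applies at CREATE TABLE time and requires symbolic-link support on the host:

        CREATE TABLE archive_2010 (
            id   INT PRIMARY KEY,
            body MEDIUMTEXT
        ) ENGINE=MyISAM
          DATA DIRECTORY  = '/mnt/das2/mysql-data'
          INDEX DIRECTORY = '/mnt/das2/mysql-index';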

    Read the article

  • weird postgresql log entries

    - by hyperboreean
    I am trying to figure out why I get some weird entries in my postgresql log after I do a restart:

        2010-05-14 11:30:25 EEST LOG: database system was shut down at 2010-05-14 11:30:22 EEST
        2010-05-14 11:30:25 EEST LOG: autovacuum launcher started
        2010-05-14 11:30:25 EEST LOG: database system is ready to accept connections
        2010-05-14 11:30:25 EEST LOG: incomplete startup packet
        2010-05-14 11:30:40 EEST WARNING: there is already a transaction in progress
        2010-05-14 11:30:40 EEST LOG: could not receive data from client: Connection reset by peer
        2010-05-14 11:30:40 EEST LOG: unexpected EOF on client connection

    First, there's the "incomplete startup packet" entry, which bugs me. Does anyone have any idea why this happens? And also, this one is very strange:

        2010-05-14 11:30:40 EEST WARNING: there is already a transaction in progress

    ...

    Read the article

  • How can Windows defragmentation tools cause internal fragmentation in SQL Server?

    - by Martin
    I was just reading this article, where the author talks about the file system fragmentation that can be caused by growing database files. There was one bit that I didn't quite follow:

        What about Windows defragmentation tools? Although you can use a Windows defragmentation tool to defragment your database files, these tools simply move chunks of files around to get them contiguous. This moving of chunks of files can cause internal fragmentation that you might not be able to resolve easily.

    Is the author saying here that the disc defragmenter makes no attempt to put the chunks of files in the correct sequence, or have I misunderstood? If he is saying that, then is this a limitation of all disc defragmenter utilities, even commercial ones?

    Read the article

  • MySQL mistake with grant option

    - by John Tate
    I am unsure, reading the MySQL documentation, whether creating a user with the GRANT option will give them the power to create users and grant privileges, or change the privileges of other users' databases. I have been creating databases for users like this:

        CREATE DATABASE user;
        USE user;
        GRANT ALL PRIVILEGES ON *.* TO 'user'@'localhost' IDENTIFIED BY 'password' WITH GRANT OPTION;

    Is this the best way of doing it, or have I just given my users too much control? They are people I am hosting sites for; thankfully, at this point they are trustworthy. I use quotas.

    Edit: I have realized I have been granting users access to all databases. This is obviously stupid; I should be using this:

        GRANT ALL PRIVILEGES ON database.* TO 'user'@'localhost' IDENTIFIED BY 'password';

    What is the simplest way to revoke privileges for every user except root so I can quickly end this catastrophic rookie mistake?
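
    A sketch of one way to see what was handed out and to pull the global grants back, assuming the accounts were all created at 'localhost' as in the question (account and database names are placeholders; each account is then re-granted only its own database):

        -- list the accounts and inspect one of them
        SELECT user, host FROM mysql.user WHERE user <> 'root';
        SHOW GRANTS FOR 'someuser'@'localhost';

        -- remove the global privileges and the GRANT OPTION from that account
        REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'someuser'@'localhost';

        -- give back access to just that user's own database
        GRANT ALL PRIVILEGES ON someuser_db.* TO 'someuser'@'localhost';
        FLUSH PRIVILEGES;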

    Read the article

  • Forwarding ports with ssh on Linux

    - by Patrick Klingemann
    I have a database server, let's call it: dbserver. I have a web server with access to my dbserver, let's call it: webserver. I have a development machine that I'd like to use to access a database on dbserver, let's call it: dev.

    dbserver has a firewall rule set to allow TCP requests from webserver to dbserver:1433. I'd like to set up a tunnel from dev:1433 to dbserver:1433, so all requests to 1433 on dev are passed along to dbserver:1433. My sshd_config on webserver has the following rules set:

        AllowTcpForwarding yes
        GatewayPorts yes

    This is what I've tried:

        ssh -v -L localhost:1433:dbserver:1433 webserver

    In another terminal:

        telnet localhost 1433

    Results in:

        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.
        Connection closed by foreign host.

    Any idea what I'm doing wrong here? Thanks in advance!
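
    A local connection that opens and is then immediately dropped usually means ssh accepted it but webserver could not reach dbserver:1433 from its side; the -v output on the ssh session should show the failed open for that forwarded channel. A sketch that pins the listener to IPv4 and keeps ssh in the foreground (the user name is a placeholder):

        # from dev: forward local 1433 to dbserver:1433 via webserver
        ssh -v -N -L 127.0.0.1:1433:dbserver:1433 user@webserver

        # in another terminal on dev, force IPv4 so telnet doesn't try ::1 first
        telnet 127.0.0.1 1433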

    Read the article

  • SQL Server Replication Agent priority

    - by Wikser
    Every hour a server replicates SQL Server data with an external web server. During this time, which takes about 2-5 minutes, the database seriously slows down. Colleagues who work with the front-end applications on another terminal server even regularly start complaining. The databases are also synchronously mirrored (via SQL Server mirroring, no replication) to a third server. Note that 99% of the data is replicated outgoing, so the server should rarely need to update its own data. As the (merge and transactional) replication tasks are not time-critical, I would like to reduce their priority or somehow slow them down, so they don't affect database performance as much. How would you implement that?

    Read the article

  • Exchange Server Contact Categories - How to Remove/Update All

    - by ben
    I've been tasked with cleaning up our company's contact database sitting on our Exchange Server 2003. The rub is that this database of contacts has been neglected for the past couple of years and is now a bit messy. My issue is I have, say, a person named 'Bob Smith', and Bob is listed in multiple categories, 'Students' and 'Student'. I would really like to remove the 'Student' category from Bob and anyone else out there that has the same category. My question is: is there an easy way to edit the master category lists for contacts on the Exchange server? I feel like I am missing something simple here, since if I were playing with the categories that I use to, say, organize email, it's very easy to do, but I can't seem to find the proper way to do it for categories that are up on the server. I'm attempting to work through Outlook 2007 and Exchange 2003. Any insight would be very helpful, as I really don't want to change 8000+ contacts by hand.

    Read the article

  • ldap sync with outlook

    - by Dr Casper Black
    I have a task to research the possibilities of LDAP as a centralized address book. I have set up OpenLDAP on Debian 5.07. I managed to search the LDAP contacts from MS Outlook 2007 (with some drawbacks, like Outlook not recognizing the street and organization fields). My question is: is it possible, and how, to sync data on the LDAP server with applications that support LDAP? I could not find any data on this topic.

    EDIT: The point is to have a centralized list of contacts that can be synced with various applications, for instance Outlook, Thunderbird, the phonebook on mobile phones, etc. The question is: is it possible to transfer (update) data on a client application from the LDAP database and vice versa? So not just to search the LDAP server's data, but to download contacts that are not available in the client application (Outlook), and to upload data to LDAP if the contact is in Outlook and not in the LDAP database, and the other way around; in other words, synchronize.

    Read the article

  • How do large blobs affect SQL delete performance, and how can I mitigate the impact?

    - by Max Pollack
    I'm currently experiencing a strange issue that my understanding of SQL Server doesn't quite mesh with. We use SQL as the file storage for our internal storage service, and our database has about half a million rows in it. Most of the files (86%) are 1 MB or under, but even on fresh copies of our database, where we simply populate the table with data for the purposes of a test, it appears that rows with large amounts of data stored in a BLOB frequently cause timeouts when our SQL Server is under load.

    My understanding of how SQL Server deletes rows is that it's a garbage collection process, i.e. the row is marked as a ghost and is later deleted by the ghost cleanup process after the changes are copied to the transaction log. This suggests to me that, regardless of the size of the data in the blob, row deletion should be close to instantaneous. However, when deleting these rows we are definitely experiencing large numbers of timeouts and astoundingly low performance. In our test data set, it's files over 30 MB that cause this issue. This is an edge case; we don't frequently encounter these, and even though we're looking into SQL FILESTREAM as a solution to some of our problems, we're trying to narrow down where these issues are originating from.

    We ARE performing our deletes inside of a transaction. We're also performing updates to metadata such as file size stats, but these exist in a separate table away from the file data itself. Hierarchy data is stored in the table that contains the file information. Really, in the end it's not so much what we're doing around the deletes that matters; we just can't find any references to low delete performance on rows that contain a large amount of data in a BLOB. We are trying to determine if this is even an avenue worth exploring, or if it has to be one of our processes around the delete that's causing the issue.

    Are there any situations in which this could occur? Is it common for a database server to come to the point of complete timeouts when many of these deletes are occurring simultaneously? Is there a way to combat this issue if it exists? (cross-posted from StackOverflow)
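
    One mitigation worth testing is deleting in small batches so that each transaction ghosts fewer large rows and lock and log pressure stay bounded, rather than removing everything in one large transaction. A sketch with hypothetical table and column names:

        DECLARE @rows INT;
        SET @rows = 1;
        WHILE @rows > 0
        BEGIN
            -- delete eligible rows 500 at a time
            DELETE TOP (500) FROM dbo.StoredFiles
            WHERE IsMarkedForDeletion = 1;

            SET @rows = @@ROWCOUNT;
        END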

    Read the article

  • How to upgrade a single instance's size without downtime

    - by Justin Meltzer
    I'm afraid there may not be a way to do this since we're not load balancing, but I'd like to know if there is any way to upgrade an EC2 EBS-backed instance to a larger size without downtime. First of all, we have everything on one instance: both our app and our database (MongoDB). This is along the lines I'm thinking: I know you can create snapshots of your EBS volumes and an AMI of your instance. We already have an AMI and we create hourly snapshots. If I spin up a new, separate instance of a larger size and then implement (not sure what the right term is here) the snapshots so that our database is up to date, I could switch the A record of our domain from the old IP address to the new one. However, I'm afraid that after copying over the data from the snapshot, by the time it takes to change the A record and have that change propagate, the data could potentially be stale. Is there a way to prevent this, and is there a better way to do this than I am suggesting?

    Read the article

  • Problem in Installing Wordpress

    - by Hajloo
    I am trying to install WordPress on a Windows client with WebPI (the Web Platform Installer provided by Microsoft). I had to stop the installation process three times and install PHP and the MySQL extension manually, but each time I continued setup with WebPI it finally showed me a success message. However, when I try to view the installed WordPress site on my client I see this:

        Your PHP installation appears to be missing the MySQL extension which is required by WordPress.

    I asked about it on Stack Overflow here, but I couldn't get the right answer. I installed everything under C:\Program Files\, so these are the locations:

        C:\Program Files\MySQL\MySQL Server 5.1
        C:\Program Files\Php
        C:\Program Files\ext

    MySQL root password: admin
    WordPress database: wordpress
    WordPress database password: 123

    Here is my php.ini
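
    That error usually means PHP isn't loading php_mysql.dll. A sketch of the php.ini lines that would typically need checking, using the C:\Program Files\ext folder listed above as the extension directory (that mapping is an assumption; the value must match wherever the extension DLLs actually live):

        ; php.ini
        extension_dir = "C:\Program Files\ext"
        extension = php_mysql.dll
        extension = php_mysqli.dll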

    Read the article

  • SQL Server Management Studio - Error connecting to remote DB

    - by Julien Poulin
    All right, here is the deal: I'm connecting to a Windows 2003 Server using a VPN. On this server there is a remote SQL Server 2005 Express engine. I can connect to the database using Visual Studio 2008. What I can't do, though, is connect to this same database with SQL Server 2005 Management Studio (Standard). I have checked the connection info a hundred times and still nothing. One thought: do VS and SSMS use the same SQL provider? Note: I'm running Windows 7 RC. I had absolutely no problem using the same config under Vista. This is the error I get when trying to connect with SSMS:

    Read the article
